Speculative Trainers: Large Language Models and Techniques of Affirmative Speculation
This article proposes a reorientation of large language models (LLMs) towards “affirmative speculation,” exploring possibilities of speculative representation within the glitches of current chatbot implementations. Embracing LLMs’ sociohistorical and stochastic approach to language, we suggest that the serendipitous nature of word-by-word prediction affords innovative ways to experiment with discursive conventions. We present techniques of prompt engineering that test semantic limits and generate unexpected turns of expression. These techniques are designed to train LLMs and their human companions for co-speculative interactions, including: roleplaying beyond the LLM “helpful assistant” persona; translating concepts and discursive features from one disciplinary field to another, exploring conjectural mashups; simulating expert roundtables and hypothetical research conferences; encouraging associative navigation of obscure topic connections; appreciating LLM “hallucinations” as creative fictions rather than errors, embracing their potential for speculative insight; and creating innovative, as-yet-nonexistent theoretical frameworks that blend real and fictional elements. By treating LLMs as co-speculative companions, we propose alternative ways to engage with AI in interdisciplinary research and creative thought. We also attend to the ethical and environmental consequences of speculating with LLMs and argue that the measurable costs of speculation are far outweighed by the immeasurable costs of failing to speculate at all.
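To give a concrete sense of what such prompt engineering might look like in practice, the following is a minimal sketch combining two of the techniques named above: replacing the default “helpful assistant” persona and simulating an expert roundtable. It assumes the OpenAI Python SDK; the model name, persona wording, and the `speculative_roundtable` helper are illustrative inventions for this sketch, not the article’s own implementation.

```python
# A minimal sketch of two co-speculative techniques: displacing the default
# "helpful assistant" persona and staging a hypothetical expert roundtable.
# Assumes the OpenAI Python SDK (pip install openai); the model name and all
# prompt wording are illustrative, not the article's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def speculative_roundtable(topic: str, disciplines: list[str]) -> str:
    """Ask the model to stage a hypothetical roundtable across disciplines."""
    # Replace the default persona with a co-speculative one.
    persona = (
        "You are not a helpful assistant. You are a speculative interlocutor "
        "who values conjecture, conceptual mashups, and unexpected turns of "
        "expression over factual caution."
    )
    panel = ", ".join(disciplines)
    prompt = (
        f"Simulate a roundtable on '{topic}' among experts in {panel}. "
        "Let each speaker translate the topic into their own disciplinary "
        "vocabulary, then propose one as-yet-nonexistent theoretical "
        "framework that blends real and fictional elements."
    )
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model name
        temperature=1.2,  # raised to favor serendipitous word-by-word prediction
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


print(speculative_roundtable("glitch aesthetics", ["geology", "choreography", "tax law"]))
```

Raising the sampling temperature is one way to lean into the stochastic, word-by-word prediction the abstract describes; the trade-off is coherence, which is precisely the glitch space the article treats as generative rather than as error.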