Journal of Digital Social Research (Dec 2024)
Strategic misrecognition and speculative rituals in generative AI
Abstract
Public conversation around generative AI is saturated with the ‘realness question’: is the software really intelligent? At what point could we say it is thinking? I argue that attempts to define and measure those thresholds miss the fire for the smoke. The primary societal impact of the realness question comes not from the constantly deferred sentient machine of the future, but from its present form as rituals of misrecognition. Persistent confusion between plausible textual output and internal cognitive processes, and the use of mystifying language like ‘learning’ and ‘hallucination’, configure public expectations around what kinds of politics and ethics of genAI are reasonable or plausible. I adapt the notion of abductive agency, originally developed by the anthropologist Alfred Gell, to explain how such misrecognition strategically defines the terms of the AI conversation. I further argue that such strategic misrecognition is neither new nor accidental, but a central tradition in the social history of computing and artificial intelligence. This tradition runs from the originary deception of the Turing Test, famously never intended as a rigorous test of artificial intelligence, to the present array of drama and public spectacle in the form of competitions, demonstrations and product launches. The primary impact of this tradition is not to progressively clarify the nature of machine intelligence, but to constantly redefine values like intelligence in order to legitimise and mythologise our newest machines – and their increasingly wealthy and powerful owners.
Keywords