The act of naming is more than a simple labeling exercise; it is a potent assertion of power with political implications. As the discourse around AI intensifies, it may be time to reassess the field’s nomenclature and its inherent biases, writes David Gunkel.
Naming is anything but a nominal operation. Nowhere is this more clearly on display than in recent debates about the moniker “artificial intelligence” (AI). Right now, in fact, it appears that AI—the technology and the scientific discipline that concerns this technology—is going through a kind of identity crisis, as leading voices in the field are beginning to ask whether the name is (and maybe always was) a misnomer and a significant obstacle to accurate understanding. “As a computer scientist,” Jaron Lanier recently wrote in a piece for The New Yorker, “I don’t like the term A.I. In fact, I think it’s misleading—maybe even a little dangerous.”
What’s in a Name?
The term “artificial intelligence” was originally proposed and put into circulation by John McCarthy in the process of organizing a scientific meeting at Dartmouth College in the summer of 1956. And it immediately gained traction. It not only succeeded in securing research funding for the event at Dartmouth but quickly became the nom célèbre for a brand-new scientific discipline.
For better or worse, McCarthy’s neologism put the emphasis on intelligence. And it is because of this that we now find ourselves discussing and debating questions like: Can machines think? (Alan Turing’s initial query) Are large language models sentient? (something that became salient with the Lemoine affair last June) And when might we have an AI that achieves consciousness? (a question posed in numerous headlines in the wake of recent innovations with generative algorithms) But for many researchers, scholars, and developers these are not just the wrong questions; they are potentially deceptive and even dangerous to the extent that they distract us with speculative matters that are more science fiction than science fact.
Since the difficulty derives from the very name “artificial intelligence,” one solution has been to select or fabricate a better or more accurate signifier. The science fiction writer Ted Chiang, for instance, recommends that we replace AI with something less “sexy,” like “applied statistics.” Others, like Emily Bender, have encouraged the use of the acronym SALAMI (Systematic Approaches to Learning Algorithms and Machine Inferences), which was originally coined by Stefano Quintarelli in an effort to avoid what he identified as the “implicit bias” residing in the name “artificial intelligence.”
Though these alternative designations may be, as Chiang argues, more precise descriptors for recent innovations with machine learning (ML) systems, neither of them would apply to or entirely fit other architectures, like GOFAI (aka symbolic reasoning) and hybrid systems. Consequently, the proposed alternatives would, at best, only describe a small and relatively recent subset of what has been situated under the designation “artificial intelligence.”
But inventing new names—whether it is something like that originally proposed by McCarthy or one of the recently proposed alternatives—is not the only way to proceed. As French philosopher and literary theorist Jacques Derrida pointed out, there are at least two different ways to designate a new concept: neologism (the fabrication of a new name) and paleonymy (the reuse of an old name). If the former has produced less than suitable results, perhaps it is time to try the latter.
The good news is that we do not have to look far or wide to find a viable alternative. One was already available at the time of the Dartmouth meeting: “cybernetics.” This term—derived from the ancient Greek word (κυβερνήτης) for the helmsman of a boat—had been introduced and developed by Norbert Wiener in 1948 to designate the science of control and communication in the animal and the machine.
Cybernetics has a number of advantages when it comes to rebranding what has been called AI. First, cybernetics does not get diverted by or lost in speculation about intelligence, consciousness, or sentience. It focuses attention solely on decision-making capabilities and processes. The principal example used throughout the literature on the subject is the seemingly mundane but nevertheless instructive thermostat. This homeostatic device can regulate temperature without knowing anything about the concept of temperature, understanding the difference between “hot” and “cold,” or needing to think (or be thought to be thinking).
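The thermostat’s homeostatic loop can be sketched in a few lines of code. Everything here—the bang-bang controller, the toy room model, and the particular constants—is an illustrative assumption, not something specified in the article; the point is only that regulation emerges from feedback, with no representation of what “temperature” means.

```python
def thermostat_step(temperature, setpoint, heater_on, hysteresis=0.5):
    """Bang-bang control: switch the heater based only on measured
    error (feedback), not on any comprehension of temperature."""
    if temperature < setpoint - hysteresis:
        return True           # too cold: turn the heater on
    if temperature > setpoint + hysteresis:
        return False          # too warm: turn the heater off
    return heater_on          # inside the dead band: keep current state


def simulate(initial_temp, setpoint, steps=50):
    """Drive a toy room model with the controller: the room gains heat
    while the heater runs and leaks heat toward a colder outside."""
    temp, heater = initial_temp, False
    for _ in range(steps):
        heater = thermostat_step(temp, setpoint, heater)
        temp += 1.0 if heater else 0.0        # heating power per step
        temp -= 0.3 * (temp - 10.0) / 10.0    # leakage toward 10° outside
    return temp


final = simulate(initial_temp=12.0, setpoint=20.0)
print(round(final, 1))  # settles into oscillation near the 20° setpoint
```

The loop never “knows” it is cold; it simply corrects a deviation, which is exactly the circular causality that cybernetics takes as its object of study.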
Second, cybernetics avoids one of the main epistemological problems and sticking points that continually frustrates AI—something philosophers call “the problem of other minds.” For McCarthy and colleagues, one of the objectives of the Dartmouth meeting—in fact, the first goal listed on the proposal—was to figure out “how to make machines use language.” This is because language use—as Turing had already operationalized with the imitation game—had been taken to be a sign of intelligence. But as John Searle demonstrated with his Chinese Room thought experiment, the manipulation of linguistic tokens can transpire without knowing anything at all about the language. Unlike AI, cybernetics can attend to the phenomenon and effect of this communicative behavior without needing to resolve or even broach the problem of other minds.
Finally, cybernetics does not make the same commitment to human exceptionalism that has been present in AI from the beginning. Because of the objectives initially listed in the Dartmouth proposal (e.g. using language, forming abstractions and concepts, solving problems reserved for humans, and self-improvement), definitions of AI tend to concentrate on the emulation or simulation of “human intelligence.” Cybernetics, by contrast, is more diverse and less anthropocentric. As the general science of control and communication in the animal and the machine, it takes a more holistic view that can accommodate a wider range of things. It is, as N. Katherine Hayles argues, a posthuman framework that is able to respond to and take responsibility for others and other forms of socially significant otherness.
Back to the Future
If “cybernetics” had already provided a viable alternative, one has to ask why the term “artificial intelligence” became the privileged moniker in the first place. The answer to this question returns us to where we began—with names and the act of naming. As McCarthy explained many years later, one of the reasons “for inventing the term ‘artificial intelligence’ was to escape association with cybernetics” and to “avoid having either to accept Norbert Wiener as a guru or having to argue with him.” Thus, the term “artificial intelligence” was as much a political decision and strategy as it was a matter of scientific designation. But for this very reason, it is entirely possible and perhaps even prudent to reverse course and face what the nascent discipline of AI had so assiduously sought to avoid. The way forward may be by going back.