A Google AI engineer has been put on leave for thinking an AI has become sentient. However, this is an illusion caused by a clever language model and a human anthropomorphising, writes Gary Marcus.
Blaise Aguera y Arcas, polymath, novelist, and Google VP, has a way with words. When he found himself impressed with Google’s recent AI system LaMDA, he didn’t just say, “Cool, it creates really neat sentences that in some ways seem contextually relevant”, he said, rather lyrically, in an interview with The Economist on Thursday,
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
Nonsense. Neither LaMDA nor any of its cousins (such as GPT-3) is remotely intelligent. All they do is match patterns drawn from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered the Gullibility Gap: a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun. Indeed, someone well-known at Google, Blake Lemoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)
To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a wise-cracking chatbot impersonating a 13-year-old boy that won a scaled-down version of the Turing Test. None of the software in either of those systems has survived in modern efforts at “artificial general intelligence”, and I am not sure that LaMDA and its cousins will play any important role in the future of AI, either. What these systems do, no more and no less, is put together sequences of words, without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.
I am not saying that no software could ever connect its digital bits to the world, à la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine. Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple of weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world; they merely relate sequences of words to one another.
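For readers who want to see what that kind of “autocomplete” looks like in practice, here is a minimal sketch. LaMDA itself is not publicly available, so the openly released GPT-2 model (via the Hugging Face transformers library) stands in as an assumption; the prompt is mine. All the model does is assign a score to every token in its vocabulary as a possible continuation of the words so far.

```python
# A minimal sketch of next-word prediction, using GPT-2 as a stand-in
# for larger "language models" like LaMDA (which is not public).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I felt the ground shift under my"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The scores at the final position rank every vocabulary token
# as a candidate for the next word.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

That is the whole trick: rank likely continuations of a word sequence, pick one, repeat. Nothing in that loop refers to the world the words describe.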
If the media is fretting over LaMDA being sentient (and leading the public to do the same), the AI community categorically isn’t. We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous. Stanford economist Erik Brynjolfsson made the same point with a great analogy.