A Google AI engineer has been put on leave for claiming an AI has become sentient. However, this is an illusion created by a clever language model and the human tendency to anthropomorphise, writes Gary Marcus.
Blaise Aguera y Arcas, polymath, novelist, and Google VP, has a way with words. When he found himself impressed with Google’s recent AI system LaMDA, he didn’t just say, “Cool, it creates really neat sentences that in some ways seem contextually relevant”. Instead, he said, rather lyrically, in an interview with The Economist on Thursday:
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
Nonsense. Neither LaMDA nor any of its cousins (such as GPT-3) is remotely intelligent. All they do is match patterns drawn from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap: a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun.

Indeed, someone well-known at Google, Blake Lemoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)