The dangerous illusion of AI consciousness

The false dawn of machine minds

AI expert and DeepMind contributor Shannon Vallor explores OpenAI's latest GPT-4o model, drawing on the ideas of her new book, ‘The AI Mirror’. Despite only modest intellectual improvements, the model's human-like behaviour raises serious ethical concerns; as Vallor argues, AI today presents only the illusion of consciousness.

You can see Shannon Vallor debating Ken Cukier and Joscha Bach about the future of Big Tech and AI regulation in Controlling the Tech Titans, 1:15pm Monday 27th at HowTheLightGetsIn Hay-on-Wye 2024.

This article is presented in association with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Festival.

 

This week OpenAI announced GPT-4o: the latest, multimodal version of the GPT class of generative AI models that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations on command, expressed in both voice and text.


This next step in the commercial rollout of AI chatbot technology might seem like a nothingburger. After all, we don’t seem to be getting any nearer to AGI, or to the apocalyptic Terminator scenarios that the AI hype/doom cycle was warning of just one year ago. But it’s not benign at all—it might be the most dangerous moment in generative AI’s development.

What’s the problem? It’s far more than the ick factor of seeing yet another AI assistant marketed as a hyper-feminized, irrepressibly perky and compliant persona, one that will readily bend ‘her’ (its) emotional state to the will of the two men running the demo (plus another advertised bonus feature – you can interrupt ‘her’ all day long with no complaints!).

The bigger problem is the grand illusion of artificial consciousness, which is now likely to gain an even stronger hold on many human users of AI, thanks to the multimodal, real-time conversational capacity of a GPT-4o-enabled chatbot and others like it, such as Google DeepMind’s Gemini Live. And consciousness is not the sort of thing it is good to have grand illusions about.

As noted in a new paper from Google DeepMind researchers to which I contributed, the deliberately anthropomorphic design features of this new class of AI assistants – fluid, human-sounding voices, customized ‘personalities’ and even greater ‘memory’ of conversation history – enable “interactions that feel truly dynamic and social” (93). Now, we’re a social and curious species, so most people welcome dynamic and social interactions. How could this be a bad thing?

It might not be a bad thing, if there were much stronger guardrails to prevent people from being misled by these interactions and made even more vulnerable to manipulation by them. We’re already being scammed by deepfake audio and video calls pretending to be our parents and bosses. How resistant will we be to deception by chatbots that can mimic nearly every superficial feature of a conscious, alert companion? We know that humans already have a strong and largely involuntary tendency to attribute states of mind to objects that lack them, and that anthropomorphic design strengthens this tendency.

