The dangerous illusion of AI consciousness

The false dawn of machine minds

AI expert and DeepMind contributor Shannon Vallor examines OpenAI's latest GPT-4o model, drawing on the ideas of her new book, ‘The AI Mirror’. Despite only modest intellectual improvements over its predecessor, GPT-4o's human-like behaviour raises serious ethical concerns; as Vallor argues, today's AI presents only the illusion of consciousness.

You can see Shannon Vallor debating Ken Cukier and Joscha Bach about the future of Big Tech and AI regulation in Controlling the Tech Titans, 1:15pm on Monday 27th at HowTheLightGetsIn Hay-on-Wye 2024.

This article is presented in association with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Festival.


This week OpenAI announced GPT-4o: the latest, multimodal version of the generative AI GPT model class that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations on command, expressed in both voice and text.
