AI expert and DeepMind contributor Shannon Vallor examines OpenAI's latest model, GPT-4o, through the lens of her new book, ‘The AI Mirror’. Despite only modest intellectual improvements, the model's increasingly human-like behaviour raises serious ethical concerns; as Vallor argues, AI today presents only the illusion of consciousness.
You can see Shannon Vallor debating Ken Cukier and Joscha Bach about the future of Big Tech and AI regulation in Controlling the Tech Titans, at 1:15pm on Monday 27th at HowTheLightGetsIn Hay-on-Wye 2024.
This article is presented in association with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Festival.
This week OpenAI announced GPT-4o: the latest, multimodal version of the generative AI GPT model class that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations on command, expressed in both voice and text.
This next step in the commercial rollout of AI chatbot technology might seem like a nothingburger. After all, we don’t seem to be getting any nearer to AGI, or to the apocalyptic Terminator scenarios that the AI hype/doom cycle was warning of just one year ago. But it is not benign at all: it might be the most dangerous moment in generative AI’s development.