Big tech doesn’t want AI to become conscious

An interview with Susan Schneider

Artificial intelligence can be so impressive that people think it might one day acquire human intelligence and, with it, consciousness. But AI can be far more intelligent than humans without ever being conscious. And quite apart from the fact that we have no idea how to create conscious AI, conscious AI might not even be desirable. We fool ourselves if we think conscious beings are the exemplar of intelligence in the universe, argues Susan Schneider in this interview with iai News.

If we define consciousness along the lines of Thomas Nagel as the inner feel of existence, the fact that for some beings “there is something it is like to be them”, is it outlandish to believe that Artificial Intelligence, given what it is today, can ever be conscious?

The idea of conscious AI is not outlandish. Yet I doubt that today’s well-known AI companies have built, or will soon build, systems that have conscious experiences. In contrast, we Earthlings already know how to build intelligent machines—machines that recognise visual patterns, prove theorems, generate creative images, chat intelligently with humans, etc. The question is whether, and how, the gap between Big Tech's ability to build intelligent systems and its ability (or lack thereof) to build conscious systems will narrow.

Humankind is on the cusp of building “savant systems”: AIs that outthink humans in certain respects but have radical deficits in others, such as moral reasoning. If I had to bet, savant systems already exist, developed out of public view. In any case, savant systems will probably emerge, or have already emerged, before conscious machines are developed, assuming that conscious machines can be developed at all.

___

There is no reason to assume that sophisticated AI will inevitably be conscious.

___

Why am I focusing on savant systems? I suspect they are flying under the radar and are the form of sophisticated synthetic general intelligence most relevant to our near future. Savant systems will exhibit integration across topical and sensory domains and outperform humans in significant ways (e.g., they will have almost instant access to an immense range of facts, as with today’s large language models like GPT-3 and LaMDA), and they will likely underperform us in vivid and even unnerving ways (e.g., in moral and causal reasoning). And because they have moral deficits and can be used in military and social media contexts, they are of grave concern from an AI safety standpoint. It doesn’t take superintelligent AI for a control problem to arise.

Savant systems will not be human-level intelligences (what are often called “AGIs”), but they will nevertheless be domain-general intelligences, integrating different “sensory” capacities, such as associating linguistic commands with visual outputs. Indeed, the idea that humans will build AGI machines that functionally align with ‘human-level intelligence’ is a myth. AIs already surpass us in various domains, so why dumb them down in certain ways to align with the “human level”?
