Artificial intelligence can be so impressive that people think it might one day acquire human intelligence and, with it, consciousness. But AI can be far more intelligent than humans without ever being conscious. And quite apart from the fact that we have no idea how to create conscious AI, doing so might not even be desirable. We fool ourselves if we think conscious beings are the exemplar of intelligence in the universe, argues Susan Schneider in this interview with iai News.
If we define consciousness along the lines of Thomas Nagel as the inner feel of existence, the fact that for some beings “there is something it is like to be them”, is it outlandish to believe that Artificial Intelligence, given what it is today, can ever be conscious?
The idea of conscious AI is not outlandish. Yet I doubt that today’s well-known AI companies have built, or will soon build, systems that have conscious experiences. In contrast, we Earthlings already know how to build intelligent machines: machines that recognise visual patterns, prove theorems, generate creative images, chat intelligently with humans, and so on. The question is whether, and how, the gap between Big Tech's ability to build intelligent systems and its ability (or lack thereof) to build conscious systems will narrow.
Humankind is on the cusp of building “savant systems”: AIs that outthink humans in certain respects but that also have radical deficits, such as in moral reasoning. If I had to bet, I’d say savant systems already exist, kept underground and unbeknownst to the public. In any case, savant systems will probably emerge, or have already emerged, before conscious machines are developed, assuming that conscious machines can be developed at all.
___
There is no reason to assume that sophisticated AI will inevitably be conscious.
___