The idea that Artificial Intelligence (AI) machines should have any moral status of their own, or could one day become conscious, is a sci-fi fantasy. Ethical concerns about the use of AI in fields like healthcare and finance are legitimate. But we should not worry about conscious AI suffering, because machines are not, and never will be, conscious, writes Tim Crane. Read Thomas Metzinger's original article here.
The recent opening of an Institute for Ethics in AI at the University of Oxford is decisive proof, if any were needed, that AI ethics is now ‘A Thing’. The Oxford institute will address ethical questions about the use of AI in (e.g.) healthcare, finance and the law, in changing the way in which we relate to work, and in many other areas besides. AI is now so dominant in our lives (whether we realise it or not), yet its ethical implications in all these areas are not well understood. It is good news that philosophers and others are addressing these questions seriously.
However, it is common to find discussions of AI these days addressing even more ambitious questions. For example: what should we do if the machines become smarter than us? What happens if AI machines develop their own values, and these values conflict with ours? How should we treat these AI machines if they become conscious? What should their moral status be?
The problem with questions like this is not that they make no sense, but rather that their immediate sci-fi appeal can obscure the really urgent practical questions about AI and its implications. There is nothing wrong with speculations about machine consciousness or the moral status of machines as such — as a philosopher, I could hardly object to them. But some thinkers give the impression that these are questions of the same kind as the questions about AI in healthcare, finance and so on, or (even worse) that the answers to the sci-fi questions are required in order to answer the practical questions about real AI. The truth, it seems to me, is that the answers to these sci-fi questions are of no relevance to the real ethical questions, and that they are a distraction from real AI ethics.
Take the idea of machines becoming ‘smarter’ than us. Anyone with the slightest familiarity with recent AI will know that AI machines are already smarter than us. AI machines have for some time been far better than humans at chess, they have beaten the world champion of Go, they are much better than most of us at remembering phone numbers, searching documents for information, finding the best route to your destination on public transport, and (of course) at mathematical calculations. They are computers, after all, and that’s what computers do — compute.
But of course, this is not what people mean when they talk about the machines becoming smarter. They mean that an AI might become ‘smart’ in the way that we are smart, but perhaps to an even higher degree. It might be able to think, to reason, to make decisions, to be creative, inventive, witty, reasonable, sensitive to other creatures and… whatever else it is that we are talking about when we say that someone is smart or intelligent.
Being smart or intelligent in this sense is not just having a ‘special purpose’ ability, like the ability to play chess like a Grand Master. It involves something else: what AI researchers call ‘General Intelligence’. For the machines to become smart in the way that we are, they must have an artificial version of this general intelligence: ‘Artificial General Intelligence’ or AGI. The search for AGI is now at the heart of many of the most brilliant and ambitious AI groups in the world today.
The problem, however, is that no one in the field of AI has any plausible idea of what AGI would be, because no one really has any idea what general intelligence is. (A bold claim, I know, but I would be very happy to be corrected!) My impression is that there is so little scrutiny of the idea of ‘intelligence’ itself here that it is hardly surprising that researchers get nowhere with AGI.
An AI machine which has general intelligence cannot just be one which is able to process more information than we do, and more quickly. Rather, it would have to have an ability to communicate in the way humans do, to note the relevance of certain things over other things, to have goals and aims, to have a sense of what matters, and to perceive what matters to others — among countless other things. These tasks would require thought, reasoning, knowledge and understanding. And inevitably, any creature that had these capacities would also be conscious.