The idea that Artificial Intelligence (AI) machines should have any moral status of their own, or could one day become conscious, is a sci-fi fantasy. Ethical concerns about the use of AI in fields like healthcare and finance are legitimate. But we should not worry about conscious AI suffering, because machines are not, and never will be, conscious, writes Tim Crane. Read Thomas Metzinger's original article here.
The recent opening of an Institute for Ethics in AI at the University of Oxford is decisive proof, if any were needed, that AI ethics is now ‘A Thing’. The Oxford institute will address ethical questions about the use of AI in (e.g.) healthcare, finance and the law, in changing the way in which we relate to work, and many other things besides. AI is becoming dominant in our lives (whether we realise it or not), and its ethical implications in all these areas are not well understood. It is good news that philosophers and others are addressing these questions seriously.
However, discussions of AI these days commonly address even more ambitious questions. For example: what should we do if the machines become smarter than us? What happens if AI machines develop their own values, and these values conflict with ours? How should we treat these AI machines if they become conscious? What should their moral status be?
The problem with questions like these is not that they make no sense, but rather that their immediate sci-fi appeal can obscure the really urgent practical questions about AI and its implications. There is nothing wrong with speculations about machine consciousness or the moral status of machines as such (as a philosopher, I could hardly object to them). But some thinkers give the impression that these are questions of the same kind as the questions about AI in healthcare, finance and so on, or (even worse) that the answers to the sci-fi questions are required in order to answer the practical questions about real AI. The truth, it seems to me, is that the answers to these sci-fi questions are of no relevance to the real ethical questions, and that they are a distraction from real AI ethics.
Take the idea of machines becoming ‘smarter’ than us. Anyone with the slightest familiarity with recent AI will know that AI machines are already smarter than us. AI machines have for some time been far better than humans at chess; they have beaten the world champion at Go; and they are much better than most of us at remembering phone numbers, searching documents for information, finding the best route to a destination on public transport, and (of course) at mathematical calculations. They are computers, after all, and that is what computers do: compute.
But of course, this is not what people mean when they talk about the machines becoming smarter. They mean that an AI might become ‘smart’ in the way that we are smart, but perhaps to an even higher degree. It might be able to think, to reason, to make decisions, to be creative, inventive, witty, reasonable, sensitive to other creatures and… whatever else it is that we are talking about when we say that someone is smart or intelligent.