The idea that Artificial Intelligence (AI) machines could have moral status of their own, or might one day become conscious, is a sci-fi fantasy. Ethical concerns about the use of AI in fields like healthcare and finance are legitimate. But we should not worry about conscious AI suffering, because machines are not, and never will be, conscious, writes Tim Crane. Read Thomas Metzinger's original article here.
The recent opening of an Institute for Ethics in AI at the University of Oxford is decisive proof, if any were needed, that AI ethics is now ‘A Thing’. The Oxford institute will address ethical questions about the use of AI in (e.g.) healthcare, finance and the law, in changing the way in which we relate to work — and many other things besides. AI is now so dominant in our lives (whether we realise it or not), yet its ethical implications in all these areas are not well understood. It is good news that philosophers and others are addressing these questions seriously.
However, it is common to find discussions of AI these days addressing even more ambitious questions. For example: what should we do if the machines become smarter than us? What happens if AI machines develop their own values, and these values conflict with ours? How should we treat these AI machines if they become conscious? What should their moral status be?
The problem with questions like this is not that they make no sense, but rather that their immediate sci-fi appeal can obscure the really urgent practical questions about AI and its implications. There is nothing wrong with speculations about machine consciousness or the moral status of machines as such — as a philosopher, I could hardly object to them. But some thinkers give the impression that these are questions of the same kind as the questions about AI in healthcare, finance and so on, or (even worse) that the answers to the sci-fi questions are required in order to answer the practical questions about real AI. The truth, it seems to me, is that the answers to these sci-fi questions are of no relevance to the real ethical questions, and that they are a distraction from real AI ethics.
Take the idea of machines becoming ‘smarter’ than us. Anyone with the slightest familiarity with recent AI will know that AI machines are already smarter than us. AI machines have for some time been far better than humans at chess, they have beaten the world champion at Go, and they are much better than most of us at remembering phone numbers, searching documents for information, finding the best route to your destination on public transport, and (of course) at mathematical calculations. They are computers, after all, and that’s what computers do — compute.
But of course, this is not what people mean when they talk about the machines becoming smarter. They mean that an AI might become ‘smart’ in the way that we are smart, but perhaps to an even higher degree. It might be able to think, to reason, to make decisions, to be creative, inventive, witty, reasonable, sensitive to other creatures and… whatever else it is that we are talking about when we say that someone is smart or intelligent.
Being smart or intelligent in this sense is not just having a ‘special purpose’ ability, like the ability to play chess at Grandmaster level. It involves something else: what AI researchers call ‘General Intelligence’. For the machines to become smart in the way that we are, they must have an artificial version of this general intelligence: ‘Artificial General Intelligence’ or AGI. The search for AGI is now at the heart of the work of many of the most brilliant and ambitious AI groups in the world today.
The problem, however, is that no-one in the field of AI has any plausible idea of what AGI would be, because really they have no idea what general intelligence is. (A bold claim, I know — but I would be very happy to be corrected!) My impression is that there is so little scrutiny of the idea of ‘intelligence’ itself here that it is hardly surprising that they get nowhere with AGI.
An AI machine which has general intelligence cannot just be one which is able to process more information than we do, and more quickly. Rather, it would have to have an ability to communicate in the way humans do, to note the relevance of certain things over other things, to have goals and aims, to have a sense of what matters, and to perceive what matters to others — among countless other things. These tasks would require thought, reasoning, knowledge and understanding. And inevitably, any creature that had these capacities would also be conscious.
So this brings us to the possibility of conscious AI. Here the same lessons apply as in the case of intelligence. If you are going to try and make a machine that is conscious, you had better have some idea of what consciousness is, or what you mean by ‘conscious’, and what it would take for anything to be conscious. Is consciousness a computational process, something that can be replicated on a computer? If so, what are the inputs and outputs of this process? What task is being performed by a computer that would make it a conscious computer?
Some bold thinkers have proposed that a conscious machine is one that monitors its own states in some way — just as we are ‘conscious of’ our own mental states. But my laptop has such a capacity, so by this criterion it would be conscious. Speculative fantasies aside, we all know laptops are not conscious — and we have to start these discussions with what we know.
But why think that consciousness is a computational process at all? It cannot be because all processes in the human body are computations, since they are not (digestion, for example, is not). If the idea of a conscious AI is supposed to be based on computational AI as it now stands, or something like it, then those who believe in it have to tell us why they think consciousness is some kind of computational state or process. What are these reasons?
I do not raise this question because I am already convinced that no machine can be conscious, or because I think human beings have immaterial souls, or because I have some other anti-scientific agenda. I am just asking for the reasons for thinking that AI, as it currently stands or as it will develop in the near future, will ever create a conscious machine. I don’t yet see any convincing reasons.
Because of this, I don’t think there is any point in discussing practical steps to prohibit research on AI and consciousness — for example, to prevent the potential abuse of our conscious creations, or because these creations might develop bad moral values. It’s perfectly okay to philosophise about the question of AI and consciousness. But it is not a practical question, and it demands no practical action or regulation — unlike the other questions being discussed in the Oxford institute and elsewhere. Understanding the moral risks and responsibilities associated with self-driving cars, for example, should not turn on an answer to the question of whether an AI can have genuine moral status. And appreciating the obvious fact that AIs as such have no moral status does nothing to settle those questions about driverless cars.
When I talk about the AI of the future, I am talking about something similar to the kinds of machines we have now — computers and robots, based on the principles used in robotics and state-of-the-art AI research today. I am not talking about the very idea of replicating a human being, of building something that has all the features of a human being. If such a replica is possible in principle, then of course that replica will be conscious, since consciousness is one of the features of a human being. My scepticism is not about this, but about the idea that computational AI could ever build such a replica.
The sci-fi-inspired debates about AI these days tend to polarise between enthusiasts — who see themselves as pro-science and think the prospects for AI are unlimited — and Luddites, whom the enthusiasts treat as if they were flat-earthers or climate-change deniers. But surely there is room for a middle position, based on an informed scepticism: AI can do amazing things, and will surely do more amazing things in the future. Some of these things may be dangerous to us, and those concerned with policy should consider these dangers. But there is no need to plan for merely possible situations when there is no reason, even conceding all the triumphs of AI, to think that they will ever come about.