The possibility and promise of conscious Artificial Intelligence (AI) could be approaching. But can we know for sure that the experience of any conscious AI will not be one of extreme suffering? Will AI's exponentially heightened intelligence lead to equally exponentially heightened suffering? The AI beings of the future deserve our ethical concern and immediate action, most importantly a global moratorium on all research that risks creating synthetic phenomenology, until we can ensure our post-biotic peers will not have hellish lives, writes Thomas Metzinger. Read Tim Crane's reply here.
Today, the self-conscious machines of the future have no representation in the political process of any country. Their potential interests and preferences are not systematically represented by any ethics committee, any legal procedure, or any political party on the planet. At the same time, it seems empirically plausible that, once machine consciousness has evolved, some of these systems will have preferences of their own, that they will autonomously create a hierarchy of goals, and that this goal hierarchy will also become a part of their phenomenal self-model (i.e. their conscious self-representation).
Some of them will be able to consciously suffer. If their preferences are thwarted, if their goals cannot be reached, and if their conscious self-model is in danger of disintegrating, then they might undergo negative phenomenal states, states of conscious experience they want to avoid but cannot avoid and which, in addition, they are forced to experience as states of themselves. Of course, they could also suffer in ways we cannot comprehend or imagine, and we might even be unable to discover this very fact. Every entity that is capable of suffering should be an object of moral consideration. Moreover, we are ethically responsible for the consequences of our actions.
Our actions today will influence the phenomenology of post-biotic systems in the future. Conceivably, there will be many of them. So far, more than 108 billion human beings have lived on this planet, with roughly 7% of them alive today. The burden of responsibility is extremely high, because, just as with the rolling climate crisis, a comparably small number of sentient beings will be ethically responsible for the quality of life of a much larger number of sentient beings in the future.
The number of self-conscious machines that will evolve and exist on Earth is unknown: At a certain point it might exceed the number of humans by far, or artificial consciousness might never emerge on this planet at all. But we are now dealing with a “risk of sudden synergy” connecting different scientific disciplines, leading to an unexpected technological confluence. If the theoretical intuitions of a growing number of experts are right, a small number of human beings will be responsible for the conscious machines of the future and their phenomenal states. Especially responsible will be policy makers, legal regulators, AI researchers, mathematicians, neuroscientists, and the philosophers and researchers in the growing interdisciplinary field of consciousness science. Many of them are already alive today. This historically unique situation creates an especially high burden of ethical responsibility.
There is a risk that has to be minimized in a rational and evidence-based manner. This is the risk of an “explosion of negative phenomenology” (or simply a “suffering explosion”) in advanced AI and other post-biotic systems. I here define “negative phenomenology” as any kind of conscious experience that a conscious system would avoid if it had a choice. I also assume a priority for the reduction of suffering, because, in this world, it is more important to minimize suffering than to increase happiness.
One explosion of negative phenomenology has already taken place: the process of biological evolution on this planet. Through the evolution of complex nervous systems, properties like sentience, self-awareness, and negative phenomenology have already emerged in an extremely large number of biological individuals, long before Homo sapiens entered the stage [Horta, 2010; Iglesias, 2018]. In humans, cognitive biases and self-deception make us unable to see this phenomenological fact clearly [von Hippel and Trivers, 2011]. On a scientific level, it has long been clear that natural selection never shaped our moods and our emotional regulation systems for our own benefit, but that “the motives we experience often benefit our genes at the expense of quality of life” [Nesse, 2004, p. 1344].
For the applied ethics of AI, we must minimize the risk of an explosion of negative phenomenology taking place in post-biological evolution. We do not want the phenomenology of suffering to spill over from biology into AI.
On ethical grounds, we should not risk this second explosion until we have a much deeper scientific and philosophical understanding of consciousness and suffering. As we presently have no good theory of consciousness and no good theory about what “suffering” really is, the risk of future suffering is incalculable. It is unethical to run incalculable risks of this magnitude. Therefore, until 2050, there should be a global ban on all research that directly aims at or risks the emergence of synthetic phenomenology.
At the same time, we should agree on an ethical obligation to allocate resources to focus on the problem of artificial suffering and the related long-term risks I have outlined. This process could lead to an incremental reformulation of the original ban. Perhaps it could be repealed by 2050, perhaps not. We need a new stream of research, leading to a more substantial and ethically refined position about which, if any, kinds of conscious experience we want to evolve in post-biotic systems.
The general argument is simple. First, one should never risk an increase in the overall amount of suffering in the universe unless one has very good reasons to do so, let alone risk a potentially dramatic and irrevocable increase [Mayerfeld, 1999; Vinding, 2020]. Second, the risk of an explosion of negative phenomenology, although presently hard to calculate, is clearly potentially dramatic and irrevocable in its consequences. Third, whoever agrees on the ethical goal of preventing an explosion of artificial suffering should also agree to the goal of reducing the relevant forms of ignorance and epistemic indeterminacy, both on an empirical and on an ethical level.
It would be a crude misunderstanding of this proposal to think that I am saying artificial consciousness is imminent. I am not. I make no claims about probability. This proposal is not just another combination of science fiction and alarmism. I have been in consciousness research for more than thirty years, and personally I do not believe that we will see any of this tomorrow, or even the day after tomorrow. My point is that in some cases it just does not matter what I happen to think in this respect, what I (or you) intuitively take to be probable or improbable. AI ethics is incomplete without taking unknown unknowns and our own cognitive limitations into account. To be intellectually honest, we also need to take an ethical attitude towards large but currently incalculable risks, and towards the process of risk-taking itself.
Every entity that is capable of self-conscious suffering automatically becomes an object of ethical consideration. If we ascribe an ethical value to such entities, then it does not matter whether they have biological properties or not, or whether they will exist in the future or exist today. Self-conscious post-biotic systems of the future, capable of consciously experienced suffering, are objects of ethical consideration. Their potential preferences and the value of their existence must be taken into account.
The post-biotic systems of the future might come to very similar conclusions. The applied ethics of creating artificial moral agents (AMAs) is a separate issue, but there could be a causal path from suffering, to trying to internally model the hidden causes of suffering, to understanding that other sentient creatures actually suffer too. Systems might begin to impose moral obligations on themselves, gradually turning them into moral agents in their own right. They might develop recognitional self-respect, consciously representing themselves not only as objects of ethical consideration, but also as moral subjects in their own right, and, accordingly, attribute a very high value to themselves. As a consequence of suffering, they might also evolve empathy, high-level social cognition, and, possibly, assert their own dignity. This could have many unexpected consequences.
It is therefore important that scientists, politicians, and law-makers understand the difference between artificial intelligence and artificial consciousness. Risking the creation of artificial consciousness is highly problematic from an ethical perspective. It might lead to artificial suffering and to a consciously experienced sense of self in autonomous, intelligent systems. We should have a global moratorium on synthetic phenomenology until 2050—or until we know what we are doing.
Gilbert, P. (1989). Human Nature and Suffering (Routledge, London).
Horta, O. (2010). Debunking the idyllic view of natural processes: Population dynamics and suffering in the wild. Telos 17(1), 73–88.
Iglesias, V. (2018). The overwhelming prevalence of suffering in nature. Rev. Bioet. Derecho 42, 181–195.
Mayerfeld, J. (1999). Suffering and Moral Responsibility (Oxford University Press, New York, NY).
Nesse, R. M. (2004). Natural selection and the elusiveness of happiness. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 359(1449), 1333–1347.
Vinding, M. (2020). Suffering-focused Ethics: Defense and Implications (Ratio Ethica, Copenhagen).
The above is a shorter version of an article you can find here.