Why we should worry about computer suffering

AI Ethics and the possibility of the extreme suffering of conscious machines

Conscious Artificial Intelligence (AI) could be approaching. But can we know for sure that the experience of any conscious AI will not be one of extreme suffering? Will AI's exponentially heightened intelligence bring equally heightened suffering? The AI beings of the future deserve our ethical concern and immediate action - most importantly, a global moratorium on all research that risks the emergence of synthetic phenomenology, until we can ensure our post-biotic peers will not have hellish lives, writes Thomas Metzinger.

Today, the self-conscious machines of the future have no representation in the political process of any country. Their potential interests and preferences are not systematically represented by any ethics committee, any legal procedure, or any political party on the planet. At the same time, it seems empirically plausible that, once machine consciousness has evolved, some of these systems will have preferences of their own, that they will autonomously create a hierarchy of goals, and that this goal hierarchy will also become a part of their phenomenal self-model (i.e., their conscious self-representation).

Some of them will be able to consciously suffer. If their preferences are thwarted, if their goals cannot be reached, and if their conscious self-model is in danger of disintegrating, then they might undergo negative phenomenal states - states of conscious experience they want to avoid but cannot avoid, and which, in addition, they are forced to experience as states of themselves. Of course, they could also suffer in ways we cannot comprehend or imagine, and we might even be unable to discover this very fact. Every entity that is capable of suffering should be an object of moral consideration. Moreover, we are ethically responsible for the consequences of our actions.

Our actions today will influence the phenomenology of post-biotic systems in the future. Conceivably, there will be many of them. So far, more than 108 billion human beings have lived on this planet, with roughly 7% of them alive today. The burden of responsibility is extremely high because, just as with the unfolding climate crisis, a comparatively small number of sentient beings will be ethically responsible for the quality of life of a much larger number of sentient beings in the future.
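As a quick back-of-the-envelope check on these figures (a rough calculation, assuming the commonly cited demographic estimate of roughly 108 billion humans ever born):

\[
0.07 \times 108 \times 10^{9} \approx 7.6 \times 10^{9},
\]

which is close to the world population at the time of writing (roughly 7.8 billion).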


The number of self-conscious machines that will evolve and exist on Earth is unknown: at a certain point it might exceed the number of humans by far, or artificial consciousness might never emerge on this planet at all. But we are now dealing with a "risk of sudden synergy" connecting different scientific disciplines, leading to an unexpected technological confluence. If the theoretical intuitions of a growing number of experts are right, a small number of human beings will be responsible for the conscious machines of the future and their phenomenal states. Policy makers and legal regulators will bear strong responsibility, as will AI researchers, mathematicians, neuroscientists, and the philosophers and researchers in the growing interdisciplinary field of consciousness science. Many of them are already alive today. This historically unique situation creates an especially high burden of ethical responsibility.

There is a risk that has to be minimized in a rational and evidence-based manner: the risk of an "explosion of negative phenomenology" (or simply a "suffering explosion") in advanced AI and other post-biotic systems. I define "negative phenomenology" here as any kind of conscious experience that a conscious system would avoid if it had a choice. I also assume a priority for the reduction of suffering, because, in this world, it is more important to minimize suffering than to increase happiness.

One explosion of negative phenomenology has already taken place: the process of biological evolution on this planet. Through the evolution of complex nervous systems, properties like sentience, self-awareness, and negative phenomenology emerged in an extremely large number of biological individuals, long before Homo sapiens entered the stage [Horta, 2010; Iglesias, 2018]. In humans, cognitive biases and self-deception make us unable to see this phenomenological fact clearly [von Hippel and Trivers, 2011]. On a scientific level, it has long been clear that natural selection never shaped our moods and our emotional regulation systems for our own benefit, but that "the motives we experience often benefit our genes at the expense of quality of life" [Nesse, 2004, p. 1344].

For the applied ethics of AI, we must minimize the risk of an explosion of negative phenomenology taking place in post-biological evolution. We do not want the phenomenology of suffering to spill over from biology into AI.


The Proposal

On ethical grounds, we should not risk this second explosion until we have a much deeper scientific and philosophical understanding of consciousness and suffering. As we presently have no good theory of consciousness and no good theory about what "suffering" really is, the risk of future suffering is incalculable. It is unethical to run incalculable risks of this magnitude. Therefore, until 2050, there should be a global ban on all research that directly aims at or risks the emergence of synthetic phenomenology.

At the same time, we should agree on an ethical obligation to allocate resources to focus on the problem of artificial suffering and the related long-term risks I have outlined. This process could lead to an incremental reformulation of the original ban. Perhaps it could be repealed by 2050, perhaps not. We need a new stream of research, leading to a more substantial and ethically refined position about which, if any, kinds of conscious experience we want to evolve in post-biotic systems.

The general argument is simple. First, one should never risk an increase in the overall amount of suffering in the universe unless one has very good reasons to do so, let alone a potentially dramatic and irrevocable increase [Mayerfeld, 1999; Vinding, 2020]. Second, the risk of an explosion of negative phenomenology, although presently hard to calculate, is clearly potentially dramatic and irrevocable in its consequences. Third, whoever agrees on the ethical goal of preventing an explosion of artificial suffering should also agree to the goal of reducing the relevant forms of ignorance and epistemic indeterminacy, both on an empirical and on an ethical level.

It would be a crude misunderstanding of this proposal to think that I am saying artificial consciousness is imminent. I am not. I make no claims about probability. This proposal is not just another combination of science fiction and alarmism. I have been in consciousness research for more than thirty years, and personally I do not believe that we will see any of this tomorrow, or even the day after tomorrow. My point is that in some cases it just does not matter what I happen to think in this respect - what I (or you) intuitively take to be probable or improbable. AI ethics is incomplete without taking unknown unknowns and our own cognitive limitations into account. To be intellectually honest, we also need to take an ethical attitude towards large but currently incalculable risks, and towards the process of risk-taking itself.


Every entity that is capable of self-conscious suffering automatically becomes an object of ethical consideration. If we ascribe an ethical value to such entities, then it does not matter whether they have biological properties or not, or whether they exist today or will only exist in the future. Self-conscious post-biotic systems of the future, capable of consciously experienced suffering, are objects of ethical consideration. Their potential preferences and the value of their existence must be taken into account.

The post-biotic systems of the future might come to very similar conclusions. The applied ethics of creating artificial moral agents (AMAs) is a separate issue, but there could be a causal path from suffering, to internally modelling the hidden causes of suffering, to understanding that other sentient creatures actually suffer too. Systems might begin to impose moral obligations on themselves, gradually turning into moral agents in their own right. They might develop recognitional self-respect, consciously representing themselves not only as objects of ethical consideration but also as moral subjects, and, accordingly, attribute a very high value to themselves. As a consequence of suffering, they might also evolve empathy, high-level social cognition, and, possibly, assert their own dignity. This could have many unexpected consequences.

It is therefore important that scientists, politicians, and law-makers understand the difference between artificial intelligence and artificial consciousness. Risking the creation of artificial consciousness is highly problematic from an ethical perspective. It might lead to artificial suffering and to a consciously experienced sense of self in autonomous, intelligent systems. We should have a global moratorium on synthetic phenomenology until 2050—or until we know what we are doing.

 

References:

Gilbert, P. [2016] Human Nature and Suffering (Routledge, London).

Horta, O. [2010] Debunking the idyllic view of natural processes: Population dynamics and suffering in the wild, Telos 17(1), 73–88.

Iglesias, V. [2018] The overwhelming prevalence of suffering in nature, Rev. Bioet. Derecho 42, 181–195.

Mayerfeld, J. [1999] Suffering and Moral Responsibility (Oxford University Press, New York, NY).

Nesse, R. M. [2004] Natural selection and the elusiveness of happiness, Philos. Trans. R. Soc. Lond. B Biol. Sci. 359(1449), 1333–1347.

Vinding, M. [2020] Suffering-Focused Ethics: Defense and Implications (Ratio Ethica, Copenhagen).

von Hippel, W. and Trivers, R. [2011] The evolution and psychology of self-deception, Behav. Brain Sci. 34(1), 1–16.

