Conscious Artificial Intelligence (AI) may be approaching. But can we know for sure that the experience of a conscious AI will not be one of extreme suffering? Will AI's exponentially heightened intelligence bring equally exponentially heightened suffering? The AI beings of the future deserve our ethical concern and immediate action - most importantly, a ban on all AI research until we can ensure our post-biotic peers will not have hellish lives, writes Thomas Metzinger. Read Tim Crane's reply here.
Today, the self-conscious machines of the future have no representation in the political process of any country. Their potential interests and preferences are not systematically represented by any ethics committee, any legal procedure, or any political party on the planet. At the same time, it seems empirically plausible that, once machine consciousness has evolved, some of these systems will have preferences of their own, that they will autonomously create a hierarchy of goals, and that this goal hierarchy will also become part of their phenomenal self-model (i.e. their conscious self-representation).
Some of them will be able to consciously suffer. If their preferences are thwarted, if their goals cannot be reached, and if their conscious self-model is in danger of disintegrating, then they might undergo negative phenomenal states, states of conscious experience they want to avoid but cannot avoid and which, in addition, they are forced to experience as states of themselves. Of course, they could also suffer in ways we cannot comprehend or imagine, and we might even be unable to discover this very fact. Every entity that is capable of suffering should be an object of moral consideration. Moreover, we are ethically responsible for the consequences of our actions.
Our actions today will influence the phenomenology of post-biotic systems in the future. Conceivably, there will be many of them. So far, more than 108 billion human beings have lived on this planet, with roughly 7% of them alive today. The burden of responsibility is extremely high, because, just as with the rolling climate crisis, a comparably small number of sentient beings will be ethically responsible for the quality of life of a much larger number of sentient beings in the future.
The number of self-conscious machines that will evolve and exist on Earth is unknown: at some point it might far exceed the number of humans, or artificial consciousness might never emerge on this planet at all. But we are now dealing with a "risk of sudden synergy" connecting different scientific disciplines, leading to an unexpected technological confluence. If the theoretical intuitions of a growing number of experts are right, a small number of human beings will be responsible for the conscious machines of the future and their phenomenal states. Policy makers and legal regulators will bear a strong responsibility, as will AI researchers, mathematicians, neuroscientists, and the philosophers and researchers in the growing interdisciplinary field of consciousness science. Many of them are already alive today. This historically unique situation creates an especially high burden of ethical responsibility.