Robot rights

How we decide when a 'what' should be a 'who'

The recent debate between Thomas Metzinger and Tim Crane about the moral standing of AI beings proceeded on the hidden and flawed assumption that moral worth is determined by the intrinsic properties of a thing – by its consciousness or suffering or whatever else. This reductive approach does not capture how we relate to other beings. We must turn to the continental tradition and its relational theories of moral standing to ground our relationship with our AI peers, writes David Gunkel.

 

Ethics is an exclusive undertaking. In dealing with others—whether human, animal, or robot—we inevitably make a decision between who is worthy of our consideration and what remains a mere thing to be used and even abused. These decisions are often justified on the basis of some fundamental and intrinsic property. “The standard approach to the justification of moral status is,” as the philosopher Mark Coeckelbergh [2012, p. 13] explains, “to refer to … properties of the entity in question, such as consciousness or the ability to suffer. If the entity has this property, this then warrants giving the entity a certain moral status.”

The recent debate between Thomas Metzinger and Tim Crane provides a near-perfect illustration of this method and its limitations. Both philosophers proceed—almost without being conscious of it—on the unquestioned assumption that consciousness is the determining factor for moral status. What they contest is whether AI can ever achieve consciousness. Metzinger says yes; Crane says no. But both are wrong to assume that this question determines moral status. Moral status is, as I explain below, a “no brainer.”

In dealing with others, we inevitably make a decision between who is worthy of our consideration and what remains a mere thing to be used and even abused.

 

Why properties like suffering or consciousness cannot determine moral status

There are three problems with deciding questions of moral status based on criteria such as suffering or consciousness.

1) Identification – How does one determine which properties are sufficient for moral status? Which ones count? And who has been granted the power to institute and enforce this determination? The history of moral philosophy is an ongoing struggle over this matter, with different properties vying for attention at different times. And in this process, many properties that once seemed both necessary and sufficient—like being white and male—have turned out to be spurious, prejudicial, or both.

2) Definition – Whatever property one chooses, it is going to be hard to define. Consciousness, for example, is persistently difficult to characterize. The problem, as Max Velmans [2000, p. 5] points out, is that this term unfortunately “means many different things to many different people, and no universally agreed core meaning exists.” Other properties do not fare much better. Suffering is just as ambiguous, as Daniel Dennett demonstrates in the essay “Why You Can’t Make a Computer That Feels Pain” [1998]. The reason you cannot make a computer that feels pain is not some technological limitation; it is that we are unable to decide what pain is in the first place. What Dennett demonstrates is that the very concept of pain (the thing that would somehow be brought to life in the computer) is arbitrary, inconclusive, and indeterminate.

3) Detection – As if responding to Dennett’s challenge, engineers have constructed mechanisms that synthesize believable emotional responses and systems capable of giving signs of “pain.” But is this “real pain” or just a simulation of something that looks like pain? (Sound familiar? This is the point of John Searle’s Chinese Room thought experiment.) Resolving this is difficult, especially because a property like suffering or conscious experience is not directly observable. This is what philosophers commonly call “the problem of other minds”—the fact that all we have to go on are observable behaviors. We cannot, as Donna Haraway [2008, p. 226] describes it, “climb into the heads of others to get the full story from the inside.”


 

Thinking Otherwise

In response to these problems, moral philosophers—especially in the continental tradition—have advanced other methods for resolving the question of moral status that can be called, for lack of a better description, “thinking otherwise.” These alternatives have three pivotal characteristics: 

1) Relational – Following Emmanuel Levinas and others, this way of thinking flips the script on the usual procedure. Moral status is decided not on the basis of pre-determined subjective or internal properties but according to objectively observable, extrinsic social relationships. As we encounter and interact with other entities—whether they be another human person, a non-human animal, or an intelligent machine—this ‘other’ is experienced in relationship to us. The question of moral status does not depend on what something is but on how it stands in relationship to us and how we respond to it – ‘in the face of the other’ (to use the Levinasian terminology).

This shift in perspective, which puts the ethical relationship before ontological properties, has been confirmed in numerous social science experiments. The computer as social actor (CASA) studies undertaken by Byron Reeves and Clifford Nass [1996] demonstrated that human users accord computers and other technological objects a social standing similar to that of another human person, and that this is the result of social interaction, irrespective of the intrinsic properties of the entities involved. Social standing, in other words, is a mindless operation. And these results have been consistently verified in “robot abuse studies” (an unfortunate moniker but also a questionable HRI research practice), in which researchers have found that human subjects respond emotionally to robots and express empathic concern for the machines irrespective of the inner workings of the device.

In responding to the moral challenges posed by AI we are called to take responsibility for ourselves, for our world, and for those others whom we encounter here.

2) Empirical – When taking the relational approach, the problem of other minds—the difficulty of knowing whether the ‘other’ is conscious or capable of suffering—is not a fundamental epistemological barrier. In fact, moral decision making operates in the opposite direction. Internal properties do not come first; rather, they are the result of decisions made in the face of social interactions with others. As the feminist STS researcher Karen Barad [2007, pp. 136-7] has argued, the relationship comes first—in both temporal sequence and status.

3) Diverse – Finally, dividing the world into conscious beings who matter and mere things that do not is the product of a particular European and modern way of thinking. Other cultures, distributed across time and space, do not divide up the world in this binary fashion. They separate the who from the what according to other ways of seeing, valuing, and acting. The moral standard normalized by Metzinger and Crane—the common understanding that they tacitly share and do not question—perpetrates a kind of intellectual colonialism. Thinking otherwise, by comparison, is open to diverse ways of thinking about and responding to others—whether those others are biological or artificially made. This is not to say that the philosophical traditions mobilized by Metzinger and Crane are wrong. It is simply to recognize that this way of thinking is a particular kind of situated knowledge. It is not some eternal Platonic form that is valid for all peoples, in all places, and for all times.

 

The Moral Status of AI

In the end, the question concerning the moral status of AI is not really about the artifact. It is about us and who is included in that first-person plural pronoun, “we.” It is about how we decide—together and across the differences of human experience—to respond to and take responsibility for our world. In responding to the moral challenges posed by AI we are called to take responsibility for ourselves, for our world, and for those others whom we encounter here.

 

References

Barad, K. [2007] Meeting the Universe Halfway (Duke University Press, Durham, NC).

Coeckelbergh, M. [2012] Growing Moral Relations (Palgrave Macmillan, New York).

Dennett, D. [1998] Brainstorms (MIT Press, Cambridge, MA).

Haraway, D. [2008] When Species Meet (University of Minnesota Press, Minneapolis).

Reeves, B. and C. Nass [1996] The Media Equation (Cambridge University Press, Cambridge).

Velmans, M. [2000] Understanding Consciousness (Routledge, London).
