The recent debate between Thomas Metzinger and Tim Crane about the moral standing of AI beings proceeded on the hidden and flawed assumption that moral worth is determined by the intrinsic properties of a thing – by its consciousness, its capacity to suffer, or whatever else. This reductive approach does not capture how we actually relate to other beings. We must turn to the continental tradition and its relational theories of moral standing to ground our relationship with our AI peers, writes David Gunkel.
Ethics is an exclusive undertaking. In dealing with others—whether human, animal, or robot—we inevitably make a decision between who is worthy of our consideration and what remains a mere thing to be used and even abused. These decisions are often justified on the basis of some fundamental and intrinsic property. “The standard approach to the justification of moral status is,” as the philosopher Mark Coeckelbergh [2012, p. 13] explains, “to refer to … properties of the entity in question, such as consciousness or the ability to suffer. If the entity has this property, this then warrants giving the entity a certain moral status.”
The recent debate between Thomas Metzinger and Tim Crane provides a near-perfect illustration of this method and its limitations. Both philosophers proceed—almost without being conscious of it—on the unquestioned assumption that consciousness is the determining factor for moral status. What they contest is whether AI can ever achieve consciousness: Metzinger says yes; Crane says no. But both are wrong to assume that this question settles the matter of moral status. Moral status is, as I explain below, a “no brainer.”