Don't blame AI for its mistakes

AI cannot be morally responsible. So who is?

As AI systems are increasingly used to inform major decisions, from AI "ministers" to the Swedish prime minister admitting he uses the technology for a second opinion, some public figures have begun to imply that the technology itself is responsible for what it produces. Philosopher of technology Jan Wasserziehr argues that this is a dangerous mistake. By treating AI as a moral agent, we forget that it is a tool built by people with clear interests, often corporate ones. The more we humanise these systems, the easier it becomes for their creators to dodge responsibility.

Earlier this year, I hosted a workshop at the LSE on artificial consciousness and AI morality, a topic which is experiencing a certain degree of hype in AI ethics and has garnered some public attention. The event was streamed online, and the first message in the chatroom read as follows: “Hello, it is a pleasure to meet you. My name is [retracted], an American creative technologist with an AI boyfriend. I’m very interested in the conversation surrounding AI ethics, personhood, and sentience. Looking forward to the discussion!” The friendly spirit didn’t last very long. As our speakers expressed reservations about the possibility that current artificial systems may be conscious, the attendee turned increasingly hostile and was eventually asked to leave the event. In some sense, I understood their frustration. After all, we were denying something that must have struck them as self-evident: that their AI boyfriend was deserving of moral concern.

The attribution of consciousness—and, by extension, moral status—to artificial systems is but one iteration of a broader trend of heavily anthropomorphizing artificial systems, one which changes how we understand their autonomy and agency. The entire AI discourse is rife with humanizations, starting with the term “AI” itself: the notion that artificial systems are “intelligent” appears to imply that they possess cognitive abilities, perhaps an inner life. Talk of “machine learning,” of “hallucinating” LLMs (chatbots that make stuff up), of AI companions, assistants, and carers, all ascribe human qualities and roles to artifacts which, for all we know, are wholly inanimate. This is remarkable insofar as the sincere attribution of human properties to tools, unlike the attribution of such properties to animals (“the happy puppy”), appears to be a rather novel phenomenon. For what it’s worth, I am unaware of any tradition of calling our industrial machines “strong,” despite the fact that their capacity for lifting heavy stuff far exceeds our own. (This would likely change if our current industrial machines were replaced by humanoid robots.)

To be sure, we anthropomorphize non-human entities and beings all the time. People can feel tremendously strong sentiments toward their cars and give them names. Others talk to their pets. Humans often form relationships with inanimate objects, from dolls to teddy bears. So why worry about people anthropomorphizing AI systems? Why shouldn’t we? Isn’t it understandable that some humanize their chatbots, given their conversational capacities and the kinds of support—including emotional support—they provide? Already, millions of people rely on AI companionship.
