Are we asking the wrong question about loving chatbots? Nolen Gertz, author of Nihilism and Technology, argues that fixating on whether AI romance is “real” hides the costs that make it possible, from mass data extraction and invisible moderation work to water- and energy-hungry data centers. Framed as a personal choice, AI intimacy offloads the costs of private pleasure onto the public, so criticising it is not prudishness but a defence of everyone else.
The question “Can a human really fall in love with an AI chatbot?” is being raised more and more of late. But I don’t believe this is the question we ought to be asking. After all, humans can fall in love with all sorts of non-human things: animals, cars, pet rocks, body pillows (or “dakimakura,” as they’re known in Japan). I’ve heard some people even love the New York Yankees. So it shouldn’t be too surprising that people have found AI chatbots to be something to fall in love with as well, especially since, unlike a cat, a car, a rock, a pillow, or a baseball team, an AI chatbot can actually talk back and carry on conversations with people.
Yet it is precisely the intimate and engaging nature of these conversations that seems to be what is most alarming to people. Consequently, articles about these relationships tend to raise subsequent questions like:

“Is a relationship with an AI chatbot real or meaningful?”

“Will falling in love with an AI chatbot cut people off from others and leave them lonelier?”

“Are these relationships grounded in reality, or in fantasy?”
Again, I don’t think these are the right questions for us to be asking about AI chatbots. With regard to the first question, it can of course be pointed out that plenty of conversations we have with real people are completely fake. Just think of how often a cafeteria cashier or restaurant server says, “Have a nice meal!” and, without thinking, you blurt out, “Thanks, you too!” So clearly talking to a person need not be any more real or meaningful than talking to an AI chatbot. With regard to the second question, being in a relationship with a human being can also lead to being cut off from friends and family and becoming lonelier, especially if you fall in love with someone who is abusive, or even just a jerk no one but you wants to spend any time with. With regard to the third question, it could again be pointed out that plenty of human-human relationships also tend towards fantasy more than reality, not only because of the “honeymoon phase,” but also because we have a hard enough time trying to figure out our own thoughts, feelings, and desires, let alone those of another person.
So clearly we don’t need AI chatbots in order to have concerns about whether people are in meaningful, isolating, or healthy relationships. What we ought to be concerned about instead is how these questions all focus only on the individual human in the human-AI relationship. For the easiest response to any such individualistic question about the potential harms of human-AI relationships is to turn to libertarianism and simply say: “If people are enjoying themselves, what business is it of yours to interfere?” The more we read clickbait articles interviewing the individuals in these human-AI relationships, the more likely we are to take up an individualistic perspective on these relationships, and the easier it becomes to fall into the trap of this libertarian response: that we ought not to care, since we should just let people live their own lives and love whoever or whatever they want.