From ChatGPT to Replika, AI is being touted as a tool to solve the loneliness epidemic. But is this just a false marketing promise, or perhaps something more sinister? In these habitually lonely times, wanting someone to talk to, even if it's just an AI chatbot, is understandable. But beyond the emptiness of a conversation with an AI, Nolen Gertz, author of Nihilism and Technology, argues that even the most optimistic story of connection could lead to danger.
Nolen Gertz will be speaking at the IAI's upcoming music and ideas festival HowTheLightGetsIn alongside Brian Greene, Roger Penrose, Fiona Hill, Slavoj Žižek and many, many more. Explore the full programme and book your tickets here.
There is something disturbing and yet understandable about wanting to talk to a machine.
On the one hand, it could seem that wanting to talk to a chatbot demonstrates a preference for programs over people. People can talk back, turning a monologue into a dialogue. Presumably, then, the desire to talk to a program rather than a person stems precisely from the desire to have a monologue rather than a dialogue, to have someone to talk at rather than talk with. A conversation with a chatbot is then not a conversation but a performance of something that looks like a conversation. Two players are seemingly involved in the conversation game, but really there's only one player, like playing a game of catch with a brick wall. The ball bounces back to you, but the wall did not throw the ball; it merely reflects back what is put into it, a reflection that can give the appearance of a game of catch, especially if you do not understand geometry and so think the ball comes back to you more unpredictably than it really does.
On the other hand, it could seem that wanting to talk to a chatbot need not be nefarious or misanthropic, but simply a new way to find out about the world and about ourselves. If conversations are a way to get outside of one's own experience by engaging with the experience of others, then the experience provided by chatting with a chatbot could be just as self-transcendent as the experience provided by chatting with a person. In fact, it could be even more so, as human conversation partners are typically inclined to be polite and so restrain themselves in order to avoid offense or confusion. Conversation with people could thus be seen as more conventional and rote than conversation with something that does not experience anxiety or shame. Likewise, we might feel less anxiety and shame when talking to a chatbot, and so, as when talking to a baby or a cat, we might feel more able to open up to a machine. Consequently, we might engage in acts of confession with a chatbot that we might not otherwise be able to experience with someone who, we fear, might not like what we are confessing. Cats, babies, and chatbots do not judge us for our sins, and so it makes sense to seek out non-judgmental conversation partners, even if the conversations will of course be rather one-sided.
So on the one hand, we seek out chatbots because we are in a technological world that alienates us from humanity, leading us to seek out inhuman conversation “partners,” and so chatbots help us to become more alienated and more isolated from other human beings. On the other hand, we seek out chatbots because we are in a judgmental world that alienates us from ourselves, leading us to often pretend to be something we are not when talking to other people in order to avoid discomfort or censure, and so chatbots help us to become more comfortable and more confessional.
From these two perspectives, it might seem like there is danger on the one side, and opportunity on the other. But in reality I think there is danger on the one side, and even more danger on the other. The danger of our becoming more and more individualistic and disconnected, needing to talk to machines because we can no longer talk to other people, would seem to be bad enough. But perhaps the greater danger is actually not machine-induced misanthropy, but rather machine-induced forgetfulness, forgetfulness of the fact that chatbots are not mere programs, but are programs that are owned and operated by the tech companies that create them.
In the more optimistic scenario I provided above, where we become more comfortable and more confessional thanks to chatbots, I suggested this was because the chatbot provided us with a non-judgmental way to unburden ourselves to an other, just as we might do with a cat or with a baby. But unlike a cat or a baby, a chatbot is just the forward-facing part of a corporation that does not provide chatbots as a public service. Rather, the chatbot is a way to collect data from users, whether that data is for marketing or for training or for both. And the more comfortable we become unburdening ourselves to a chatbot, the more data the chatbot collects. So it is in the interest of the tech company behind the chatbot to remain behind the chatbot, which means it will be in the interest of tech companies if, as people have already been doing, we come to think of chatbots as alive, as sentient, as conscious, as beings, in other words, who are capable of acting alone rather than as part of corporate data-gathering schemes.
The more we come to use chatbots—whether because we don't want to talk to people, don't feel comfortable talking to people, or are just bored and chatbots seem like a way to be temporarily less bored—the more chatbots will seem like just a tool that is only as good or bad as the people who use it. In other words, we will focus on the chatbot and on the intentions of users, and forget about the people behind the chatbot and the need to pay attention to their intentions. If chatbots are seen by society as, more than anything else, just another distraction, then at the very least we need to ask what we are being distracted from (e.g., the role of tech companies in everyday life), and why so many people seem to want to be distracted (e.g., the success of tech companies in shaping everyday life).