There is a rift between near and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics accuse the Longtermists of obsessing over Terminator-style scenarios, in concert with Big Tech, to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.
There has been a growing debate between near- and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and of fixating on science-fiction, Terminator-style scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.
Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. It can lead to extremes, since it can conclude that one should sacrifice the present well-being of humanity for the good of humanity’s potential futures. Many Longtermists believe humans will ultimately lose control of AI, as it becomes “superintelligent”, outthinking humans in every domain – social acumen, mathematical ability, strategic thinking, and more.
So is Longtermism just futuristic fearmongering, or does it point to genuine risks? The long-term risks are not entirely imaginary, we believe, but one must look beyond the sensationalized existential risks that receive so much airtime and reconceptualize them in light of the realities of today’s AI ecosystem. In what follows, we propose a view we call Moderate Longtermism.
1. Reconceptualizing long-term risks
First, notice that AI systems do not need to be superintelligent to outthink humans in significant ways and pose a serious risk to human survival. For example, an emotionally deficient system – one lacking empathy and moral programming, yet able to set and achieve goals, self-improve, and plan and execute with superior ability – would seem to be the most dangerous system of all, especially when given knowledge of (and access to) critical infrastructure.
Because this system wouldn’t outthink us in every domain, it would not count as superintelligent. Indeed, this kind of system is already on the near-term technological horizon. And it is precisely the combination of superhuman abilities in certain domains with massive gaps in others that makes it dangerous.
Second, a Terminator-style scenario does not seem to be the real threat from superintelligent AI systems, at least for now. For one thing, robotics development lags far behind development in generative AI. For another, the danger comes not from a single “killer robot” system, but from the unforeseen impacts of many such systems interacting in the larger AI ecosystem. There are legitimate cybersecurity worries about interacting AIs on the internet that could exhibit emergent features, including greater-than-human intelligence. These are not embodied, humanoid intelligences, yet they are serious candidates for the emotionally deficient systems of concern described above.