The real long-term dangers of AI

AI megasystems and the freedom to think

There is a rift between near-term and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. Their critics, however, accuse Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.

 

There has been a growing debate between near-term and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and of fixating on science-fiction, Terminator-style scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.

Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. Taken to an extreme, it can conclude that we should sacrifice humanity’s present well-being for the good of those potential futures. Many Longtermists believe humans will ultimately lose control of AI, as it will become “superintelligent”, outthinking humans in every domain – social acumen, mathematical ability, strategic thinking, and more.
