AI and the end of reason

Seeing through black boxes

Life-changing decisions are increasingly being outsourced to Artificial Intelligence. The problem is that AI systems are often black boxes, unable to offer explanations for their decisions. Unless regulators insist that AI be explainable and interpretable, we are about to enter an era of the absurd, writes Alexis Papazoglou.


One of the greatest risks that AI poses to humanity is already here. It's an aspect of the technology that is affecting human lives today and, if left unchecked, will forever change the shape of our social reality. This is not about AI triggering an event that might end human history, or about the next version of ChatGPT putting millions of people out of a job, or about the welter of deepfakes and misinformation that's coming. It's about a feature of current AI that its own designers openly admit to, yet which remains astonishing when put into words: no one really understands it.


Of course, AI designers understand at an abstract level what products like ChatGPT do: they are pattern recognizers; they predict the next word, image, or sound in a series; they are trained on large datasets; they adjust their own internal parameters as they go along. But take any one result, any output, and even the very people who designed the system are unable to explain why it produced the result it did. This is why many advanced AI models, particularly deep learning models, are often described as "black boxes": we know what goes in (the training data and the prompts), and we know what comes out, but we have no idea what really goes on inside.
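To make the contrast concrete, here is a deliberately simplified toy (not any real system): a "next word" predictor built from nothing but pattern statistics over a tiny corpus. At this scale the model is perfectly explainable: you can point to the exact counts behind any prediction. Real models work in the same predictive spirit but with billions of learned parameters, which is precisely what makes the same kind of tracing impossible.

```python
from collections import Counter, defaultdict

# A tiny training corpus; real systems are trained on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the "model" is just pattern statistics.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

Here the explanation for the output is fully recoverable from the counts. Scale that up to a deep network and the prediction still comes from learned patterns, but no human can point to the reason behind any particular answer.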

Often referred to as the problem of interpretability, this is a significant challenge in the field of AI: the system cannot explain or make clear the reasons behind its decisions, predictions, or actions. This may seem an innocuous detail when asking ChatGPT to write a love letter in the style of your favorite poet, but less so when AI systems are used to make decisions with real-world impact: whether you get shortlisted for a job, whether you get a bank loan, whether you go to prison rather than being granted bail. All of these are decisions already being outsourced to AI. When no explanation is possible for life-changing decisions, when the reasoning of the machines (if any) is as opaque to the humans who build them as to the humans whose lives they alter, we are left with a bare "Computer says no." And when we don't know the reason something has happened, we can't argue back, we can't challenge it, and so we enter the Kafkaesque realm of the absurd, in which reason and rationality are entirely absent. This is the world we are sleepwalking towards.

___

It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case.

___

