AI and the end of reason

Seeing through black boxes

Life-changing decisions are increasingly being outsourced to Artificial Intelligence. The problem is that AI systems are often black boxes, unable to offer explanations for those decisions. Unless regulators insist that AI be explainable and interpretable, we are about to enter an era of the absurd, writes Alexis Papazoglou.


One of the greatest risks that AI poses to humanity is already here. It is an aspect of the technology that is affecting human lives today and, if left unchecked, will forever change the shape of our social reality. This is not about AI triggering an event that might end human history, or about the next version of ChatGPT putting millions of people out of a job, or the welter of deepfakes and misinformation warfare that is coming. It is about a feature of current AI that its own designers openly admit to, yet one that remains astonishing when put into words: no one really understands it.

SUGGESTED READING: The culture of bees and the AI apocalypse, by Grant Ramsey

Of course, AI designers understand at an abstract level what products like ChatGPT do: they are pattern recognizers; they predict the next word, image, or sound in a series; they are trained on large data sets; they adjust their own parameters as they learn, and so on. But take any one result, any output, and even the very people who designed the system are unable to explain why it produced the result it did. This is why many advanced AI models, particularly deep learning models, are often described as “black boxes”: we know what goes in (the data they are trained on and the prompts) and we know what comes out, but we have no real idea of what goes on inside them.
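To make the idea of next-word prediction concrete, here is a deliberately toy sketch in Python. It is not the code of any real product: the tiny training text, the function name predict_next, and the word-pair counting are all illustrative assumptions, standing in for the billions of learned weights in a real model. It shows the sense in which such a system "predicts the next word" from patterns in its training data, and why its output comes with probabilities rather than reasons.

```python
from collections import Counter, defaultdict

# Toy illustration only: a "language model" reduced to its bare idea,
# predicting the next word from patterns seen in training text.
training_text = (
    "the judge denied the appeal the judge granted the motion "
    "the bank denied the loan the bank approved the loan"
)

# "Training": count which word tends to follow which.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(prompt_word):
    """Return a probability distribution over possible next words."""
    counts = follower_counts[prompt_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# We can see exactly what goes in and what comes out...
print(predict_next("denied"))  # {'the': 1.0}
print(predict_next("the"))     # {'judge': 0.25, 'bank': 0.25, 'loan': 0.25, ...}

# ...but the "reason" for any one prediction is nothing more than these
# learned statistics. In a real system the count table is replaced by
# billions of numerical weights, which is why no one can point to a
# human-readable explanation for a given output.
```

In this toy version the learned statistics are still small enough to inspect by hand; the philosophical problem begins when that table is replaced by parameters no human can read off as reasons.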
