AI and the end of reason

Seeing through black boxes

Life-changing decisions are increasingly being outsourced to Artificial Intelligence. The problem is, AI systems are often black boxes, unable to offer explanations for these decisions. Unless regulators insist that AI be explainable and interpretable, we are about to enter an era of the absurd, writes Alexis Papazoglou.


One of the greatest risks that AI poses to humanity is already here. It’s an aspect of the technology that is affecting human lives today and, if left unchecked, will forever change the shape of our social reality. This is not about AI triggering an event that might end human history, or about the next version of ChatGPT putting millions of people out of a job, or the coming welter of deepfakes and misinformation. It’s about a feature of current AI that its own designers openly admit to, yet which remains astonishing when put into words: no one really understands it.


Of course, AI designers understand at an abstract level what products like ChatGPT do: they are pattern recognizers; they predict the next word, image, or sound in a sequence; they are trained on large data sets; they adjust their own algorithms as they go along, and so on. But take any one result, any output, and even the very people who designed the system are unable to explain why it produced the result it did. This is why many advanced AI models, particularly deep learning models, are often described as “black boxes”: we know what goes in (the data they are trained on and the prompts) and we know what comes out, but we have no idea what really goes on inside them.
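To make that abstract picture concrete, here is a minimal sketch of next-word prediction in Python. The toy corpus and the simple bigram counting approach are purely illustrative assumptions; systems like ChatGPT replace these inspectable counts with billions of learned neural-network weights.

```python
from collections import Counter, defaultdict

# Toy corpus; purely illustrative.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# A bigram model: count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # "cat" (seen twice, vs. once each for others)
print(predict_next("sat"))  # "on"
```

This toy model is fully transparent: you can print `following` and read off exactly why each prediction was made. The interpretability problem arises precisely because large models swap these readable counts for layers of weights that no one can narrate.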

Often referred to as the issue of interpretability, this is a significant challenge in the field of AI because it means the system is unable to explain or make clear the reasons behind its decisions, predictions, or actions. This may seem like an innocuous detail when asking ChatGPT to write a love letter in the style of your favorite poet, but less so when AI systems are used to make decisions with real-world impacts, such as whether you get shortlisted for a job, whether you get a bank loan, or whether you are sent to prison rather than granted bail – all of which are decisions already being outsourced to AI. When there’s no possibility of an explanation behind life-changing decisions, when the reasoning of machines (if any) is as opaque to the humans who make them as to the humans whose lives they alter, we are left with a bare “Computer says no” answer. When we don’t know the reason something has happened, we can’t argue back, we can’t challenge, and so we enter the Kafkaesque realm of the absurd, in which reason and rationality are entirely absent. This is the world we are sleepwalking towards.

___

It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case.

___

If you’re tuned into philosophy debates of the 20th century involving post-modern thinkers like Foucault and Derrida, or Critical Theory philosophers like Adorno and Horkheimer, perhaps you think the age of “Reason” is already over, or that it never really existed – it was simply another myth of the Enlightenment. The concept of a universal human faculty that dictates rational, logical, upright thought has been criticized and deconstructed many times over. The very idea that Immanuel Kant, a white 18th-century German philosopher, could come up with the rules of universal thought simply through introspection rings all kinds of alarm bells today. Accusations range from lazy armchair philosophy (though Kant was very aware of that problem) to racism. But we are currently entering an era that will make even the harshest critics of reason nostalgic for the good old days.

It’s one thing to talk about Reason as a universal, monolithic, philosophical ideal. But small ‘r’ reason and small ‘r’ rationality are intrinsic to almost all human interaction. Being able to offer justifications for our beliefs and our actions is key to who we are as a social species. It’s something we are taught as children. We have an ability to explain to others why we think what we think and why we do what we do. In other words, we are not black boxes: if asked, we can show our reasoning process to others.


Of course, that’s also not completely true. Humans aren’t entirely transparent, even to themselves, if we believe Nietzsche and Freud. The real reasons behind our thoughts and actions might be very different from the ones we tell ourselves and give to others. This discrepancy can have deep roots in our personal history, something psychoanalysis might attempt to uncover, or it might have social causes, as implied by a concept such as unconscious bias. If that’s the case, one could argue that humans are in fact worse than black boxes: they can offer misleading answers as to why they behave the way they do.

But despite the fact that our reasoning can sometimes be biased, faulty, or misleading, its very existence allows others to engage with it, pick holes in it, challenge us, and ultimately demonstrate why we might be wrong. What is more, being rational means that we can and should adjust our position when given good reason to do so. That’s something black-box AI systems can’t do.

It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case. Recognition of the issues that arise from outsourcing important decisions to machines, without being able to explain how those decisions were arrived at, has led to an effort to produce so-called Explainable AI: AI that is capable of explaining the rationale and results of what would otherwise be opaque algorithms.

The currently available versions of Explainable AI, however, are not without their problems. To begin with, the kind of explanations they offer are post hoc. Explainable AI typically comes in the form of a second algorithm that attempts to make sense of the results of the original, black-box algorithm. So even though we can be given an account of how the black-box algorithm might have arrived at its results, this is not in fact the way the actual results were reached. This amounts more to an audit of algorithms, a check that their results are not the product of problematic bias. It is closer to what is often referred to as interpretable AI: we can make sense of the results and check that they fulfill certain criteria, even if that’s not how the algorithm actually arrived at them.
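Here is a minimal sketch of one common post-hoc technique of this kind, a “global surrogate” model, using Python and scikit-learn. The synthetic dataset, model choices, and parameters are all illustrative assumptions, not a reconstruction of any deployed system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for, say, loan-application data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box": accurate, but its internals are hard to narrate.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc step: fit a small, readable tree to the black box's *outputs*,
# not to the ground truth. Its rules describe the black box after the fact.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable explanation agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate))  # human-readable if/then rules
```

The fidelity score measures how often the readable tree agrees with the black box, which is exactly the caveat above: the tree describes the black box’s behavior after the fact; it is not the process by which the answers were actually produced.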

___

The idea that only algorithms we can’t understand have the power to reveal hidden patterns in the data – patterns that mere mortals can’t detect – has a powerful pull, almost theological in nature: the idea of a higher intelligence than ours, one we can’t even begin to understand.

___

Another problem with Explainable AI is that it is widely believed that the more explainable an algorithm, the less accurate it is. The argument is that more accurate algorithms tend to be more complex and hence, almost by definition, harder to explain. This pits the virtue of explainability against that of accuracy, and it’s not clear that explainability wins such a clash. It’s important to be able to explain why an algorithm predicts that a criminal is likely to reoffend, but it’s arguably more important that such an algorithm doesn’t make mistakes in the first place.

However, computer scientist Cynthia Rudin argues that it’s a myth that accuracy and explainability are competing values when it comes to designing algorithms, and she has demonstrated that the results of black-box algorithms can be replicated by much simpler models. Rudin suggests that the argument for the epistemological advantage of black boxes hides the fact that the real advantage is a monetary one. There is a financial incentive in developing black-box algorithms: the more opaque an algorithm, the easier it is to profit from it and to prevent competitors from developing something similar. On Rudin’s account, complexity and opaqueness exist merely as a means of profiting from the algorithm, since its predictions can often be replicated by a much simpler, interpretable algorithm that would have been harder to sell precisely because of its simplicity. Furthermore, the cost of developing opaque algorithms might in fact be lower than that of developing interpretable ones, since the constraint of making an algorithm transparent to its users can make things harder for its designer.
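To illustrate the kind of comparison Rudin’s work invites, here is a sketch that trains a black-box model and a directly interpretable one side by side and compares their accuracy. The data is synthetic and the model choices are my assumptions; her actual studies concern real recidivism and credit-scoring datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular dataset (e.g. bail or credit decisions).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "black box (random forest)": RandomForestClassifier(n_estimators=200,
                                                        random_state=0),
    "interpretable (logistic regression)": LogisticRegression(max_iter=1000),
}

# Train both on the same data and compare held-out accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```

On many tabular problems of this kind the accuracy gap is small or nonexistent, and the logistic model’s coefficients can be read off directly, which is Rudin’s point: the opacity buys little, other than being harder to copy.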


But beyond the crude financial incentives for developing opaque algorithms lies something deeper: the mystique around black-box algorithms is itself part of their allure. The idea that only algorithms we can’t understand have the power to reveal hidden patterns in the data – patterns that mere mortals can’t detect – has a powerful pull, almost theological in nature: the idea of a higher intelligence than ours, one we can’t even begin to understand. Whether or not this is true, the veil of mystique surrounding AI inadvertently stifles the idea that regulation is even possible.

One of the best examples of our reverence for such artificial intelligence, but also of the absurdity that results from outsourcing important questions to machine processes we don’t fully understand, is found in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. When the supercomputer Deep Thought is tasked with finding the answer to “the ultimate question of life, the universe and everything”, it takes 7.5 million years to complete the task. When it finally reveals the answer, it is simply “42”.

If we can’t see properly inside AI’s black boxes, we should stop asking them important questions and outsourcing high-stakes decisions to them. Regulators need to press developers to produce Explainable AI systems, ones whose reasoning we can make sense of. The alternative is to live in a social world without rhyme or reason, a world of absurd answers we can’t question.
