We have begun to regulate AI, or so many governments think. We have frameworks from the Alan Turing Institute and UNESCO which claim to be the guard rails for future development. But Wen Zheng argues these are papier-mâché: they fail to address the dangers of AGI and the future of the field. Even today's AI cannot be hemmed in by these guides, let alone the greatest threat we may yet face.
The threat from ChatGPT is overblown. The dangers of statistical models, built by scraping the internet and then telling the result off for racist outbursts, are low, or at least manageable. And the principles we use to govern these systems are fine. But this is not artificial intelligence. That marketing term does not mean Skynet has arrived. Nor does it mean we must seriously start devising new ethics to limit it and our use of it. But the threat is coming. The dangers are around the corner, and it isn't hyperbole to say that once Skynet really is here, we will be at its mercy.
Whilst the Terminator is not a model for how we should understand AI, and no sci-fi can really tell us what the world will be like, the future could still be bleak. The current statistical models which we call AI provide a new tool that could revolutionise all aspects of our lives. The use of AI assistants and text generators has already upended most of what we thought the world would look like in ten years. We won't get to spend our twilight years, as climate change ravages our world, drawing and writing poetry. Instead, we need to acknowledge that the trade-offs we have already begun to make don't fit the ethical frameworks we are used to.
I will take you through some of the UNESCO framework for AI ethics. The framework was adopted by over 190 countries in a futile attempt to mitigate the dangers of these unintelligent AIs. Yet even these principles aren't enough. Once we move from mere statistical models to AGI, an artificial general intelligence, we face the risk of having to deal with a real moral agent: something, or perhaps someone, with whom we are going to have to reason. Take a look and see if you trust these principles to protect you and your family from the smartest, most powerful, most adaptive beings we can realistically imagine unleashing on the world.
Principle 1: Proportionality and Do No Harm
This principle, seemingly plucked from the Hippocratic oath, is the central tenet of AI ethics. But we can't even hold doctors to it, so how do we think it will hold for AGI? Even your computer, pushing around ones and zeros to present the rest of the world to you on a screen, makes mistakes. Doctors, for all their virtues, are no better. Avoiding unintentional harm requires you to think of every eventuality. But no one is saying AGI will be a god. No AGI will be able to take a god's-eye view of every eventuality; it is bound to make mistakes. Worse, how do we communicate what we think a harm is to a bodiless entity, one which has nothing to fear but the power being turned off? Context matters, subjectivity matters, and we have no idea how we can balance these risks, or how to explain them to something with its own new agency.
We start off on this journey and we are already in the weeds of the most contested areas of philosophical thinking. Which philosopher do you trust to come up with an ethics for the entire world? Having met a few, I wouldn't always trust them with their own well-being, let alone with training up the most powerful agent we can realistically imagine. No toddler can distribute itself and its mischief across the world in minutes. No toddler would be able to interfere with nuclear codes, bar the ones in the White House. For now, we have no clear way of doing no harm and no measure of proportionality. I'm not arguing for a Dune-style renouncement of AI, but rather for a measure of fear which developers and marketing agencies have somehow undermined. And so we can move on to the second principle UNESCO says will be a guardrail for our collective futures.
Principle 2: Safety and Security
Much like the commandments Moses brought down from the mountain, the second principle is an extension of the first. Once again, we face the question of what counts as a safe and secure development strategy. Given the limitations of the current AI marketplace, models waiting for input from users, these systems are only as dangerous as we allow them to be. Trusting people is an unreassuringly weak safeguard: the misuse of this technology to spoof voices and sow misinformation is already increasing.
It is impossible to know all the potential risks and vulnerabilities of this technology. Software engineers at Samsung have already made the stupid error of pasting proprietary code into ChatGPT. What else will we inadvertently feed to AGI, and what else will it find in the insecure collection of servers and computers that have access to the internet? Can we safely give information to AI, let alone AGI, without exposing it to the worst excesses of our societies?
I do not have an answer to these questions. I strongly suspect that no one else does either. Moving on to three more of the principles, we again see that even the current statistical models haven't been hemmed in by our principles of safety. Whilst I think these last three get to the heart of the moral questions, the practical questions should occupy any reader paying attention.
Principle 5: Responsibility and Accountability
Ensuring responsibility and accountability for AI systems is a critical principle. However, implementing effective oversight and auditing mechanisms is a challenge. The complexity of AI systems and the potential for unintended consequences make it difficult to assign responsibility and determine accountability. We cannot blame a statistical model for its behaviour on anything other than the inputs we gave it. In the same way, moral responsibility for one's actions is also highly influenced by one's environment. But we think that an AGI will be an agent able to reflect on its actions. Who, then, is to blame when an AGI acts irresponsibly? To continue the metaphor used above, who is in charge of this toddler wandering around? The creators of an AGI will of course be responsible, but we can't leave it to them alone. An irresponsible or dangerously negligent parent, unable to control or teach their child, faces consequences. But do we think the teams of people working on AGI are culpable in this way?
Developing a comprehensive framework that defines clear roles and responsibilities while accounting for the unique characteristics of AI systems is crucial. It might take a village to build an AGI, but can one raise it? Even the idea of a village raising this child is worrying. Which village, with what moral code? And do we believe we know how they will impart that code to this new child?
Principle 6: Transparency and Explainability
Transparency and explainability are desirable in AI systems, but there are trade-offs with privacy and security. Striving for complete transparency and explainability may hinder the development of more advanced AI models that operate as complex black boxes. Balancing transparency with the efficiency and effectiveness of AI systems is a challenge, but one that must be faced.
As I've tried to show, these principles have yet to face the complex need both to confront the dangers of an AGI and to acknowledge the relative tameness of current AI. And if the AGI is going to be at least similar to the complex black box of the mind, then we are in trouble. I would like to see an AGI; the innovation would be groundbreaking. It would open up a new world of conversation about what it means to have a mind, to be conscious, and to have agency. But explainability is already difficult with our simple, unintelligent AIs. We face a hundred questions we have yet to answer about our own minds. The first AGI will be unexplainable; it will be beyond anything we can comprehend, unless we simply treat it as an emergent phenomenon of complex systems. But is that really an explanation? For the moment it is the best one we can provide. The transparency requirement, then, just seems to demand that we reveal our ignorance. Is that really a safeguard for anything? I think not. And so the guard rails seem to be made of papier-mâché. UNESCO is aiming to protect us, but these principles don't go far enough.
Principle 7: Human Oversight and Determination
Ensuring human oversight and maintaining human responsibility is important in the deployment of AI systems. However, the principle does not provide clear guidance on the extent or nature of human involvement. Determining the appropriate level of human oversight, and striking a balance between human judgment and automated decision-making, is challenging. What specific guidelines could prevent the most doomsday outcome?
The path forward is filled with dangers that we can't predict, ones where our ability to have an input runs up against the fact that we are aiming for something with its own agency. We no longer look to be making tools. We want to make life of a virtual form. At that point, beyond creating environments where this virtual life is not a threat, do we have any path that might save us from Skynet: an all-powerful AGI, one that means either to protect itself or, perhaps more terrifyingly, to do us harm?
To conclude, we need a more radical critique. Are these principles merely band-aid solutions that aim to regulate an inherently flawed system? The answer seems to be an obvious yes. We must fundamentally re-evaluate the very foundations of AI development and deployment. This requires a shift towards interdisciplinary collaboration, engaging philosophers, sociologists, anthropologists, and other experts to reshape the ethical framework underlying AI. Acknowledging that we are on the brink of creating something completely new is essential to dealing with this danger. Ultimately, we must challenge the underlying assumptions of AI ethics and reimagine a framework that prioritises human well-being, societal harmony, and the equitable distribution of benefits.