The recent corporate drama at OpenAI, with the firing of its CEO Sam Altman by its board only for him to be reinstated days later, highlights the weakness of corporations, caught up in internal politics, when it comes to dealing with the risks around artificial intelligence. But governments are not the answer either: sluggish and bureaucratic, their regulations always lag behind, always playing catch-up. Instead, democratic accountability of AI companies through citizens' assemblies is the way forward, argue Hélène Landemore and John Tasioulas.
The drama of the last week at OpenAI has been entertaining, to say the least. But it also signals that it is high time to have a conversation about AI politics and, specifically, the governance of AI companies, perhaps as distinct from any other type of corporate governance or perhaps as an inspiration for all companies going forward.
OpenAI is legally structured as a non-profit devoted to the development of AI for the benefit of humanity. This legal status is meant to prevent economic goals from trumping human interests, which is reassuring in theory. Preserving human interests may well have been the motivation for the original board firing Altman in the first place (at least according to some reports).
Unfortunately, the non-profit status is not by itself strong enough to preserve human interests. It is not even clear that it is strictly necessary, since preservation of humanity should operate as an overarching constraint on all companies, including for-profit companies. The question is then: how come a board appointed to preserve human interests couldn’t do its job? We believe that one of the reasons why the original board was so easily steamrolled and its decision to fire Altman reversed—regardless of the merits of the decision itself, on which we can stay agnostic—was because it lacked legitimacy, namely the authority to issue binding orders.
From this point of view, this failure of governance teaches us that accountability of powerful corporations is too important to be left to their own appointed board members, let alone the sluggish bureaucracy of state regulation, and that a more legitimate form of corporate governance, one that has better claims to represent humanity’s interests, is a better hope for containing the risks AI poses to our future.
Accountability is key to legitimacy, in politics and elsewhere, and mechanisms for accountability were lacking from OpenAI’s governance structure. It is alarming that a company and its people, and incidentally the rest of us, are at the mercy of just four board members’ decisions—four people who so far haven’t explained themselves except in the vaguest of terms (Altman wasn’t “candid” enough in his communication—whatever that means).
A basic accountability mechanism for any organization is deliberative accountability, i.e., the requirement that a group of decision-makers provide reasons for its decisions. The previous board did not seem to have been accountable in that sense to anyone, whether internally (toward employees) or externally (toward the general public).
Another accountability mechanism is the inclusion of the voices of those potentially affected. Involving employees, for one thing, would have slowed down the board in their decision-making process. For all its advantages, the speed and efficiency of concentrated power also increase the chances of making dumb decisions. And speed and efficiency get you nowhere fast if all your decision does is trigger immediate, massive pushback, as we see time and time again in the world of politics more broadly.
Admittedly, employees had a conflict of interest in this particular case—as there was a major cash payout on the horizon, which may partly explain their massive rally behind Altman. But all those engineers presumably also had a first-hand understanding of what they were building and whether or not it was genuinely a threat to humanity. If it is indeed the case that “OpenAI is nothing without its people,” the employees’ point of view should have mattered more, before the decision was taken.
Beyond OpenAI employees, ChatGPT users and the whole wide world—8 billion people—arguably had a stake in the board’s decision and might have wanted a say too. This might seem fanciful, but Sam Altman himself first floated the idea of a global deliberation involving all humans on the matter of AI regulation. OpenAI recently launched a grant program called “Democratic Inputs to AI”—an initiative that recognizes the vast societal impacts of AI and aims to involve a broader spectrum of voices in its development and governance (full disclosure: one of us is on the board of academic advisors for that program). Even Meta, not particularly known for its democratic commitments, recently organized so-called Community Forums involving thousands of randomly selected Meta users in online deliberations in order to get their informed views on cyberbullying in the Metaverse and AI regulation.
Yet, somehow, key decisions at the top level of OpenAI were, and for now likely will remain, shaped by input from only a very few people, all drawn from the tiniest circles of Western elites. Is it still possible in the 21st century to decide for people without consulting them? Can there be production of AI for the benefit of humanity without humanity’s input?
A more democratic representation of all affected interests matters
This brings us to another key point. Who has the right to represent humanity? The few people on the OpenAI old board, as well as the people on the new board, are all scientists and technologists, appointed on the grounds of their scientific expertise. But while scientific knowledge is a source of authority, it is much more questionable whether it gives anyone the right to speak on behalf of 8 billion people. If war is too serious a topic to be left to the generals, AI is certainly too important to be left to the experts.
Some think the solution lies in domestic and international regulations, and the equivalent of a global FDA for AI production. Putting in place such safety nets would be a good thing. But, first of all, they’ll never be enough. External regulation can be slow, clumsy, overly generic or too specific, and sometimes counterproductive. In a field as fast-moving as AI, external regulators are usually condemned to fighting the last war. Second and perhaps even more importantly, international governance structures, as well as the independent agencies they authorize, are neither democratic nor terribly representative of humanity’s interests, being mired in the long legacies of past power imbalances, including between the Global North and the Global South.
What is needed, also and more urgently, is democratic representation of humanity in the room where key company decisions are made, namely on or at the level of the company board of administrators. The production of AI for the benefit of humanity cannot happen without humanity itself, or at the very least credible representatives of it. Does this call for global elections of some sort? No. Elections are only one way to select representatives (and a very costly, corruption-prone, and biased one at that). One potential experiment in democratic representation would instead be the creation of global citizens’ assemblies appointed on the basis of “one person, one lottery ticket.”
Citizens’ assemblies are now a well-established process through which local and national governments—including those of Ireland, France, Belgium and Canada—seek to remedy their legitimacy deficit and their governance issues. They are large bodies of ordinary people, typically convened (sometimes physically, sometimes virtually) to deliberate about complex political issues with the goal of producing policy recommendations and in some cases even legislative proposals. A recent OECD report has documented hundreds of such assemblies (or similar processes) around the world.
Since they draw from a random sample, usually with some stratification, citizens’ assemblies will be 50% women. They will include a wide range of nationalities and ethnicities. They will include engineers and the occasional CEO, but also teachers, Uber drivers, farmers, healthcare workers, stay-at-home parents, journalists, builders, people working in call centers or for Amazon Mechanical Turk, etc.—all of whom will be affected by AI development in myriad ways. As a group they offer an imperfect but better and more democratic representation of humanity than any current company board can claim.
In the context of corporations, such bodies could play an advisory role, but they could also help set an agenda for the board, based on their own internal deliberations. They could also be empowered to veto business decisions, for example if such decisions are judged to violate key human values, including those entrenched in a democratically written constitution for AI (a project currently explored by the company Anthropic). Like existing citizens’ assemblies in other contexts, such assemblies would be supported by the work of experts like Helen Toner or Larry Summers. But they would not be controlled by or subordinated to them.
A global citizens’ assembly, regularly rotated (perhaps every year or so), could constitute the totality of the board’s structure, or it could be an element of it, or it could be an accountability mechanism external to it. The point is that the deliberations of these bodies of ordinary humans could shape a company’s decision-making process and help keep it in line with the whole range of human interests, norms, and values as they already exist and, importantly, keep evolving over time.
To achieve maximal legitimacy, such citizens’ assemblies should themselves be connected to the larger public, and humanity as a whole, through various online and offline consultation mechanisms, some of which could be uniquely enabled by AI tools. AI technology can indeed uniquely help scale deliberation and participation. Among the many tasks that AI tools can fulfill are selection and allotment of participants to citizens’ assemblies (an algorithm called LEXIMIN designed by MIT scientists is already available and in use), facilitation of deliberation (Meta’s Community Forums pioneered that function), summarization of deliberations, and aggregation of vast amounts of online input (the pol.is system does this routinely for the Taiwanese government). And that’s just the beginning.
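To make the selection idea concrete, here is a minimal sketch of stratified random selection ("one person, one lottery ticket" with quotas). This is a simplified toy illustration, not the LEXIMIN algorithm mentioned above—real selection systems must balance many overlapping quotas (gender, age, region, occupation) simultaneously. The pool entries and quota numbers are invented for the example.

```python
import random

def select_assembly(pool, quotas, seed=0):
    """Draw assembly members so that each stratum fills its quota.

    pool:   list of dicts, each tagged with a single 'stratum' key
            (a toy simplification; real participants belong to
            several strata at once).
    quotas: dict mapping stratum -> number of seats reserved for it.
    """
    rng = random.Random(seed)  # fixed seed only for reproducibility
    assembly = []
    for stratum, seats in quotas.items():
        candidates = [p for p in pool if p["stratum"] == stratum]
        # Every eligible candidate in the stratum has an equal
        # chance: "one person, one lottery ticket."
        assembly.extend(rng.sample(candidates, seats))
    return assembly

# Hypothetical pool of 100 volunteers, half tagged in each stratum.
pool = [{"id": i, "stratum": "women" if i % 2 else "men"}
        for i in range(100)]
assembly = select_assembly(pool, {"women": 5, "men": 5})
print(len(assembly))  # a 10-member assembly, 5 seats per stratum
```

Within each stratum the draw is a uniform lottery, so the resulting body mirrors the quota structure by construction while remaining random within it—the basic property that makes sortition both representative and hard to capture.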
There are many takeaways from the OpenAI episode, including the dangers of the CEO personality cult and the gendered dimension of the conflict, pitting the boys' club of bold entrepreneurs against the pesky women who cared about human safety and honesty in communication. Many of these problems would arguably be attenuated if there were better and more democratic representation at the top. There might even be a model of more democratic governance here for other companies to explore, even if their products are nowhere near as civilization-threatening as AI might one day become. As we see with the use of citizens’ assemblies in the political context, more inclusive decision-making does not necessarily mean importing conflict and decision paralysis. On the contrary, it usually leads to better and more legitimate decisions that are also less likely to attract populist backlash.