We need to democratize AI

Too important to leave to corporations or the government.

The recent corporate drama at OpenAI, with the firing of its CEO Sam Altman by its board only for him to be reinstated days later, highlights the weakness of corporations, caught up in internal politics, when it comes to dealing with the risks around artificial intelligence. But governments are not the answer either: sluggish and bureaucratic, their regulations are always playing catch-up. Instead, democratic accountability of AI companies through citizens' assemblies is the way forward, argue Hélène Landemore and John Tasioulas.

 

The drama of the last week at OpenAI has been entertaining, to say the least. But it also signals that it is high time to have a conversation about AI politics and, specifically, the governance of AI companies, perhaps as distinct from any other type of corporate governance or perhaps as an inspiration for all companies going forward.

OpenAI is legally structured as a non-profit devoted to the development of AI for the benefit of humanity. The legal status is meant to prevent economic goals from trumping human interests, which is reassuring in theory. Preserving human interests might well have been the motivation for the original board firing Altman in the first place (at least according to some reports).

Unfortunately, non-profit status is not by itself strong enough to preserve human interests. It is not even clear that it is strictly necessary, since the preservation of humanity should operate as an overarching constraint on all companies, including for-profit ones. The question, then, is why a board appointed to preserve human interests couldn't do its job. We believe that one reason the original board was so easily steamrolled and its decision to fire Altman reversed—regardless of the merits of the decision itself, on which we can stay agnostic—is that it lacked legitimacy, namely the authority to issue binding orders.


From this point of view, this failure of governance teaches us that accountability of powerful corporations is too important to be left to their own appointed board members, let alone the sluggish bureaucracy of state regulation, and that a more legitimate form of corporate governance, one that has better claims to represent humanity’s interests, is a better hope for containing the risks AI poses to our future.

 

Accountability matters

Accountability is key to legitimacy, in politics and elsewhere, and mechanisms for accountability were lacking from OpenAI’s governance structure. It is alarming that a company and its people, and incidentally the rest of us, are at the mercy of just four board members’ decisions—four people who so far haven’t explained themselves except in the vaguest of terms (Altman wasn’t “candid” enough in his communication—whatever that means).

___

For all its advantages, the speed and efficiency of concentrated power also increase the chances of making dumb decisions.

___

A basic accountability mechanism for any organization is deliberative accountability, i.e., the requirement that a group of decision-makers provide reasons for its decisions. The previous board did not seem to have been accountable in that sense to anyone, whether internally (toward employees) or externally (toward the general public).
