Opening the black box

Transparency in AI is not enough

As artificial intelligence becomes intricately interwoven with our daily experiences, influencing both personal and societal decisions, mere transparency in such AI systems falls short of being the ultimate solution to their inherent value-ladenness, writes Rune Nyrup.


AI is the future. Or so it would appear from countless press releases and strategies emanating from the tech industry and governments alike. And indeed, AI-driven technology has become ubiquitous, whether in the form of applications like ChatGPT and DALL-E that sit visibly in our palms generating text and images, or algorithms that work discreetly to personalise adverts, estimate our credit scores and much else besides, drawing on data streams we are scarcely aware of. Whatever we call these technologies, increasing digitalisation has reshaped, and will continue to reshape, our individual and collective lives.

Technological change is always value-laden. That is, how we design and implement technologies will always promote specific social and political values, whether unintentionally or by design. Philosophers of technology have long argued this point, among them Langdon Winner, who in his classic essay “Do Artifacts Have Politics?” explored how building low overpass bridges across the roads into a certain neighbourhood makes the area inaccessible to buses. The design itself embodies a preference for private motorists over public transport, which in turn disadvantages those who rely on public transport, such as poorer or disabled people, and this disadvantage correlates with other dimensions of marginalisation, including race, ethnicity, age and class.

While this point is clear enough in the case of physical infrastructure, it applies no less to the digital infrastructures that are increasingly embedded in our lives. As Jenny Davis puts it, digital technologies variously demand, refuse, request, allow, encourage or discourage certain kinds of actions from different kinds of people. Social media, for instance, doesn’t just provide us with a means to communicate: the very design of these sites embodies philosophies about what a person is and how we should value and relate to one another. Artificial intelligence is no exception. How we design and deploy AI will inevitably involve value-laden choices about what it should be possible and convenient to do, and for whom.

___

For one thing, transparency alone does not enable autonomous choice or democratic deliberation.

___

How, then, should we manage the value-ladenness of AI? One solution commonly proposed by policymakers, NGOs and private companies when discussing the ethical challenges of AI has been to focus on transparency. Making explicit how AI systems function and the value choices that go into their design, so the idea goes, will enable people to make informed choices about how to use AI, allow regulators to hold developers accountable and facilitate public debate about how these technologies should be deployed.

