Opening the black box

Transparency in AI is not enough

As artificial intelligence becomes ever more interwoven with our daily lives, influencing both personal and societal decisions, mere transparency in AI systems falls short as a solution to their inherent value-ladenness, writes Rune Nyrup.


AI is the future. Or so it would appear from countless press releases and strategies emanating from the tech industry and governments alike. And indeed, AI-driven technology has become ubiquitous, whether in the form of applications like ChatGPT and DALL-E that sit visibly in our palms generating text and images, or algorithms that work discreetly to personalise adverts, estimate our credit scores and much else besides, based on data streams we are scarcely aware of. Regardless of what we call these technologies, increasing digitalisation has reshaped our individual and collective lives, and will continue to do so.

Technological change is always value-laden. That is, how we design and implement technologies will always promote specific social and political values, whether unintentionally or by design. This has long been argued by philosophers of technology such as Langdon Winner, who in his classic essay “Do Artifacts Have Politics?” explored how building low overpass bridges across the roads into a neighbourhood makes the area inaccessible to buses. The design itself embodies a preference for private motorists over public transport, and thereby a bias against those who rely on public transport, such as poorer or disabled people, a disadvantage that in turn correlates with other dimensions of marginalisation, including race, ethnicity, age and class.
