Our fictionalised fears that AI is going to take over the world and replace humans have driven much of the discussion around AI ethics. There is much talk of the need to put “the human” and “human values” at the centre when developing AI technology. But “the human” is a concept with a dark history, used to impose the values of some as “universal” while excluding all others. At the same time, “the human” does not exist separately from technology; it is shaped by it. A richer and more inclusive understanding of “the human”, as well as an acknowledgement of our interdependence with the technology we develop, are necessary steps towards developing AI that serves us all, writes Eleanor Drage.
The human: an elusive, critical term in philosophy, and an even trickier one in the multi-billion dollar market of artificial intelligence (AI). In its ‘ethical’ sub-field, the increasingly saturated and lucrative domain of ‘AI Ethics’, the conversation around what ‘the human’ is in relation to non-human others (technology, ecology, other kinds of animal life, waste) has reached a heightened pitch. It is populated by conflicting debates over “universal values”, “human values”, “human-computer interaction” and “human-AI ecosystems”. At stake is the development and design of AIs that have increasingly intimate interactions with humans: from the chatbots that help the World Food Programme collect data and the AI-powered recruitment and re-skilling tools that are reshaping the workforce, to the AIs used in care homes and those that assist in medical imaging. Humans and technology have always shaped one another, but AI is unique in its ability to bring about the reality that it purports to ‘describe’, ‘identify’ or ‘predict’. This capacity has already infiltrated all areas of public and private life, with room for exponential future expansion.
The dominant question in AI ethics has therefore been, “how do we defend ‘the human’ against the onset of AI?”. It underpins Transhumanist thought and the study of ‘existential risk’, and has become widespread in public perceptions of AI. The latter have been particularly influenced by what researchers from the Royal Society and the Leverhulme Centre for the Future of Intelligence have called “a prevalence of narratives of fear”. Mainstream science fiction has made a lucrative business of exacerbating AI fearmongering, as evidenced by the success of the Terminator franchise. While we are perhaps used to seeing AI as a potential source of concern, ‘the human’ as it has traditionally been conceptualised – white, male, able-bodied, Western – perhaps engenders a greater danger.
When ethicists seek “universal values” with which to regulate the use of AI, we should worry that this takes us back in time, to the 18th-century Enlightenment quest for “universal Man”: a white, male, able-bodied European whose universality depended on the disavowal of the humanity of racialised, gendered and disabled others. If we still want to lay claim to a universal ‘humanity’, we must shift our vantage point away from this European subject who has historically indexed exclusions: women, the incarcerated, the enslaved, the colonised. Zakiyyah Iman Jackson has taught us that “the concept of humanity itself is fractured and relational” – not relative, but relational – our humanity emerges or is denied in concert with others. It is calculated according to social hierarchies and capitalist imperatives.