Decoding digital prejudice

AI for real humans

Our fictionalised fears that AI is going to take over the world and replace humans have driven much of the discussion around AI ethics. There is much talk of the need to put “the human” and “human values” at the centre when developing AI technology. But “the human” is a concept with a dark history, used to impose the values of some as “universal” while excluding all others. At the same time, “the human” does not exist separately from technology; it is shaped by it. A richer and more inclusive understanding of “the human”, together with an acknowledgement of our interdependence with the technology we develop, are necessary steps towards developing AI that serves us all, writes Eleanor Drage.


The human: an elusive, critical term in philosophy, and an even trickier one in the multi-billion-dollar market of artificial intelligence (AI). In its ‘ethical’ sub-field, the increasingly saturated and lucrative domain of ‘AI Ethics’, the conversation around what ‘the human’ is in relation to non-human others (technology, ecology, other kinds of animal life, waste) has reached a heightened pitch. It is populated by conflicting debates over “universal values”, “human values”, “human-computer interaction” and “human-AI ecosystems”. At stake is the development and design of AIs that have increasingly intimate interactions with humans: from the chatbots that help the World Food Programme collect data and the AI-powered recruitment and re-skilling tools that are reshaping the workforce, to the AIs used in care homes and those that assist in medical imaging. Humans and technology have always shaped one another, but AI is unique in its ability to bring about the reality that it purports merely to ‘describe’, ‘identify’ or ‘predict’. This capacity has already infiltrated all areas of public and private life, with room for exponential future expansion.

The dominant question in AI ethics has therefore been: “how do we defend ‘the human’ against the onset of AI?”. It underpins transhumanist thought and the study of ‘existential risk’, and has become widespread in public perceptions of AI. These perceptions have been particularly influenced by what researchers from the Royal Society and the Leverhulme Centre for the Future of Intelligence have called “a prevalence of narratives of fear”. Mainstream science fiction has made a lucrative business of exacerbating this fearmongering, as evidenced by the success of the Terminator franchise. While we are perhaps used to seeing AI as a potential source of concern, ‘the human’ as it has traditionally been conceptualised (white, male, able-bodied, Western) perhaps engenders a greater danger.

When ethicists seek “universal values” with which to regulate the use of AI, we should worry that this takes us back in time, to the 18th century Enlightenment quest for “universal Man”


The dark history of “the human”

When ethicists seek “universal values” with which to regulate the use of AI, we should worry that this takes us back in time, to the 18th century Enlightenment quest for “universal Man”: a white, male, able-bodied European whose universality depended on the disavowal of the humanity of racialised, gendered and disabled others. If we still want to lay claim to a universal ‘humanity’, we must shift our vantage point away from this European subject, who has historically indexed exclusions: women, the incarcerated, the enslaved, the colonised. Zakiyyah Iman Jackson has taught us that “the concept of humanity itself is fractured and relational”. Not relative, but relational: our humanity emerges or is denied in concert with others, and is calculated according to social hierarchies and capitalist imperatives.

Unfortunately, the assumption that the male European human is equivalent to or stands for all of humanity has resulted in the development of violent and exclusionary AI: facial recognition that makes Black faces hypervisible to law enforcement yet illegible to the border-control systems that validate subjects as citizens; AI-powered soap dispensers that work for white hands but not darker ones; AI devices that don’t work at all for people with disabilities. When we deem a system ‘functional’, we should ask who it functions best for, who it serves less well, and who it actively puts at risk. These questions were posed by Timnit Gebru and Margaret Mitchell in a paper asking whether large language models, such as those developed by Google, might be dangerous if they became too big; both researchers were subsequently dismissed from the company. The firing of Gebru and Mitchell bore testimony to the company’s superficial interest in appearing ethical, an interest never pursued at the expense of its singular commercial prerogative: profit.

Philosophers of technology, race and gender from Catherine D’Ignazio and Ruha Benjamin to Simone Browne and Wendy Chun have reconfigured AI ethics to ask the following questions: Whose knowledge is reflected in the way a system functions, and whose has been excluded? Who is most exposed by the system? The answers point towards a technohuman configuration in which what it means to be ‘human’ hangs in the balance.

Unfortunately, the assumption that the male European human is equivalent to or stands for all of humanity has resulted in the development of violent and exclusionary AI


The shaping of “the human” by technology

Technology defines and creates ‘the human’ in particular ways. As the above examples demonstrate, it does this by racialising and gendering us, and by establishing the bounded limits of the human. The eminent and recently deceased French philosopher of technology Bernard Stiegler argued that technology is always more than a mere ‘tool’ for humans, because humanity is co-constituted through technology: it makes us who we are, and neither entity can ever be fully distinct from the other. This insight, which draws on Heidegger’s essay The Question Concerning Technology, continues to find support in cognitive neuroscience, which has shown that technology affects the development and evolution of the human body.

Stiegler contended that the most important process in human evolution came when primates learned to balance on two feet, freeing up hands inherited from fish fins for tool use. From there, technology irreversibly informed our development as a species. Stone-tipped spears, a strong grip and nimble fingers secured our ancestors the food energy needed to grow bigger brains. The discovery of the Olduvai Hominid 7 hand in 1960 showed how early tool use generated precise and powerful human hands. Today, the jury is still out on the long-term effects of new technologies on the body, and debates rage in neuroscience over whether the computer is the body’s poison or its remedy: is the computer’s impairment of memory and augmentation of attention-deficit symptoms more significant than its ability to improve knowledge and cognitive abilities? If AI becomes an increasingly intimate part of human existence, will the boundaries between offline and online become so blurred that the human brain’s default rest state ceases to be ‘offline’? More likely, those who can afford it will live an ever more AI-integrated life while the rest provide the manual ‘offline’ labour that supports their functionality. In either scenario, as the historical record suggests, the human will always owe its humanity to technology, from which it is never fully distinct.

That the human is inextricably linked to other ‘non-human’ entities, and that this makes ‘us’ ‘never fully human’, or ‘more than merely human’, is a suggestion also native to feminist posthumanism and indigenous knowledges. N. Katherine Hayles has argued that human information processing is dependent on non-conscious processes, for example when ‘electrical voltages are transformed into a bit stream within a computational medium’, or when water makes a human body or a hydroelectric power system functional. Rosi Braidotti has used the concept of the ‘posthuman’ to describe how human subjectivity under advanced capitalism is non-unitary, dynamic and changing, and claims that the unified subject of traditional humanisms is inadequate to express this mode of contemporary existence. And Kim TallBear has highlighted some of the devastating ecological consequences of Western ideas about the human/non-human divide: by separating the human out from everything else occupying the planet, we have privileged human needs over our fragile ecosystem, disrupting sustainable processes and driving mass extinction. These ecologically minded interventions stand in contrast to John Locke’s ‘vacuum domicilium’, which was used to justify colonial land grabs on the basis that property was a natural right and that land could become the property of the individual if exploited in a way recognisable to the European coloniser.

Unlike many AI ethicists, thinkers like TallBear, Braidotti and Hayles have dedicated decades to exposing the inadequacy of a universalised notion of the ‘human’. In particular, this feminist work has critiqued what ‘the human’ as a concept has signified and effected over the course of history: as Braidotti says, we can no longer go by the assumption that ‘the human’ is a neutral, universal category applied to all people equally. The uneven applications of ‘humanity’ still scar our racialised, gendered and ableist present.

If AI becomes an increasingly intimate part of human existence, will the boundaries between offline and online become so blurred that the human brain’s default rest state ceases to be ‘offline’?


Towards a new vision of “the human”?

We might ask, then: can we think ‘the human’ outside of an exclusive and exclusionary ‘we’? Is ‘the human’ worth salvaging, or should it be left behind? Paul Gilroy, a leading historian and theorist of race and racism, believes that the idea of ‘humanity’ can be salvaged through a new form of humanism. Part of this work, he says, is returning to the archives to establish a new genealogy of human rights, one that stems from the history of European conquest and focuses on the management of colonies and the plantation economy. It should explore why debates raged over whether indigenous communities should have property rights, and why the ‘free human subject’ hung in the balance. It should ground itself in the contributions of abolitionist texts, which were forced to rework the European literature on human rights in order to include oppressed populations. Perhaps, then, AI ethics can redeem itself from the amnesia of its uncritical use of “the human”, “human values” and “universal values” by acknowledging that an inclusive plea for equal rights for all humans, including Black Americans and women of all ethnicities, one exorcised of colonial greed, is indebted to abolitionist movements.

As our understanding of how AI figures in systemic sexism, ableism and racism increases, and as we become more aware of how human agency is bound up with non-human processes, we must question how technology shapes our recognition of a person’s ‘humanity’. Bringing feminist, postcolonial, critical race scholarship and indigenous knowledge to bear on the question of how social systems and values interact with technology is essential when considering ‘the human’ in the age of AI.

The Indigenous Protocol and Artificial Intelligence Working Group has provided a fine example of imaginative, radical, indigenous-centred AI design that does not place the human at its centre. Collectively, the team conceives of future AI that adheres to indigenous protocols, such as AI that protects the data of indigenous communities and AI whose component materials are considered non-human kin, with needs and protections of their own. They have shown that there is a way of practising AI ethics that de-centres and complicates what we mean by ‘the human’. As humans engage in ever more intimate ways with AI, a non-human-centred, ecologically responsible ethics, one that explores how different forms of life sustain one another, may in fact be far better equipped to describe and design human-AI relationships in the future.
