Do no harm: AI and medical racism

Overturning the dark legacy of the Enlightenment

The troubling legacy of medical racism is still alive today. A recent study found that people of colour were 29% less likely to receive regional anaesthesia for surgery than white patients. In the quest to harness Artificial Intelligence in healthcare, medical racism has crept into the algorithms that shape medical decisions today. Arshin Adib-Moghaddam traces these deep-seated biases of healthcare systems all the way back to the Enlightenment era and race science, but offers hope and a blueprint for eliminating corrupt data moving forward.

 

 

Can we trust our hospitals and doctors? Is medicine a neutral science? These are some of the questions that need to be addressed before Artificial Intelligence (AI) is fully integrated into our health-care sector. As I have argued in a new book, the AI algorithms governing our lives are prone to repeating the mistakes of the past. Every aspect of contemporary society, certainly in highly technologised settings such as health care, banking and education, is already affected and increasingly shaped by AI. Unfortunately, a painful history of discrimination and outright racism against minorities is part of that process, including in sciences considered "neutral", such as medicine.

___

We have to understand that in the birthplaces of western modernity, certainly also in the United States, medicine evolved in close conjunction with the “science” of racism.

___

Medical Racism

Racism as a "science" was a distinct invention of the European Enlightenment and of western modernity more generally. In laboratories stacked with the skulls of Homo sapiens, the idea was concocted that the (heterosexual) "White Man" was destined to save humanity from the barbarism of the inferior races. In short, medical racism framed and enabled Empire and colonialism. For instance, James Marion Sims, a 19th-century surgeon widely considered one of the founders of modern gynaecology, developed a treatment for vesicovaginal fistulas, a condition that affects bladder control and fertility in women. In his experiments between 1845 and 1849, Sims carried out surgeries on a dozen enslaved women without using any anaesthetic. He subscribed to the then-common misconception that Black people could endure more pain than white people. This view is still present in the field of medicine and feeds into the data behind AI algorithms. For example, recent research has shown that a prominent health-care algorithm used to determine which patients need more medical attention favoured white patients over Black patients who were sicker and had more severe chronic health issues.
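Research into that case suggested the disparity arose because the algorithm used past health-care costs as a proxy for medical need; since less money has historically been spent on Black patients with the same conditions, equally sick patients received lower risk scores. The following minimal sketch in Python illustrates the mechanism; all numbers and the 30% spending gap are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally sick populations: identical underlying illness burden.
illness = rng.normal(loc=5.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 = white, 1 = Black (illustrative only)

# Historical spending is the biased proxy: for the same illness,
# less money was spent on one group (an assumed 30% gap here).
spend_factor = np.where(group == 1, 0.7, 1.0)
past_cost = illness * spend_factor + rng.normal(0, 0.5, size=n)

# A model trained to predict cost accurately would simply reproduce
# past_cost as its "risk score". Rank patients by that score and
# enrol the top 10% in a high-risk care programme.
threshold = np.quantile(past_cost, 0.9)
enrolled = past_cost >= threshold

for g, label in [(0, "group 0"), (1, "group 1")]:
    mask = group == g
    print(f"{label}: mean illness {illness[mask].mean():.2f}, "
          f"enrolled {100 * enrolled[mask].mean():.1f}%")
# Same illness burden, yet far fewer group-1 patients cross the cutoff,
# because the target variable encoded historical under-spending.
```

The point of the sketch is that no malicious intent is required: an accurate model trained on a biased target faithfully reproduces the bias.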


The idea that Black patients have a higher pain threshold, then, is rooted in the insidious "data" that we inherited from the European Enlightenment. Indeed, in 2023 the Women and Equalities Committee of the UK House of Commons published a report which found that racism is a major cause of the markedly higher maternal death rates among Black and disadvantaged women in the United Kingdom. Further research shows that white employees in the health-care sector are less likely to believe reports of pain by Black patients, and therefore less likely to give them appropriate pain relief, than they are for white patients with a comparable condition. Another study, by the Centers for Disease Control and Prevention in the United States, examined the medical records of nearly 57,000 adults who had surgery between 2016 and 2021. It demonstrated that people of colour were 29% less likely to receive regional anaesthesia than white patients.

 

The starting point for more ethical AI algorithms has to be a better understanding of the polluted data enshrined in our archives. We have to understand that in the birthplaces of western modernity, certainly also in the United States, medicine evolved in close conjunction with the "science" of racism. In particular, Native American and African American women were victims of that insidious nexus between medical practice and racist abuse. For example, in the early 20th century the eugenics movement that emerged in the United States adopted a favoured policy of European empires, as US eugenicists institutionalised compulsory sterilisation both in the legal statutes of the country and as a practice in the medical sector (e.g. the infamous Buck v. Bell case). In other settler-colonial settings such as Peru, Canada, Australia and Brazil, mass sterilisation campaigns were forcibly implemented in order to tip the demographic scales in favour of white colonialists. Such practices continue to this day.

___

Colonising our bodies, spearheaded by human-AI interfaces, must be understood as an extension of several Enlightenment legacies, in particular an obsession with biological perfection that was central to pseudo-sciences such as phrenology and eugenics.

___

The historical examples of medical racism that continue to shape contemporary society are manifold. In the 19th and early 20th centuries in the United Kingdom, the United States and elsewhere, so-called "resurrectionists" were employed by medical schools to exhume bodies, mostly those of subjugated peoples, for medical examination. When Hitler came to power in 1933, several professorships were endowed at German universities to further the ideas of phrenology and human perfection, most infamously at the University of Kiel in the northern state of Schleswig-Holstein. There, university professors would do their anthropological "field-work" by measuring the crania of children as part of their medical investigations into phrenology, in order to establish whether those children could be categorised as "Aryan." In all of these settings, medical racism enabled the theft, anatomical abuse and cruel display of mostly Black and other non-white bodies.

 

Today, the legacies of the medical "categorisation practices" furthered by eugenics and phrenology manifest themselves in biased and inaccurate AI algorithms. The American Civil Liberties Union found that face-recognition software such as Amazon's Rekognition is racially biased: in its test, 28 members of the US Congress, disproportionately people of colour, were incorrectly matched with mugshot images of criminal offenders. In Britain, dozens of Black Uber drivers have been repeatedly prevented from working by what they say is "racist" facial-verification technology. Uber uses Microsoft's Face API on its app to verify the identity of its drivers, and the algorithm underlying the software has difficulty properly recognising individuals with darker skin tones. Companies such as Microsoft and Amazon aggressively market such face-recognition software not only to law enforcement in the United States, but also to the medical sector, where it is used across a range of health-care domains, from diagnosing diseases and conditions to so-called "emotion detection" in mental therapy.


Part of the problem is the lack of diversity in medical data: the nefarious historical legacies summarised above beget widespread mistrust of the health-care sector among minorities, who are that much more hesitant to volunteer for medical trials. As a result, we are literally dealing with "whitewashed" algorithmic data. Another example of the linkage between a polluted past and a problematic present: fair-skinned people are at the highest risk of contracting skin cancer, yet the melanoma mortality rate for African Americans is considerably higher, largely because of a lack of experience in diagnosing skin conditions in minority patients, as the Association of American Medical Colleges has established. Melanoma in Black patients may therefore be left untreated for longer than in patients categorised as "white".
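One practical countermeasure is to audit training data for representation and to report diagnostic performance per group rather than only in aggregate. Below is a minimal sketch of such a check; the dataset fields (a Fitzpatrick skin-type label alongside each diagnosis) and the two sample records are hypothetical, standing in for a real dermatology dataset.

```python
from collections import Counter

# Hypothetical dermatology records: each carries a skin-type group,
# the true diagnosis (1 = melanoma) and the model's prediction.
records = [
    {"skin_type": "I-II", "label": 1, "predicted": 1},
    {"skin_type": "V-VI", "label": 1, "predicted": 0},
    # ... thousands more records in practice
]

# 1. Representation check: how balanced is the data across groups?
counts = Counter(r["skin_type"] for r in records)
print("records per skin-type group:", dict(counts))

# 2. Per-group sensitivity: of the true melanoma cases in each group,
#    what fraction does the model actually catch?
for group in counts:
    positives = [r for r in records
                 if r["skin_type"] == group and r["label"] == 1]
    if positives:
        caught = sum(r["predicted"] == 1 for r in positives)
        print(f"{group}: sensitivity {caught / len(positives):.2f}")
```

An aggregate accuracy figure can look excellent while the second check reveals that the model misses most cancers on darker skin, which is precisely the disparity the AAMC finding points to.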

 

___

Bad data produces bad AI algorithms. If AI remains largely unchecked and unregulated, it will further entrench xenophobia and discrimination, especially where we can least afford it: the medical sector.

___

GOOD DATA = GOOD AI

AI-based technology has already shown that it has the potential to disrupt the social order beyond the medical sector. Moreover, given the self-improving nature of the technology, which is incomparable to anything we have encountered before, AI is the only advance in world history that may do away with human supervision and control. Therefore, we have to educate ourselves and act now.

The so-called tech giants are part of the problem. I am not saying that Elon Musk, Mark Zuckerberg and Bill Gates are personally responsible, and I do not at all adhere to the comical yet prevalent view on social media that they are part of a global conspiracy to rule the world. However, the mission statements their companies adhere to have a problematic "colonial impulse": the agenda is all about expansion. It is just that this type of expansion is different from previous forms, as it does not target physical territory. Instead it usurps our personal space and penetrates our bodies like no other technology before. Consider Elon Musk's company Neuralink. As I write these cautionary lines, Neuralink is developing implantable brain-computer interfaces, inserted by a so-called "surgical robot". The post-human society, then, is already upon us. Colonising our bodies, spearheaded by such human-AI interfaces, must be understood as an extension of several Enlightenment legacies, in particular an obsession with biological perfection that was central to pseudo-sciences such as phrenology and eugenics. The fact that Elon Musk supports extremist right-wing parties such as the German Alternative für Deutschland, which has openly expressed racist ideas and cultivates ties with neo-Nazi movements, should alert us to the profound dangers of this nexus between politics and AI applications, especially in the medical sector.

 

Bad data produces bad AI algorithms. If AI remains largely unchecked and unregulated, it will further entrench xenophobia and discrimination, especially where we can least afford it: the medical sector. But there is hope. Once we trace and understand the historical roots of our polluted data, especially in Europe and North America, we can begin to connect the problems of the past to solutions for the future, with particular reference to global institutions and civil-society activism.

 

This view was echoed by the House of Lords in the United Kingdom, which urged that the 'prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning.' [1]

Whilst the House of Lords report cautioned against over-regulation, the World Health Organization clearly prioritises setting global standards in order 'to protect human autonomy' and to ensure 'privacy and confidentiality' by providing patients with valid informed consent through appropriate legal frameworks. In 2019, a major conference organised by UNESCO in São Paulo coordinated a response from Latin America and the Caribbean echoing this humanistic approach to AI technology and its usage, in particular in the medical sector. One of the key policy take-aways of these efforts has been to ensure, even at this early stage, that AI applications remain under human supervision and that the onset of Artificial General Intelligence, celebrated by transhumanists as the moment of "Singularity" when machines can think and act autonomously, does not yield a domino effect that removes human agency.


All the reports surveyed for the present article clearly show that proper representation in research and data collection has a positive impact on policymaking, as the ethnicity data held by medical and other institutions are prone to bias. Furthermore, crucial policy areas that need to be scrutinised in the medical sector include: ensuring that AI algorithms and their corresponding datasets are auditable in accordance with local, national and international human rights legislation; safeguarding patients' privacy rights when AI applications are used during screening, diagnosis and treatment, and ensuring that the results are clinically explainable to the patient throughout the medical process; and requiring that AI developers work transparently and inclusively, adhere to ethical standards codified by human rights institutions, and clearly document their methods and medical results. Here, as everywhere else, we need more dialogue, more inclusivity and, above all, better education and knowledge. Only in this way can we envision a future freed from the shackles of our insidious past.
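What might "auditable" mean in practice? One common starting point, sketched below under assumed inputs, is the "four-fifths" disparate-impact check: compare each group's rate of favourable decisions against the best-off group and flag ratios below 0.8 for review. This is only one of many fairness metrics, and no substitute for the legal and clinical scrutiny the reports call for; the triage numbers here are invented.

```python
def disparate_impact(decisions):
    """decisions: mapping of group name -> (favourable, total).
    Returns each group's selection-rate ratio versus the best-off
    group; ratios below 0.8 are conventionally flagged for review."""
    rates = {g: fav / tot for g, (fav, tot) in decisions.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical triage outcomes: (patients flagged for extra care, total).
audit = disparate_impact({
    "white": (450, 3000),
    "Black": (210, 3000),
})
for group, ratio in audit.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A check this simple can be run by an external auditor on decision logs alone, without access to the model's internals, which is what makes the auditability requirement enforceable in practice.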



[1] 'AI in the UK: ready, willing and able? - government response to the select committee report', UK House of Lords, Report of Session 2017–19, 16 April 2018, p. 5
