To what extent should the distinct environments of scientific knowledge creation and dissemination be free from responsibility for their implications, and should they owe a duty of care to the publics and societies that will be impacted by their research? Melanie Challenger wrestles with the dilemmas of knowledge production, misinterpretation, meaning and objectivity.
We demand a lot from scientists. They are required to be objective, rigorous, and accurate, and to conduct their work free from the constraints of religion or politics. Few other areas of human endeavour are expected to be, and valued as being, so free from human error. At the same time, scientists are tasked with assessing the potential consequences and applications of their work, and with acting responsibly to maintain public trust in their whole system of knowledge. That is a burden scientists must feel acutely today, as they come under attack from the instruments of misinformation.
So valued is science as an objective arbiter of reality that the freedoms of scientists are seen as a measure of an enlightened society. And yet scientific knowledge has given the world Zyklon B as well as penicillin. As such, publics often fear science as much as they demand its boons. So, how can we better steer science towards social good? And should scientists be held responsible for the implications of their work?
The risks of prejudiced exploitation of scientific knowledge are increased by a pernicious myth about science that scientists would do well to acknowledge. It is the myth that science is objective and value-free. Conceptualising science as a bias-free form of knowledge gives scientific research a powerful kind of authority. And this characterisation is indispensable to those who wish to exploit or weaponise it.
Let’s consider the history of eugenics. Charles Darwin, the central pioneer of evolutionary theory, did not believe that the evolutionary process of natural selection necessarily equalled evolutionary progress. Nor, at the time he wrote On the Origin of Species, did he believe that organisms are composed of biological essences of lesser or greater quality. Indeed, in his marginalia, he counselled himself never to use ‘higher’ or ‘lower’ in his descriptions of natural fact. Yet Darwin was a product of his time, and espoused condescending and outright prejudiced ideas about fellow humans and their lifeways that are shocking to us today. And his defensive follow-up, The Descent of Man, gave a scientific gloss to the ideas of those with more overtly racist agendas.
Every stage of the scientific process, from study choices to who receives funding, involves value judgements.
The field of eugenics, founded by his cousin, Francis Galton, applied natural selection to society, and asserted that humans should be perfected through the control of their reproductive behaviours. Galton and his contemporaries suggested that there are “more suitable races or strains of blood,” and to improve society, we should apply science to increase the prevalence of these “more suitable races.”
By the turn of the twentieth century, eugenics societies began cropping up around the world, including the Eugenics Education Society, founded in 1907, and supported by no fewer than two British prime ministers (Churchill and Chamberlain), to exploit the science of heredity for greater “responsibility” in human parenthood. Two-thirds of the members of the EES were scientists.
These ideas had major policy implications. In the US, for instance, Indiana became the first state to pass a compulsory sterilisation law for those deemed “imbeciles” by medical professionals. In time, more than thirty states adopted laws of this kind, resulting in tens of thousands of individuals being sterilised, before the laws were finally rescinded in the 1980s.
Yet the ideas didn’t die out. Richard Herrnstein and Charles Murray’s 1994 book The Bell Curve presented their data on the distribution of intelligence (measured as IQ) across American society. The book caused an outcry by suggesting a correlation between race and IQ. According to their findings, Black and Hispanic Americans were more commonly distributed at the bottom of the curve. The absence of any nuance or clarification allowed the data to imply causation.
Simultaneously, one of the fathers of genomics, James Watson, took his own beliefs in the essentialism of genes to absurd levels. Although he claimed to have been misquoted by the mainstream press, it is difficult to give the benefit of the doubt to someone who argues that “stupidity” is a “disease” that genetic technologies ought to “cure”. “The lower ten per cent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn’t. So I’d like to get rid of that, to help the lower ten per cent.” It hardly needs saying that “stupidity” and its impacts on the world are not quantifiable by science – they are value claims and should be interrogated ethically and not statistically.
Still today, especially in discussions of transhumanism or human enhancement, the idea of the lowest ten per cent lives on. New biotechnologies are heralded as tools to perfect humans, allowing them to live better and longer. Innovations like precision gene-editing and CRISPR/Cas9 have led to debates as to whether one has a moral duty to use such technologies on those who may be the future “lowest” ten.
Biases and values tend to leak into the gaps in our research questions, data, or conclusions, most especially when the research involves high-stakes human interests.
The point here is that interpretation of science is everything. And the interpretation, from start to finish, unfolds in a subjective landscape. As the philosopher of science Alfred Bohm put it, science is “saturated” in values. Every stage of the scientific process, from study choices to who receives funding, involves value judgements about which projects are more meaningful than others. And so, while the scientific method, in its demand for the replication of results, offers a source of reassurance that we are discovering facts about the world, the process of generating scientific knowledge is far less neutral.
Then there is the problem of the “underdetermination” of scientific theory by evidence. This is the notion that the evidence available at any given moment may be too limited or incomplete to determine with certainty what beliefs we should hold about it. This idea was given a different spin by the philosopher Helen Longino, who called it “the gap”. Biases and values tend to leak into the gaps in our research questions, data, or conclusions, most especially when the research involves high-stakes human interests.
Yet the myth of value-free science is seductive because it offers the promise of authority wrapped up in the idea that science might be free from the biases of the human mind. But no human endeavour is neutral and objective in this way. This is not a comment about the ability of empirical enquiry to establish facts, but about the wider process of scientific discovery and our search for what these facts mean to us. And while it is important that we trust the methods of science, it is a mistake to believe that scientists or their research can exist in a rarefied vacuum of objectivity.
Of course, the field of science is not alone in its impulse to seek this kind of authority. Think of the idea of justice. Like science, conceptions of justice are mired in dreams of neutrality. For John Rawls, justice was the “first virtue of social institutions”. Bruce Ackerman speaks of the “perfect technology of justice” and of “neutral” dialogues. Yet no conceptions of justice have ever ascended cleanly from the highly conditional human world. When it comes to human affairs, neutrality is an impossible dream.
But before we wag our fingers at scientists, we, too, must accept some responsibility. Throughout their work, the scientist is expected to remain in a state of objectivity, yet our subsequent interpretation of science is woefully value-laden and inherently skewed by biases about our own knowledge. In a 2017 study in PLOS ONE by Kevin Elliott and colleagues, the researchers found that scientists who disclose their own values are perceived as less credible by the public, but this effect differs depending on whether we share the scientist’s values. Unsurprisingly, we tend to rubbish someone’s credibility if we don’t align with them, and yet we are less suspicious of scientists whose research findings contradict their own values. In other words, we’re blind to our own bias but alert to bias in others.
In 2023, a survey conducted by Oxford University found that trust in scientists to work for societal good increased during the pandemic; in the case of genetic technologies, the figure stood at forty-five per cent. At first glance, this was put down to the visibility of scientists and their skill in translating their work into public good. But Professor Alison Woollard, a co-author of the study, pointed out something significant in the findings. Extreme attitudes, whether pro- or anti-science, correlated with a high belief in one’s own understanding of science, despite a relatively low textbook knowledge of it. For this reason, Woollard points out, “working to address the discrepancies between what people know and what they believe they know may be a better strategy.”
What lessons can we learn from this history as we face some ethically challenging innovations, from CRISPR/Cas9 to Artificial Intelligence? If scientists cannot be wholly objective, what duty of care do they have towards those who might be affected by their research? The trouble we face is that, given the human propensity for bias, policing the questions that scientists can ask, or expecting them only to work within the parameters of a given era’s values, is as likely to cause harm as benefit. Yet science is, by its nature, an incremental expansion (and augmentation) of pre-existing knowledge. It is inherently “gappy”. The risks that bias or prejudice might fill the void remain high. So what might be done?
In the case of eugenics, the greatest danger lay in the biased interpretation of an incomplete theory. It is important to note that much of that filling-in happens externally to the originating scientist. Perhaps, then, safeguards might more fruitfully be located around commercial or political environments that stand to gain from distorting scientific ideas. Some of the responsibility for application lies directly with scientists, but much of the harm emanates from an under-regulated private sector.
Meaning is not the domain of science, and it is a mistake to expect scientific methodologies to offer complete answers to the questions their work raises.
What we should expect of our scientists and their institutions is that there are as many opportunities as possible throughout the process of enquiry to acknowledge and recognise the presence of uncertainty, and to subject the process to scrutiny. Many good scientists do this already – they are alert to what they don’t know and to the risks of misinterpreting their results. But it should be imperative that these gaps are communicated transparently. And we do have resources available to us to scrutinise the values and biases that are found within science: the field of ethics is one; deliberation and participatory democracy are others. These are tools to keep scientific endeavour in check. They should be more widely supported. This might be easier to facilitate if we resist scientism, or what John Dupré calls “scientific imperialism” – the tendency to “push a good scientific idea far beyond the domain in which it was originally introduced.” I have lost count of the number of times that science is purported to tell us “what it means to be human”. Meaning is not the domain of science, and it is a mistake to expect scientific methodologies to offer complete answers to the questions their work raises. That is precisely what we have ethics for.
But important work also needs to be done in encouraging the rest of us, the non-scientists, to acknowledge our own gaps too, and the obvious dangers that follow from wadding those gaps with our prejudices or dreams. For it is not just scientists who are responsible for the creation of knowledge; we, too, are responsible for how we interpret it.