We think of science as an objective account of the world, free from the influence of political and other biases. But things aren’t that simple. Evidence alone doesn’t tell you when you have enough of it to support a claim, so scientists sometimes have to make judgements that rely on ethical and political values. This realisation shatters our understanding of scientific objectivity as value-free. But not all is lost, argues Stephen John.
"Social distancing and face masks should stay FOREVER says Communist SAGE committee member Professor Susan Michie". The Daily Mail's implication was clear: because she's a card-carrying Communist, Susan Michie can't be trusted. When her politics were raised by an interviewer, Michie's response was simple: "you don’t ask other scientists about politics so I’m very happy to speak about science, which is what my job is, and to limit it to that". For Michie, politics is one thing, science another.
One way to sum up this mini-controversy is in terms of objectivity: the Mail is claiming that Michie can't be objective, but she says she can be. We often think that scientists should be objective, and we associate scientific objectivity with "value-freedom". We think that some scientific claim is objective when the justification for that claim doesn't depend on any ethical or political values. We call a scientist objective when she hasn't allowed her values to influence her reasoning or arguments. These ideas of good science as value-free relate to a broader notion of science as telling us about the way the world really is, stripped of comforting illusions. This image can also justify why science should play an important role in policy. Effective policy needs to be based on our best understanding of the world. Good, objective scientists tell us how the world is. Precisely because they achieve this perspective by ignoring values, they can act as neutral arbiters in politics. So, there's a nice package of ideas, linking politics, science, objectivity and value-freedom, which explains why we care whether Michie can leave her communism at the door.
There's only one problem: that picture is false.
How science is value-laden
The example above neatly sums up a general problem in thinking about science, politics and policy. Scientific advisors play a central role in policy-making. In the Covid-19 pandemic, UK government leaders have stressed that they are "following the science". Given their power, it seems important that scientists don't let their own political, ethical or economic values influence their advice. It would be deeply undemocratic if, as the Mail implies, Michie sneaked communism into policy via the backdoor of science advice. And that seems a valid concern even if communism is a good idea. On the other hand, scientists are, of course, humans with needs, wants, passions and interests. Can they avoid letting values influence their claims?
A large number of philosophers of science now say that’s impossible: scientific justification cannot be value-free. Note just how strong these claims are. There are lots of cases where scientists' ethical or political values have influenced their science. One famous example is the Nineteenth Century "discovery" of "drapetomania" - a psychological condition which made slaves predisposed to run away from plantations. From our perspective, the "science" of drapetomania was, consciously or unconsciously, driven by the values and interests of the rich, White establishment. However, the philosophers' worry isn't just that lots of "science" is influenced by values. It's not even that it is difficult to avoid having science influenced by values, or difficult to tell whether science is influenced by values, or that choices about what to research might reflect economic values. Rather, it is that the very core of scientific justification must be influenced by values. There is no way at all of doing science which doesn't somehow prejudge or assume some ethical or political or economic viewpoint. It's not just "drapetomania causes slaves to run away" which is value-laden; so, too, is "smoking causes lung cancer" or "Covid-19 is airborne". Why think that? And what are the implications for the relationship between science and policy?
When is enough evidence enough?
Perhaps the simplest and most powerful argument for an inescapable role for values in scientific justification appeals to "inductive risk". All scientific knowledge is inductive; strictly, it goes beyond our evidence. For example, while we have huge amounts of evidence that smoking causes lung cancer, we cannot be 100% certain of that conclusion. Maybe, just maybe, something else explains the patterns we see. This simple fact generates a problem: How much evidence that some claim is true should we require before saying that the claim has been justified? This question of how certain is "certain enough" can't itself be answered by science. There is no fact out there in the world along the lines of "only accept claims when you are at least 95% certain".
The more evidence we demand, the lower our chances of a false positive (accepting a claim which is, in fact, false). That's good, because acting on false claims is, at best, pointless and, at worst, harmful. However, demanding lots of evidence comes at a cost: an increased risk of false negatives (not accepting claims which are, in fact, true). That could be a problem, because not accepting true claims can also be costly. The key claim of much recent philosophy of science, building on arguments from Heather Douglas, is that the only proper way of setting a level of certainty is to think through the ethical and practical costs of these different errors. The higher the costs associated with false positives, the more certainty we should demand. The higher the costs associated with false negatives, the less certainty we should demand. When we do science, we need to think about the non-scientific effects of saying a claim is justified.
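The trade-off described above can be made concrete with a toy statistical example (my own illustration, not from the article). Suppose we only accept the claim "there is a real effect" when we observe at least k successes out of 100 trials. The numbers below (success rates of 0.5 for "no effect" and 0.6 for "real effect") are hypothetical, chosen purely to show how raising the evidential threshold lowers the false-positive risk while raising the false-negative risk:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100  # number of observations
# Hypothetical scenario: no effect -> success rate 0.5; real effect -> 0.6.
# We "accept the claim" whenever we see at least k successes.
for k in (55, 60, 65):
    fp = binom_tail(n, 0.5, k)       # false positive: no effect, yet claim accepted
    fn = 1 - binom_tail(n, 0.6, k)   # false negative: real effect, yet claim rejected
    print(f"threshold k={k}: false-positive risk {fp:.3f}, false-negative risk {fn:.3f}")
```

As k rises, the false-positive column shrinks and the false-negative column grows. Nothing in the mathematics tells us which k to pick; that choice, as Douglas's argument stresses, depends on how we weigh the costs of each kind of error.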
Think about an example. In March 2020, the UK government had to decide whether to accept the claim "Covid-19 risks overwhelming the NHS". They had some evidence for this, but that evidence was uncertain, complex and contradictory. How much evidence should they have required before accepting the claim? The less evidence they demanded, the greater the chance of a false positive, leading to massive, unnecessary damage to the economy. The more evidence they demanded, the greater the chance of a false negative, leading to unnecessary loss of life. Many people now think that the government got this balance wrong: they waited too long. I don't know whether that is correct. What is clear is that these concerns express not just a scientific, but also an ethical or political judgment, concerning the proper balance of economic and public health goals, the nature of governmental responsibility, and so on.
So, according to the "inductive risk" arguments, scientific justification must be value-laden because decisions about how certain is "certain enough" must rest on broadly ethical claims. Note that the argument doesn't hold that these decisions must always be conscious or explicit. Rather, most of the time, scientists solve their inductive risk problems by appealing to conventions. For example, statistical approaches which minimise false positives are often built into statistical computer programmes. The power of the argument is that those conventions can themselves presuppose or imply contestable ethical values. Someone who has been born and brought up eating meat might reach for the chicken leg without thinking about it, but that doesn't imply that eating meat is an ethically neutral activity.