A restored faith in science can only be a good thing, but nonetheless science must be subject to scrutiny. We must understand that science comes from human scientists, just as fallible and vulnerable to influence as anyone else, and that science can only be understood in the social and political context that surrounds it, writes Jana Bacevic.
Science is in vogue once again. The Coronavirus pandemic seems to have, at least temporarily, replaced the worry over ‘alt-facts’ and ‘post-truth’ with a renewed trust in expertise and in the role of scientific evidence in decision-making. Governments across the globe are claiming to be ‘guided by the science’ and competing to show they have ‘the best science available’. Yet science also seems to be used to justify sometimes very different courses of action.
In many ways, the return of trust in science is a good thing. In recent decades, doubting the credibility of scientific knowledge has been used to promote highly problematic ideas or agendas. From ‘climate skepticism’ (fossil-fuel-funded forms of denying human-induced global warming) to ‘anti-vaxxerism’ (disputing the efficacy of vaccinations in preventing communicable diseases, or claiming they have multiple adverse side effects), the devastating consequences of these forms of skepticism are becoming more apparent by the day. Yet the fact that faith in scientific evidence seems to have been restored does not mean that science – in particular, how it becomes policy – should not be open to scrutiny.
This scrutiny, however, is often complicated by how ‘science’ has been framed in public discourses. On the one side, science is portrayed as independent, objective, interest-free and value-neutral. On the other, there is the conspiracy-inflected view of scientists as evil geniuses bent on destroying humanity, akin to the (real) Dr Mengele or (fictional) Dr Strangelove. Both of these, of course, are exaggerations: scientists are no more disembodied creatures engaged in a disinterested pursuit of the secrets of the universe than they are self-aggrandizing maniacs with destructive tendencies (most are not, at any rate). Yet, because of the historical association between scientific authority and objectivity, we often hold ‘science’ and scientists to standards we do not apply to ordinary people.
Most strikingly, the standards we apply to scientists are palpably different from those we apply to politicians and policymakers. When scientists disagree, we treat this as a sign of weakness of scientific evidence, rather than as the inevitable outcome of the fact that scientists are human, like the rest of us. This means they are also influenced by fears, desires (including sexual ones, to the seeming bafflement of the British public) and drives that include competition for resources, prestige, and recognition. Science does not operate in a disembodied, immaterial or, for that matter, apolitical universe: laboratories run on electricity; imaging technologies depend on international cooperation; in addition to a genuine curiosity and drive for knowledge, scientists also require, shockingly, food and shelter. How all of these are distributed, however, is a political and sociological question.
Twentieth-century social theory and philosophy of science were instrumental in highlighting the social nature of scientific knowledge production. They showed that what we thought of as ‘evidence’, ‘facts’, and the truth itself, are also outcomes of social and political processes. These processes involve discovery, collaboration, and discussion, but also competition, resentment, and jealousy – all, as Nietzsche might have put it, too human. This means that if we want to understand how scientific knowledge comes about, we need to look at the elements we may describe as subjective: values, ideas, and concepts. However, because of the association between objectivity and (value-) neutrality, the ‘social’ side of science was often seen as suggesting that ‘nothing is true’ and that facts are invented. Yet few participants in the discussions about the social side of science thought that reality was entirely mind-dependent. When Bruno Latour wrote about the ‘pasteurization’ of France, he did not suggest that bacteria really did not exist before Pasteur: he meant that they didn’t exist for us, as they were not recognized at the time as something that could have causal powers – and thus could be an object of knowledge or, for that matter, socio-political intervention like sanitization.
In this sense, scientific research is more often about trying to establish what is true (at a specific time, in a particular place, with a sufficient level of certainty) than about challenging the concept of ‘truth’ itself. Most scientists engage in what philosopher Thomas Kuhn called ‘normal science’: they accept the basic ontological premises about their object of research, and rarely question them as such. When scientists disagree, they often simply mean that there is not enough data to claim something about that object with a sufficient level of certainty. In addition, of course, they may disagree because they think their method is quicker and better, or because their research group has a better proposal. This tells us nothing more and nothing less than that scientists are human, and humans are social beings. Social processes, in turn, are ‘messy’, underdetermined, complex, and often unpredictable.
The inevitable ambiguity of scientific facts, data and evidence has become more evident in the Covid-19 pandemic. Can non-symptomatic cases transmit the virus? How long does immunity last post-recovery? What are the long-term effects on health and wellbeing? The absence of certainty can serve as a principle of caution – for instance, limiting social interaction until there is more knowledge about the risk of airborne transmission. But it can also be used to avoid action – for instance, not mandating face masks on the basis of the claim that evidence for their efficacy is inconclusive. This shows us the paradox of science-based policy: while science can inform policy, it never mandates policy decisions.
Much more worrying than scrutiny of science, in this sense, is the lack of scrutiny when it comes to politics. Governments have repeatedly made claims that are backed up by only partial data. Even a figure as important as the number of Covid-19 deaths has been calculated differently across countries and scales, depending on whether the official count included deaths in care homes or those from secondary complications, including lack of access to treatment because of overburdened hospitals and emergency services. Yet which of these figures is presented as evidence matters. It is not irrelevant whether people are dying due to the defunding of public services, or whether they are less likely to get adequate care because they are from ethnic minority backgrounds. These facts, of course, change little about SARS-CoV-2 itself; but they change everything about what it means for us.
This is precisely what social scientists and social theorists mean when we say that science is political. It is not meant to deny the reality of climate change, or the infectivity of the Coronavirus. On the contrary: it means that how these – and many other facts – are framed, addressed, or ignored, depends on social and political relations. Opening the ‘black box’ of science, therefore, means nothing if we are not prepared to do the same with politics.