The bias paradox

We are biased about bias itself.

While we think of ourselves as the rational animal, we humans fall victim to all sorts of biases. From the Dunning-Kruger Effect to Confirmation Bias, there are countless psychological traps waiting for us along the path to true rationality. What's more, when we attribute bias to others, how can we be sure we are not falling victim to it ourselves? Joshua Mugg and Muhammad Ali Khalidi ask: might we be biased about bias itself?


How rational are we? And if we are not rational, how could we tell, since we would have to rely on our reasoning to make that determination? Over the last few decades, psychologists have uncovered numerous ways that humans fail to live up to our own ideal of rationality. We are overconfident about our performance (‘Dunning-Kruger Effect’), we seek out information that confirms what we already believe instead of considering what would challenge our beliefs (‘Confirmation Bias’), the theories we hold seem to influence what we observe (‘Theory-Ladenness of Observation’), and we think we are less biased than others (‘Bias Blind Spot’).

Worse still, many of these cognitive biases are said to afflict scientists and experts. Charles Darwin recounts that he hadn’t noticed the glacial effects on the landscape on his first trip to Wales, even though the evidence was staring him in the face. Why? Because he had not been taught glacial theory—his theory (or lack thereof) biased his observation. Today, doctors interpreting medical test results often fail to consider how common the disease is in the population, a phenomenon known as the ‘Base-Rate Neglect Fallacy.’ These and many other systematic reasoning errors have been shown to affect scientists’ own reasoning in the course of their research.
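To see concretely why ignoring the base rate misleads, here is a minimal sketch in Python with invented numbers (the prevalence, sensitivity, and false-positive rate below are our assumptions for illustration, not figures from the article or from any study): when a condition is rare, even a fairly accurate test produces mostly false positives.

```python
# Hypothetical illustration of base-rate neglect via Bayes' theorem.
# All of the numbers below are invented for the sake of the example.

prevalence = 0.001          # assume 1 in 1,000 people have the disease
sensitivity = 0.99          # P(test positive | disease)
false_positive_rate = 0.05  # P(test positive | no disease)

# P(test positive), by the law of total probability
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(disease | test positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")
# Prints about 0.019: roughly a 2% chance of disease after a positive
# result, far below the near-certainty intuition suggests when the
# low base rate is ignored.
```

A doctor who reasons only from the test's accuracy, without asking how rare the disease is, will dramatically overestimate the chance that a positive result means illness.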

___

having a tendency towards confirmation as opposed to disconfirmation can be healthy in many contexts

___

As many researchers have pointed out, some of these apparently irrational inferences can actually be seen to be more reasonable in certain contexts. Sometimes the very same pattern of inference can be viewed as a bias in some contexts and a heuristic in others, where a heuristic is often defined as a shortcut in reasoning—a quick and dirty rule-of-thumb. Think of confirmation bias again. It can be debilitating to constantly seek to refute one’s own hypotheses, and so having a tendency towards confirmation as opposed to disconfirmation can be healthy in many contexts, even if it leads us astray in many others. It is at once a bias and a heuristic depending on the context in which you are using it.

The research on heuristics and biases has by now posited hundreds of such reasoning biases, ranging from the “confirmation bias” and the “conjunction fallacy” to the “cheerleader effect” (the supposed tendency for people to appear more attractive to us in a group than in isolation) and the “IKEA effect” (the tendency to place a disproportionately high value on objects one has partially assembled oneself). At the time of writing, the Wikipedia entry “List of cognitive biases” enumerates over 175 distinct cognitive biases.

This proliferation of biases raises a question: are we overly prone to attributing biases to ourselves? When we see a pattern of reasoning error in humans, do we have a systematic tendency to posit a new bias, even if there might be alternative explanations? Given the proliferation of biases and the fact that more are ‘discovered’ each year, we seem to have strong evidence that we are biased toward explaining failures of human reasoning by positing biases. Let’s call this the ‘Bias Bias’.

But positing a Bias Bias seems to lead us into paradox. To explain why, consider again the alleged confirmation bias mentioned earlier. Suppose psychologists or cognitive scientists are interested in determining whether we have a confirmation bias. Now suppose that they conduct an experiment that purports to demonstrate that human reasoners do indeed have a confirmation bias. Imagine that one of the researchers working on the project, call her the “pesky post-doc,” raises the following inconvenient possibility. She reasons that if humans have a widespread tendency to confirm rather than refute their hypotheses, and that tendency afflicts scientists and non-scientists alike, then members of her research team will also be affected. This means that they should doubt their own results, given that they will have a tendency to confirm rather than refute their initial hypothesis.

The very same evidence that leads them to posit a confirmation bias is also evidence that would tend to cast doubt on their conclusions! However, suppose that another member of the research team, call her the “pragmatic professor,” says: “Wait, if we are rejecting our research because we were subject to the confirmation bias, then the confirmation bias does exist because we were subject to it!”

The pesky post-doc’s and the pragmatic professor’s arguments both seem sound, but together they lead to a paradoxical conclusion. The pesky post-doc reasons that if the research team is right about the existence of the bias, then they have reason to reject their research (and so the existence of the bias too), but the pragmatic professor reasons that rejecting the research on these grounds admits that the bias does exist. If they’re right they’re wrong, but if they’re wrong they’re right. What are we to say?
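One way to lay out the shape of the predicament (a schematic reconstruction added here for clarity, not a formalization given in the article) is to let B stand for "the confirmation bias is real and afflicts the research team itself" and E for "the team's evidence warrants accepting B":

```latex
% Schematic reconstruction; B and E are labels introduced here for illustration only.
% B: the confirmation bias is real and afflicts the research team itself.
% E: the team's evidence warrants accepting B.
\begin{align*}
\text{Pesky post-doc:}\quad & B \rightarrow \neg E
  && \text{(if the bias is real, the team's own confirming evidence is suspect)} \\
\text{Pragmatic professor:}\quad & \neg E \rightarrow B
  && \text{(the only ground offered for distrusting the evidence is the bias itself)}
\end{align*}
```

Read this way, accepting B undercuts the warrant for B, while rejecting it on the grounds offered reasserts B.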

___

If one can show that confirmation bias afflicts primarily laypersons or those who are reasoning in experimental conditions, then one might be able to argue that the researchers themselves are not subject to that bias.

___

Maybe we can resist the paradoxical conclusion. One way is to distinguish the context of the experiment from the context of the inquirers. If one can show that confirmation bias afflicts primarily laypersons or those who are reasoning in experimental conditions, then one might be able to argue that the researchers themselves are not subject to that bias.

In many cases, this strategy works well: in a scientific context researchers can ensure that they are avoiding the bias they are studying. However, it doesn’t seem like we can do that when we posit a Bias Bias. Anyone positing a Bias Bias is likely doing so with reference to an established body of scientific research—not just lay thinking. In other words, it is the reasoning of experts that is at issue. So any experts who come up with such a critique would need to demonstrate that they themselves are not victims of the same bias when they attribute it to others. Why are some experts subject to the bias and not others? What makes the context of the researchers being criticized different from that of their critics?

Early critics of the heuristics and biases research program cast doubt on its very cogency. They questioned the possibility of using our own frail, limited, and flawed reasoning capacities to test for biases in those same capacities. The very idea, they argued, was incoherent: it would be like using a scale to weigh itself—the self-reflexivity makes it impossible. That seems an overreaction, since human reasoning is not a monolith, and we can certainly use some aspects of human reasoning to examine other aspects without raising problems of self-reflexivity. But in some cases we do seem to be led into a self-reflexive paradox, in particular when it comes to positing a Bias Bias, as we have argued in a recent journal article.

At least at first sight, given the multiplication of biases in recent scholarly research and in popular culture, it seems reasonable to conjecture that humans are subject to a Bias Bias. Yet positing such a thing seems to lead us into an inevitable paradox. What looked on the face of it like a reasonable hypothesis is, in fact, a claim we seem unable to make without contradiction.

This post is part of a partnership between The Institute of Art and Ideas and the Blog of the American Philosophical Association. The article was first published here.


Join the conversation

Roger Thach 31 March 2022

Depending on how we define ‘paradox’, this is either not a paradox or not a paradox that is fatal. The article conflates the existence of a bias with an inability to overcome that bias.
A trivial, uncharged example: many people are biased in favor of the New York Yankees, many others are biased in favor of the Boston Red Sox. In an important game, the fans may respond in opposite ways to the same pitch. (This will be confirmation bias, or expectation bias.) But, when shown a slow-motion replay, many fans may admit that they were wrong, and that the pitch should actually be called in favor of the other team. They don’t become unbiased, they simply admit their error.
(Of course, many fans will also double down, but I am not arguing against the existence of bias, just the straw man that biases are one-way tickets into parallel existences.)
Even if scientists are ‘Yankees fans’ or ‘Red Sox fans’, there are potential correctives, including the broader scientific community being a ‘slow-motion replay’.

Frank Smith 30 March 2022

Seems to me this boils down to: humans sometimes make errors. Those checking others for error also make errors. This does not mean there is no point in checking for errors or searching for truth. It simply means that the search for truth must be collaborative, so that we can check each other and check the checkers, check the checkers of the checkers, and so on ad infinitum. In other words, the search for truth always remains a search, never an absolutely perfectly certain arrival at destination. This does not mean there is no truth. If I say, "There is no truth," that itself would be a truth claim, and therefore would cancel itself out. No, there is truth, but we see it "through a glass darkly." We always see it through a human lens. Therefore, an increasingly close approach to truth, to absolute reality, depends on the quality of the human lens, which is to say, on the total integrity of the human being observing. This is much more than a matter of proper reasoning. It depends also on the degree of self-knowledge a person possesses, the healthy character of the life of feeling, and the ability to obey oneself and to act in such a way that one can authentically love one's actions. All that, and the achievement of something like full self-knowledge requires at least two things: 1) continual ethical development, and 2) development of higher states of consciousness through various concentration and meditation exercises. Three steps of ethical development need to be taken for every one step toward higher consciousness. There is no one source on how to work toward this, but I like Rudolf Steiner's book, Knowledge of Higher Worlds and Its Attainment. Also the little book, Six Steps in Self-Development.