Knowing what to believe

Self-awareness is key to wisdom

Echoing the ancient Greek maxim “know thyself”, new cognitive neuroscience suggests that cultivating self-awareness is key to wisdom. Stephen M Fleming outlines new experiments that image the brain as we make decisions, and considers how we can know what to believe.

 

As a student on a summer break from my psychology degree, I spent a few weeks traveling alone around China. Much of the trip was spent on long, rambling train journeys where pleasantries with my fellow passengers were exchanged via hand signals and a few halting words of Mandarin. There was plenty of time to think while watching the landscape roll by. The book in my backpack was conducive to this – Daniel Dennett’s Freedom Evolves, all about free will. Dennett’s thesis was that we can retain a sense of being free while still acknowledging the determinism inherent in the brain and mind (a view known as compatibilism in philosophy). To me it made a lot of sense (and still does) to define freedom as the capacity to make decisions – to be able to consciously weigh up what to believe or what course of action to take.

But I remember one thing nagging away at me about this picture. It didn’t seem to explain spontaneous changes of mind – those that seem to emerge out of nowhere, dramatically altering our beliefs about reality. If freedom is having conscious command over what we do and say, then surely such shifts that bubble up unbidden into consciousness should not be considered as part of who “we” are. And yet newfound worldviews, formed in such watershed moments, become central to personality and identity.

Take the case of Mark Lynas, an environmental campaigner who for many years was militant in his opposition to genetically modified (GM) foods. He spent years as a guerrilla activist, using machetes to hack up experimental crops and working to remove what he referred to as “genetic pollution” from farms and science labs around the UK. But in 2013 Lynas stood up at the Oxford Farming Conference and confessed that he had been wrong. The video of the meeting is on YouTube and makes for fascinating viewing. Reading from a script, he calmly explains that he had been ignorant about the science and that he now realizes GM is a critical component of a sustainable farming system – something that is saving lives in areas of the planet where non-GM crops would otherwise have died out due to disease. Lynas had undergone a radical change of mind about an issue that he had been passionate about for many years. In an interview as part of the BBC Radio 4 series Why I Changed My Mind, he reveals that his admission felt “like changing sides in a war” and that he lost several close friends in the process.

In my research group at University College London, we are starting to gain insight into the delicate interplay of the conscious and unconscious processes involved in changes of mind. Of course, changes of heart on emotive issues are relatively rare and are difficult to recreate in the lab. But it is now possible to study the fine-scale neural dynamics that underpin how we make – and revise – simple decisions.


In one of our experiments, people are asked to say which way a cloud of noisy dots is moving. They are then shown another cloud of dots that always moves in the same direction as the first but is either more or less noisy, and are asked how confident they feel about their original decision. A computer model of this task solves it by summing up the evidence for different directions of motion streaming into the cortex millisecond by millisecond. If the model makes an initially incorrect decision, the new evidence disconfirms its choice, leading to a (simulated) change of mind. People in our experiments also show similar changes of mind – and by scanning people with functional magnetic resonance imaging (fMRI) while they are making these decisions, we have been able to identify activity patterns in the brain that show precisely the signature predicted by our equations. In particular, a brain region known as the dorsal anterior cingulate cortex (dACC) tracks how much we should update our beliefs based on new evidence.
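
To give a flavour of this kind of model, here is a minimal sketch in Python of an evidence-accumulation process in which evidence arriving after an initial commitment can push the accumulator across the opposite bound and produce a change of mind. The parameter values and the simple two-boundary setup are illustrative assumptions, not the exact model fitted in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift=0.1, noise=1.0, bound=2.0, dt=0.01,
                   post_steps=400):
    """Accumulate noisy motion evidence to a bound, then keep
    integrating post-decision evidence that can reverse the choice."""
    x = 0.0
    # Initial decision: integrate until the evidence hits +/- bound.
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    initial_choice = 1 if x > 0 else -1
    # Post-decision period: samples from the same source keep arriving
    # and can carry the accumulator across the opposite boundary.
    for _ in range(post_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    final_choice = 1 if x > 0 else -1
    return initial_choice, final_choice

results = [simulate_trial() for _ in range(2000)]
change_rate = np.mean([a != b for a, b in results])
print(f"Simulated change-of-mind rate: {change_rate:.3f}")
```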

But we also know that people deviate from the predictions of these computer models. Humans show a “confirmation bias”: after making a choice, we give greater weight to new evidence that supports it, while evidence that goes against it is downweighted. Confirmation bias is rife in settings ranging from medical diagnosis to investment decisions to opinions on climate change.

We have recently discovered that whether or not people show a confirmation bias is affected by how confident they feel in their initial choice. In one experiment, we used a technique known as magnetoencephalography (MEG), which can detect very small changes in the magnetic field around the heads of our volunteers. Because neurons communicate by firing tiny electrical impulses, it is possible to detect the tell-tale signs of this activity in subtle shifts in the magnetic field. By applying techniques borrowed from machine learning, it is even possible to predict features of people’s thinking and decision-making from the fine-grained spatial pattern of changes in the magnetic field. In our experiment, we were able to reliably tell whether people thought a patch of dots was going to the left or right. But we found that this ability to predict people’s choices differed according to how confident people were in their decisions. If they were highly confident, then any evidence that went against their previous decision was virtually undecodable. It was as if the brain simply did not care about processing new evidence that contradicted a confident belief.
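
The decoding step itself can be illustrated with a toy sketch. Here synthetic sensor data stand in for real MEG recordings – a weak, choice-dependent spatial pattern buried in noise – and a cross-validated linear classifier stands in for the study’s actual analysis pipeline; none of the numbers below come from the experiment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for MEG data: one row per trial, one column per
# sensor, with a weak choice-dependent spatial pattern buried in noise.
n_trials, n_sensors = 400, 64
choices = rng.integers(0, 2, n_trials)        # 0 = "left", 1 = "right"
pattern = rng.standard_normal(n_sensors)      # assumed spatial signature
signed = np.where(choices == 1, 1.0, -1.0)
data = 0.3 * np.outer(signed, pattern) + rng.standard_normal((n_trials, n_sensors))

# A linear classifier decodes the choice from the sensor pattern,
# with cross-validation to estimate out-of-sample accuracy.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, data, choices, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```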


This in turn suggests that confirmation bias might sometimes be beneficial – as long as it is paired with good self-awareness of when we might be wrong. The logic is this: if confidence promotes a bias toward confirmatory information, the bias is harmless so long as my high confidence tends to coincide with actually being correct. If, on the other hand, I have poor self-awareness – if I am sometimes highly confident and wrong – then on those occasions I will tend to ignore information that might refute my incorrect belief, and will struggle to build up a more accurate picture of the world. Indeed, in our experiments we have found a tight coupling between people’s self-awareness and their ability to reconsider and reverse simple decisions.
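
The intuition can be made concrete with a small simulation. In the hypothetical setup below, every agent holds its initial guess confidently and therefore discounts disconfirming evidence; what varies is metacognition, simplified here to the probability that a confident guess is in fact correct. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def final_accuracy(metacognition, n_agents=2000, n_samples=50,
                   discount=0.3):
    """Confidently held initial guesses trigger a confirmation bias:
    evidence against the guess is down-weighted by `discount`.
    `metacognition` is the probability that a confident guess is
    actually correct. All values here are illustrative assumptions."""
    correct = 0
    for _ in range(n_agents):
        truth = 1.0                      # evidence favours +1 on average
        guess = 1.0 if rng.random() < metacognition else -1.0
        belief = 0.0
        for _ in range(n_samples):
            e = 0.2 * truth + rng.standard_normal()
            if np.sign(e) != guess:      # a disconfirming sample...
                e *= discount            # ...is discounted
            belief += e
        correct += (np.sign(belief) == truth)
    return correct / n_agents

for m in (0.9, 0.5):
    print(f"P(confident guess correct) = {m}: "
          f"final belief accuracy = {final_accuracy(m):.2f}")
```

Agents whose confidence tracks the truth end up more accurate despite the bias; agents with poor metacognitive insight get stuck with their initial errors.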

To precisely control the “evidence” people have available when making decisions in the lab, we often need to avoid the emotionally charged issues people care about most, such as Lynas’ views on GM crops. But we have reason to believe that our experiments are uncovering general features of how confidence shapes decision-making, and vice versa, even in real-world settings. For instance, we have found that one of the best predictors of holding dogmatic political views (at both the left- and right-hand ends of the political spectrum) is a lack of self-awareness about the kind of simple perceptual choices we have been studying in the lab. Dogmatic people were not any worse at our tasks, but they were worse at knowing whether their decisions about the dots on the screen were right or wrong. More dogmatic people were also less likely to seek out new information, and their decisions about whether to seek it were less guided by their confidence.
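
This kind of metacognitive sensitivity can be quantified. One common measure in the field is a “type-2” AUROC: how well trial-by-trial confidence ratings discriminate a person’s own correct answers from their errors. The sketch below uses simulated ratings; real analyses often use more refined model-based measures such as meta-d′.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Toy data: per-trial accuracy (1 = correct) and a confidence rating.
n_trials = 200
accuracy = rng.integers(0, 2, n_trials)
# For a self-aware observer, confidence tracks accuracy plus noise;
# the noise level here is an arbitrary illustrative choice.
confidence = accuracy + 0.8 * rng.standard_normal(n_trials)

# Type-2 AUROC: how well confidence discriminates the observer's own
# correct from incorrect trials (0.5 = no insight, 1.0 = perfect).
sensitivity = roc_auc_score(accuracy, confidence)
print(f"Metacognitive sensitivity (type-2 AUROC): {sensitivity:.2f}")
```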

We are now appreciating that changes in even deeply held beliefs are decisions like any other. We are constantly buffeted by information, and whether or not we continue to believe in vaccines, genetically modified foods, climate change or a particular political party is affected by the news we read and the people we surround ourselves with. All of these sources of information may be processed unconsciously, feeding into how confident we feel about a particular belief or attitude. That confidence will then shape how we process and respond to new information, and so on, in a delicate interplay between the mental processes we are dimly aware of and those that remain hidden from view.


The possibility of wholesale changes in identity, as experienced by Mark Lynas, is both terrifying and exhilarating. It highlights a fragility in the worldviews that we often assume are core features of who we are. But it also provides a bracing insight into how our minds work, and, especially in an era in which we are deluged by information (both real and fake), it offers new ways of remaining, in Dennett’s sense, free. A key insight here is that we might need a helping hand to sort fact from fiction and prevent a runaway snowballing of confirmation bias. Recall that the science tells us that once we are confident about a particular worldview, it is difficult to process evidence against it. It seems reasonable to want to protect ourselves against getting stuck in such mental ruts.

One promising approach aims to boost metacognition and improve our capacity to know when we might be wrong. We have found that a small amount of daily feedback on whether people’s confidence matches their performance on a simple game can improve their metacognition on a range of other tasks. In turn, people who show the most pronounced jumps in metacognitive skill become less likely to fall foul of confirmation bias and more likely to form accurate beliefs about controversial issues such as climate change. There are also promising signs that regular meditation leads to similar improvements in self-awareness. Another approach encourages people to reality-check new information. For instance, recent research by the psychologists Gordon Pennycook and David Rand has shown that subtle prompts to reconsider the accuracy of Twitter posts can reduce the sharing of fake news.

It remains to be determined whether these interventions have lasting benefits for the kinds of beliefs we hold. But a long philosophical tradition suggests cultivating self-awareness is a cornerstone of wisdom. For instance, in Plato’s dialogue Charmides, Socrates considers sophrosyne – the living of a balanced, measured life – as being grounded in effective self-knowledge: “Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know.” By enabling a more reflective view of ourselves, we can ensure we actively – and freely – choose what or who to believe, rather than mindlessly allowing our beliefs to choose us.
