What separates science from pseudoscience? The distinction seems obvious, but attempts at a demarcation criterion - from Popper's 'falsifiability' to Langmuir's 'pathological science' - invariably fail, argues Michael D. Gordin.
A good place to start is a scholarly urban legend whose provenance is uncertain: “Nonsense is nonsense, but the history of nonsense is a very important science.” This statement is attributed to Saul Lieberman, a legendary Talmudic scholar, ostensibly when he was introducing the even more legendary Gershom Scholem. Maybe Lieberman said it; maybe he didn’t. Regardless, the content of the statement is far from being nonsense and opens up both a puzzle and a clue to its solution.
The puzzle is how we define what nonsense is in the first place. This is a central question of our moment, beset as we are with conspiracy theories and allegations of “fake news” and miracle cures. Indeed, it has been a central question ever since humanity began organizing its beliefs about knowledge: once you start that process, you need a way of dividing reliable claims from dubious ones. It’s not a solely academic problem. Everyone routinely sifts incoming information into (at least) two piles — we just don’t agree on how to do it. There are countless domains in which the puzzle is confronted.
Here, I will focus on determining what counts as a “pseudoscience.” Since being scientific is arguably the highest status our culture can assign to a knowledge claim, the contested boundary between things that we consider science and those other things that look like sciences but just don’t quite make it is especially fraught. The name for the puzzle in this context is the “demarcation problem,” a term coined by philosopher Karl Popper, and his proposed solution — the “falsifiability” demarcation criterion — remains the most famous.
Here is how he formulated it in 1953, at its first public outing: “One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.” That is, if you propose that a contention is scientific, but you are unable to formulate a test that it could fail — not that it does fail the test (then it would be false), but that it is possible to fail it — then the claim merely seems to be scientific but isn’t, i.e., it is pseudoscientific. You have to risk failure in order to be scientific. This definition has made it into middle-school textbooks and is very widespread among those who have opinions about the subject. It is pretty good on some counts: Freudian psychoanalysis and Marxist “scientific socialism” don’t do very well in terms of falsifiability, as every imaginable data point can be assimilated by advocates into confirming the theory. Totalizing theories don’t risk anything.
Otherwise, though, Popper’s demarcation criterion doesn’t work especially well. Besides some technical epistemological problems, the biggest concern is whether it parses the sciences in the right way. Indeed, this is a test we want any conceivable demarcation criterion to pass. We want our criterion to recognize as scientific those theories which are very generally accepted as hallmarks of contemporary science, like quantum physics, natural selection, and plate tectonics. At the same time, we want our criterion to rule out doctrines like astrology and dowsing that are almost universally labeled pseudosciences.
Popper’s falsifiability standard is not especially helpful on either count. It is difficult to present the “historical” natural sciences, such as evolutionary biology, geology, or cosmology — those fields where we cannot “run the tape again” in the laboratory — exclusively in terms of falsifiable claims. Those sciences provide persuasive explanations of nature through the totality of a narrative chain of causal inference rather than a series of empirical yes-no votes. Popper inadvertently excludes important domains of contemporary science. The situation with inclusion is even worse: it is trivially easy for creationists or Bigfoot searchers or UFOlogists to formulate falsifiable claims and propose tests to evaluate them. Yet most mainstream scientists would still not consider these to be sciences.
Fortunately, Popper is not the only game in town. Many other demarcation criteria have been proposed over the years. In 1953, Irving Langmuir (1932 Nobel Laureate in Chemistry) proposed a checklist of characteristics to determine whether a doctrine is a “pathological science.” You had reason to be suspicious, Langmuir said, when someone proposed a highly surprising result based on experimental findings at the very edge of the sensitivity of current measuring apparatus, yet which were ostensibly confirmed to very high accuracy. This was the case for the card-guessing statistical tests for extra-sensory perception (ESP) conducted by Joseph Banks Rhine at Duke University, for example. “Pathological science” works fine for ESP and a few other canonical examples (cold fusion, N-rays), but it is fairly impotent before the Loch Ness Monster or eugenics. It would also rule out some mainstream theories in cosmology and biomedicine that we have no other reason to doubt.
We see something similar with the demarcation criterion that insists we follow the scientific consensus in a field and rely on administrative procedures like peer-reviewed publication to separate the good from the bad: if a claim has not been peer-reviewed in a reputable journal, you should be very wary of it. This is excellent if you are looking to combat those who deny anthropogenic climate change or the safety of vaccines — those claims tend to appear in reports of industry-backed think tanks or in press releases. However, not every science relies on peer review. For decades, physics has released its findings through an unrefereed preprint server, arXiv.org. Physicists reason that if the field picks up a claim and develops it, it’s good; if the article languishes, it probably isn’t. A peer-review demarcation criterion would sink a whole discipline we otherwise want to keep around.
We will have this problem no matter which demarcation criterion we pick: it will “fringe out” some undesirables, but it will also drag areas of knowledge we consider legitimate along in their wake. Perhaps such collateral damage is acceptable, because a certain criterion (say, Popper’s or peer review) really does get rid of your proposed villain (Freud or climate denial). Shouldn’t we be impressed at the successes, instead of just lamenting the failures?
I would suggest not, and the reason why hearkens back to the apocryphal Lieberman quotation and the clue it gives us. We reveal something important when we focus not on the question of nonsense, but on its history. Typically, the question of determining nonsense has been left to philosophy, but adopting a historical approach instead can yield real benefits. The history of the set of doctrines that have been labeled pseudosciences teaches us two things, one general and one specific.
The first, general point is that the set of alleged pseudosciences is very large. There is a tremendous diversity to these ideas, and that diversity resists being cabined in by a single, simple criterion. The more you study the history of these doctrines, the more you see that the label “pseudoscience” is a cultural marker, attached by some mainstream scientists to a particular claim that vexes them at the moment. Over the centuries of scientific inquiry, the vexations (and the reasons behind the vexations) have varied enormously, which is why we find a motley crew of disgraced positions out there.
The second, specific point applies to our three sample demarcation criteria: each has its own history. Popper developed his ideas about demarcation in Vienna in the 1920s, after disillusioning experiences with both psychoanalysis and Marxism. He literally designed his falsifiability criterion to exclude those areas, so it is neither much of a surprise nor a point in its favor that it does so — it’s a tautology. Langmuir hated the ESP work Rhine was doing, and he crafted the category of pathological science to kneecap it. The focus on peer review was specifically mooted to deal with industry-sponsored denialism. I do not necessarily disagree with any of these goals, but the explicit targets revealed in the historical origins of each criterion ought to be factored into our evaluations of their plausibility. They also help account for their blind spots.
Demarcation criteria can be helpful: they can tell us where to look and when to apply stricter scrutiny to incredible claims — and perhaps then determine that they lack credibility. They are a laudable attempt to rationalize the ordinary process of sifting the good from the bad, but to date they have not worked. I am in no position to claim that they can never work, but the historical track record is not promising. Until a rock-solid criterion comes along, we are in the same position as the scientists: we use the epistemic tools we have at hand to make decisions on a case-by-case basis, sifting through the deluge of information as best we can.
Note: This corrected article was originally published under the title 'Making Sense of Nonsense'.