The Big Myths In And On Science

Why do bad scientific theories live for so long?

With Philosopher of Science Massimo Pigliucci

If there are two concepts that ought to be antithetical, they are science and myth. Science, after all, began with the Pre-Socratic philosophers, who made the very conscious move of rejecting the worldview of “the poets,” that is, people like Homer and Hesiod, in favour of looking at the cosmos as the result of natural phenomena that could, at least potentially, be understood by the human mind.

Xenophanes (c. 570 – c. 475 BCE), for instance, was the first philosopher to explicitly attack the authority of the poets, as nicely recounted in Peter Adamson’s Classical Philosophy, volume 1 of his History of Philosophy Without Any Gaps. The gods, Xenophanes recognised, far from being the movers and shakers of the universe, were made in the image of humans. The Trojans did not lose their war against the Greeks because the gods were divided by Paris’ justified but incautious choice of Aphrodite as the most beautiful of the goddesses (thus ticking off the other two competitors, Hera and Athena). They lost for economic and military reasons having to do with the strategic, and therefore highly enviable, geographical position of Troy in Anatolia.

Today, of course, we know that there wasn’t a single war waged on Troy, but several, as archaeologists have unearthed the remains of nine different layers of the city, spanning from Troy-I in the third millennium BCE to Troy-IX in 85 BCE. The legendary events to which Homer refers supposedly took place at the time of Troy-VII, around 1,200 BCE.

If myths are about legendary events that never happened — at least, not as described — and the supernatural forces that made them happen, science is all about the actual facts of the world and the exceptionless natural laws that govern it. At least, that’s the story that most scientists will tell you.


"Science makes progress one funeral at a time."


But science is a human activity, and as such it needs heroes and villains just like the ancient Greeks did. In other words, science comes with an embedded mythology. 

Take Isaac Newton, arguably one of the most celebrated scientists of all time, a bona fide scientific hero. Most people have heard about how he had a fundamental insight into the nature of gravity when an apple falling from a tree hit him on the head. The thing is, that story is apocryphal, although it is true that Newton wrote to his friend William Stukeley that he was in a contemplative mood, after dinner, in the shade of an apple tree, and that he started thinking about gravity when he saw one of the apples fall to the ground. Allegedly.

More problematically, Newton was a fairly nasty person. He was embroiled in a long controversy with Gottfried Wilhelm Leibniz, concerning the discovery of infinitesimal calculus. In 1711, the Royal Society published a report accusing Leibniz of plagiarism. It turned out later that the report was authored by Newton himself, a rather unethical move. Modern historians agree that Leibniz and Newton arrived at the idea of infinitesimal calculus independently.

Oh, and you know his famous phrase, “If I have seen further it is by standing on the shoulders of giants”? It appears to be a profession of modesty, but he wrote it in a letter to another of his rivals, Robert Hooke, who was short and hunchbacked, possibly to make fun of Hooke’s physical stature.

None of the above, of course, negates Newton’s spectacular achievements. It just puts the activity of science in a more human, less mythological, perspective. Neither heroes nor villains, but human beings.

Remaining within the realm of physics, another myth concerns the famous experiment that Galileo Galilei performed from the leaning tower of Pisa. The story goes that he dropped two spheres of different mass to show that the time it took them to reach the ground was independent of mass, thus contradicting the long-standing Aristotelian theory. Except that Galileo probably never carried out the deed, as most historians agree that it was a thought experiment.

The “experiment” was eventually carried out, by Apollo 15 astronaut David Scott, in 1971, on the Moon, which has the advantage of lacking an atmosphere, and therefore air resistance, which gets in the way of precise measurements. But by that time it was a demonstration, not an experiment, as nobody expected Galileo to be shown wrong.

A more recent myth of science has its roots in philosophy, of all places. An astounding number of high profile scientists seem to think that what distinguishes good from bad scientific theories is a property known as falsifiability — a term introduced by philosopher Karl Popper in the early part of the 20th century.

Popper was concerned with the so-called demarcation problem, how to distinguish science from non-science, and particularly from pseudo-science. He figured he had arrived at a neat and workable answer: for a theory or hypothesis to be considered scientific it would have to be falsifiable, meaning that there would have to be empirical ways in which it could be shown to be false if, in fact, it were false. In a sense, according to Popper, scientific theories can never be demonstrated to be true, the way mathematical theorems can. One can only say that a given theory has not been falsified. Yet. And that may change any day.

Newton’s mechanics, for instance, stood in place from the 17th through the 19th centuries. But then Einstein’s theory of general relativity was shown to be a far better approximation to reality, and Newton went the way of the dodo. (Well, not really, since Newtonian approximations are more than good enough to still be used in many practical applications.)

The problem with Popper’s solution to the problem of demarcation was that while it was indeed neat, it was not at all workable, at least in its simplest form — as he and other philosophers quickly realized. Take astrology, for instance, a quintessential example of pseudoscience. Its claims are eminently falsifiable, and indeed have been falsified. Over and over again. Nevertheless, astrology is still pseudoscience, not science. 

Contrariwise, string theory — the currently leading and highly controversial candidate to unify general relativity and quantum mechanics — is not falsifiable. Or at least, nobody knows how to falsify it. That, however, doesn’t make it a pseudoscience (though, arguably, it makes it questionable science, or better yet, scientifically-inspired metaphysics).

The so-called “string wars” between those physicists who support string theory as the most promising way forward and those who think it is a blind alley hinge, in part, on scientists’ partial, and astoundingly out of date, notion that what philosophers call naive falsifiability is a good criterion to separate good from bad science. It isn’t, and a strong clue is embedded in the term “naive.”

Here is another myth that is very popular among scientists. More than once I’ve heard my colleague Richard Dawkins say that a good scientist wishes nothing better than to be shown wrong, because that’s how humanity makes progress in its quest for the truth. My guess is that Dawkins hasn’t hung around a lab for a while, because this is most definitely not what scientists do, most of the time. On the contrary, there is plenty of anecdotal as well as systematic evidence (from the sociology of science) that scientists are just like every other human being: stubborn, in search of glory, money and sex (not necessarily in that order), and most certainly extremely unhappy whenever they are shown to be wrong. So much so that physicist Max Planck famously said: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it” (in: Scientific Autobiography, and Other Papers, 1948, translation by Frank Gaynor). In other words, science makes progress one funeral at a time. Especially if the funeral is that of a highly prominent, and therefore influential, scientist.

Which brings me to the biggest, and arguably most dangerous, myth of them all: that science is not socially constructed. Before you reach for your anti-postmodernist gun, let me add that I’m not going the way of some radical philosophers and sociologists of the 1990s, who engaged in the so-called science wars by claiming that scientific knowledge is largely or entirely socially constructed. 

Rather, I’m with much more thoughtful philosophers like Helen Longino and Ronald Giere, who have argued that science has never achieved, and will never achieve, a point of view “from nowhere,” that is, universal objectivity. That’s because science is a human activity, carried out by human beings, affected by human desires and foibles, and pursuing human interests. Not only is there nothing wrong with this, there is just no way to avoid it, regardless of the pretense of dispassionate analysis by many scientists and science textbooks.

Acknowledging the human component of the scientific enterprise has a number of practical consequences. Let me dwell briefly on two. The first one is pointed out by Longino: science benefits from diversity. It was only when women and minorities became a significant fraction of the medical research community, for instance, that the emphasis shifted away from assuming that the health of middle-aged white men was ipso facto representative of that of women, blacks, children, and so forth. This is a good example of the fact that science — being done by human beings — reflects the concerns and assumptions (and biases and preconceptions) of the humans who carry it out. Broaden and diversify that group, and biases will begin to be balanced by counter-biases, with better results for everyone.

The second consequence, discussed by Giere, affects the very way we think of what science does, as well as what we mean by truth. Although there is no such thing as a point of view from nowhere, what science does is allow us to look at any specific aspect of the world from a variety of perspectives, especially in the case of interdisciplinary approaches. I am an evolutionary biologist, so my interest is in the unfolding of life on Earth. A better and better understanding of evolution over the past couple of decades has resulted from the adoption of a number of angles from which we study the phenomenon: the organismal angle afforded by ecology, the bio-molecular angle that is the province of molecular biology, the more basic angle of physics and chemistry, the informational one of computer science, the geological one of paleontology, and so on.

None of these approaches, by itself, brings us “truth.” Nor do they do so when combined. The thing-in-itself, as Immanuel Kant would put it (Ding an sich, in German), is out of reach. And once we accept this notion, it is actually liberating. Science, then, deals with what Kant described as the complex of appearances, which are, presumably, the result of the interaction between the world as it is and our various means of empirical investigation. If this doesn’t sound good enough for you, then there is a distinct possibility that you are after science as a myth, not the real thing. But if you take seriously the move made by Xenophanes and the other Pre-Socratics, you’ll be happy to stay away from mythology and get down to doing real work.

Massimo Pigliucci is the K.D. Irani Professor of Philosophy at the City College of New York. His books include A Handbook for New Stoics: How to Thrive in a World Out of Your Control (The Experiment, with Greg Lopez), Nonsense on Stilts: How to Tell Science from Bunk (University of Chicago Press), and How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life (Basic Books).
