Gravity: The Popper Problem

Does Einstein's theory fall short of good science?

The universe is expanding, and Einstein’s theory of gravity makes a definite prediction about how the expansion rate should change over time: it should decrease, since the gravitational attraction between all the matter in the universe continually opposes the expansion.

The first time this prediction was observationally tested, around 1998, it was found to be spectacularly in error. The expansion of the universe is accelerating, not decelerating, and the acceleration has been going on for about six billion years.

How did cosmologists respond to this anomaly? If they adhered to the ideas of philosopher Karl Popper, they would have said: “Our theory of gravity has been conclusively disproved by the observations; therefore we will throw our theory out and start afresh.” In fact, they did something very different: they postulated the existence of a new, universe-filling substance which they called “dark energy”, and endowed dark energy with whatever properties were needed to reconcile the conflicting data with Einstein’s theory.

Philosophers of science are very familiar with this sort of thing (as was Popper himself). Dark energy is an example of what philosophers call an “auxiliary hypothesis”: something that is added to a theory in order to reconcile it with falsifying data. “Dark matter” is also an auxiliary hypothesis, invoked in order to explain the puzzling behavior of galaxy rotation curves.

Karl Popper first began thinking about these things around 1920, a time when intellectuals had many exciting new theories to think about: Einstein’s theory of relativity, Freud’s theory of psychoanalysis, Marx’s theory of historical materialism, etc. Popper noticed that Einstein’s theory differed from the theories of Freud and Marx in one important way. Freud and Marx (and their followers) appeared unwilling to acknowledge any counter-examples to their predictions; every observed fact was interpreted as confirmation of the theory. Einstein, by contrast, made definite predictions and was prepared to abandon his theory if those predictions were found to be incorrect.

Popper argued, in fact, that this difference is the essential difference between science and non-science. A scientist, Popper said, is someone who states—before a theory is tested—what observational or experimental results would falsify it. Popper’s “criterion of demarcation” is still the best benchmark we have for distinguishing science from non-science.

At the same time, Popper recognized an obvious logical flaw in his criterion. Theories, after all, are arbitrary; they are created out of thin air. What is to keep a scientist, Popper asked, from responding to an anomaly by saying: “Oh, wait, that is not the theory that I meant to test. What I actually meant to propose was a theory that contains this additional hypothesis”—a hypothesis that explains the anomalous new data. (This is precisely what some cosmologists do when they say that dark energy has been in Einstein’s theory all along.) Logically, this is perfectly kosher; but if scientists are allowed to proceed in this way, Popper realized, there could be no hope of ever separating science from non-science.

So Popper came up with a set of criteria for deciding when changes or additions to a theory were acceptable. The two most important were: (i) the modified theory must contain more content than the theory it replaces: that is, it must make some new, testable predictions; and (ii) at least some of the new predictions should be verified: the more unlikely a prediction in the light of the original theory, the stronger the corroboration of the modified theory when the prediction is shown to be correct. Popper did not simply propose these criteria; he argued for them on logical and probabilistic grounds. Popper was adamant that the total number of verified predictions was irrelevant to judging the success of a theory, since theories can always be adjusted to “explain” new data. All that matters, he said, are the novel predictions—predictions that no one had thought to make before the new theory came along.

How does the standard cosmological model—which contains Einstein’s theory of gravity as part of its “hard core”—fare according to the standards set by Popper? Here I can’t resist first quoting from Imre Lakatos, a student of Popper who tested and refined Popper’s criteria by comparing them with the historical record. Lakatos distinguished between what he called “progressive” and “degenerating” research programs:

"A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is, as long as it keeps predicting novel facts with some success (`progressive problemshift’); it is stagnating if its theoretical growth lags behind its empirical growth, that is, as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (`degenerating problemshift’)."

 (Lakatos invented the term ‘problemshift’ because, he said, “‘theoryshift’ sounds dreadful”.)

The standard cosmological model clearly fails to satisfy the criteria set by Lakatos for a progressive research program. Dark matter, dark energy, and inflation were all added to the theory in response to unanticipated facts. None of these auxiliary hypotheses has yet been confirmed; for instance, attempts to detect dark matter particles in the laboratory have repeatedly failed. And the standard cosmological model is notoriously lacking in successful predictions; it seems always to be playing catch-up. The ability of the model to reproduce the spectrum of temperature fluctuations in the cosmic microwave background is often put forward as a notable success, but as astrophysicist Stacy McGaugh has pointed out, this success is achieved by varying the dozen or so parameters that define the model, and some of those parameters are forced to take values that are stubbornly inconsistent with the values determined in other, more direct ways. This does not quite meet the standard for a successful novel prediction.

All of this would be of fairly academic interest, if not for one thing. It turns out that there exists an alternative theory (or “research program”) of gravity, which has been around since the early 1980s, and which has quietly been racking up successful, novel predictions. As of this writing, about a dozen of its predictions—some quite startling when they were first made—have been verified observationally. And I am not aware of a single prediction from this research program that has been conclusively falsified.

I am referring here to the Milgromian research program. In 1983, Mordehai Milgrom suggested that galaxy rotation curves are flat—not because of dark matter—but because the laws of gravity and motion differ from those of Newton or Einstein in the regime of very low acceleration. Milgrom’s theory was designed to give flat rotation curves, and so the fact that it does so is not, of course, a novel prediction. But a long list of other predictions follows immediately from this single postulate. Milgrom outlined many of these predictions in his first papers from 1983, and a number of others have been pointed out since. One example: Milgrom’s postulate implies a unique, universal relation between the orbital speed in the outer parts of a galaxy and the total mass (real, not dark) of the galaxy. No one had even thought to look for such a relation before Milgrom predicted it; no doubt because—according to the standard model—it is the dark matter, not the ordinary matter, that sets the rotation velocity. But Milgrom’s prediction has been splendidly confirmed—a beautiful example of a corroborated, novel prediction.
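
For readers curious how both the flat rotation curve and this mass-velocity relation follow from the single postulate, here is a minimal sketch of the low-acceleration algebra; the symbols M_b (a galaxy’s baryonic mass), v_f (its outer rotation speed), g_N (the Newtonian acceleration), and a_0 (Milgrom’s acceleration constant, discussed below) are my own shorthand:

$$
g = \sqrt{g_N\, a_0}, \qquad g_N = \frac{G M_b}{r^2}, \qquad \frac{v_f^2}{r} = g
\quad\Longrightarrow\quad
v_f^4 = G\, M_b\, a_0 .
$$

The radius r cancels, so the predicted rotation speed is the same at every sufficiently large radius (a flat rotation curve), and its value is set by the baryonic mass alone.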

Milgrom’s theory is successful in another way that the standard model is not. In the early days of quantum theory, Max Planck pointed out that the convergence of various, independent determinations of Planck’s constant on 6.6 × 10⁻²⁷ erg-sec was compelling evidence for a theory of quantized energy (exactly which theory of quantized energy was not yet clear). It would be almost miraculous, Planck argued, for such convergence to exist otherwise. In the same way, Milgrom has pointed out that the “acceleration constant” a₀ that appears in his theory, and that marks the transition from Newtonian to non-Newtonian behavior, can be extracted from astrophysical data in many independent ways, all converging on the value ~ 1.2 × 10⁻¹⁰ m sec⁻². As I noted above, nothing like this degree of convergence exists for the parameters that define the standard cosmological model.
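
As a purely illustrative check (the mass and rotation speed below are round numbers I have assumed for a Milky-Way-like galaxy, not measurements), here is one way the relation sketched above can be inverted to pull an estimate of a₀ out of a single rotation-curve observation:

```python
# Illustrative sketch: infer Milgrom's constant a0 from an assumed
# flat rotation speed and baryonic mass, via v_f**4 = G * M_b * a0.

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30    # solar mass [kg]

M_b = 6e10 * M_SUN  # assumed baryonic mass of the galaxy [kg]
v_f = 180e3         # assumed flat rotation speed [m/s]

a0 = v_f**4 / (G * M_b)
print(f"inferred a0 ~ {a0:.1e} m/s^2")  # prints ~1.3e-10, near 1.2e-10
```

Many such independent estimates, drawn from quite different kinds of data and landing near the same value, are what the convergence argument amounts to.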

What does all this mean? As a non-cosmologist, I have no stake in the correctness of any particular theory of cosmology or gravity. But I am impressed by the arguments of philosophers like Popper and Lakatos, and by the demonstrated power of their criteria to distinguish between successful theories and theories that end up on the rubbish heap. And so I am encouraged by the fact that there is a small, but growing, group of scientists who have chosen to develop Milgrom’s ideas. It is hard for me to believe that these scientists aren’t on the track of something important—quite possibly a new, and better, description of gravity.


 

Further Reading:

The falsifiability (or lack of it) of the standard cosmological model is discussed in more detail in Merritt's Cosmology and Convention.
