The current orthodoxy of cosmology rests on unexamined assumptions that have massive implications for our view of the universe. From the size of the universe to its expansion, does the whole programme fail if one of these assumptions turns out to be wrong?
There is a great paradox haunting cosmology.
The science relies on a theoretical framework that struggles to fit and make sense of the observations we have but is so entrenched that very few cosmologists want to seriously reconsider it.
When faced with discrepancies between theory and observation, cosmologists habitually react by adjusting or adding parameters to fit the observations, proposing additional hypotheses, or even invoking “new physics” and ad hoc solutions that preserve the core assumptions of the existing model.
Today, some problematic parts of the Standard Model of Cosmology are attracting increasing critical attention. Dark matter, dark energy and inflation theory are parts of the standard theoretical framework that remain empirically unverified - and where new observations prompt ever more questions.
However, little questioning is heard of the many unverifiable core assumptions that make up our model of the universe.
Before any physics or mathematics is involved, the framework is based on a series of logical inference leaps - we count 13 - that work as invisible premises for the theory. Some of these are untestable or barely plausible, but they are necessary simplifying conditions that enable scientists to articulate a consistent scientific theory of the universe.
What if any of these hidden inferences happen to be fundamentally wrong?
In this article, we would like to focus on just a few of these unverified core assumptions that make up today's standard cosmology, in order to raise a question:
Has the current standard model become orthodoxy because it is very well-founded and proven - as the consensus view would have it? Or is it rather orthodoxy because it’s become ‘paradigm stuck’ - that is, path dependent and unable to generate a viable alternative?
How do we know the Universe?
Let's first look at this science in the big picture.
No, not the big picture story of the "Big Bang" - the hot and dense state of the universe as it was billions of years ago - but rather the empirical problem of how we as Earth-dwellers come to picture the universe scientifically.
Cosmology is different from other sciences in a fundamental way. The sheer scope of the subject matter covers the largest extent imaginable - literally - and it does so based only on observations from our own local place within it.
Unlike physics at the micro scale, cosmology cannot repeat its experiments under controlled conditions. And the scale of the macrophysical universe as we know it is at least 30 orders of magnitude larger than that of particle physics.
In examining the unfathomably large universe, astronomers face serious difficulties. How can we, from the very limited region of space that is visible, comprehend the entire universe - let alone measure it with confidence?
What is today called the Standard Model of Cosmology emerges in the context of these enormous limitations, which in turn require some far-reaching simplifying assumptions to make a universal theory possible.
As the physicist Sabine Hossenfelder recently pointed out, a key assumption like ‘the cosmological principle’ - that the universe is on average the same in all directions - does not hold up well against observations (or plausibility). But abandoning the cosmological principle would have enormous consequences and so it is resisted.
Some problematic assumptions run even deeper and may have been forgotten by cosmologists in the historical development of the model.
Cosmic Leap #1: measuring the universe
We measure the universe in billions of light years and megaparsecs with ostensibly astonishing precision. But how do we really know its true scale and how far away distant galaxies are from our own tiny place in the cosmos?
Astronomy has developed brilliant techniques for measuring distances but their validity is assumed to stretch far beyond what we can ascertain. Most of our cosmology is based on things we know with empirical confidence about our own galaxy, then hyperextended outwards toward infinity. In the case of the Big Bang model, this extension goes backward to a hypothetical 'early universe' horizon.
Certainly, within our own Milky Way galaxy we can measure distances quite accurately by triangulating visible stars. This ‘high-confidence zone’ for our empirical measurements corresponds to an estimated 0.00001% of the theoretical observable universe.
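To make the triangulation concrete, here is a minimal sketch of the parallax method, the bottom rung of that high-confidence zone. The star and its parallax value are real; the code itself is purely illustrative:

```python
# Parallax triangulation: as Earth orbits the Sun, a nearby star appears to
# shift slightly against the background stars. The distance follows directly:
# d (parsecs) = 1 / p (arcseconds). No cosmological model is assumed.
def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax angle."""
    return 1.0 / parallax_arcsec

# Proxima Centauri, our nearest stellar neighbour, has a parallax of ~0.768"
print(f"{parallax_distance_pc(0.768):.2f} pc")  # ~1.30 pc, about 4.2 light years
```

At larger distances the parallax angle becomes too small to measure reliably, and less direct methods must take over.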
Venturing beyond our galaxy with the mathematical framework of General Relativity to guide us, scientists can measure up to about 5% of the theoretical universe on a reasonably convincing empirical basis. Beyond this, however, the choice of cosmological model begins to affect both the measurement and the explanation of what astronomers see. This is because relativistic mathematical corrections must be applied before observations can be understood. For example, images of galaxies need to be resized and their brightness adjusted to account for the expansion of the universe while their light was travelling towards us. But these recalculations are in turn based on the very model that cosmologists seek to confirm in the first place.
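As a minimal sketch of this model dependence - assuming a flat Lambda-CDM universe with illustrative parameter values, not any survey's actual pipeline - consider how the 'luminosity distance' to a source at a given redshift depends on the model assumed:

```python
# How a model-dependent distance arises: the luminosity distance to a source
# at redshift z in a flat Lambda-CDM universe. Change the assumed model
# parameters (H0, Omega_m) and the "measured" distance changes with them.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299_792.458  # speed of light, km/s

def luminosity_distance_mpc(z: float, h0: float = 70.0, omega_m: float = 0.3) -> float:
    """Luminosity distance in Mpc for a flat LCDM model (illustrative values)."""
    # E(z) = H(z)/H0 for flat LCDM: matter term plus cosmological constant
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / h0) * comoving

# The same observed redshift yields different distances under different models
for om in (0.2, 0.3, 0.4):
    print(f"Omega_m = {om}: d_L(z=1) ~ {luminosity_distance_mpc(1.0, omega_m=om):,.0f} Mpc")
```

Varying the assumed matter density alone shifts the inferred distance to the same z = 1 source by several hundred megaparsecs - and every brightness and size 'correction' inherits that choice.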
Astronomers use a so-called distance ladder to measure much greater distances - up to 30% of the theoretical size of the universe, by some estimates - using light from supernova explosions as guideposts. At that distance and beyond, however, model-dependent errors could add up to more than 50% of the measured value. And the further out into the universe we go, the more we rely on the theoretical framework to make any estimate at all, and the lower our confidence in the accuracy of the distance ladder becomes.
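The supernova rung of the ladder works on the 'standard candle' assumption. A minimal sketch, taking the conventional calibration that Type Ia supernovae peak near absolute magnitude -19.3 - a value itself calibrated on the nearer rungs:

```python
# The standard-candle logic: if we assume we know how bright a Type Ia
# supernova really is (absolute magnitude M), its apparent faintness (m)
# gives its distance via the distance modulus: m - M = 5 * log10(d / 10 pc).
def distance_from_magnitudes(apparent_m: float, absolute_m: float = -19.3) -> float:
    """Distance in parsecs inferred from the distance modulus."""
    mu = apparent_m - absolute_m
    return 10 ** (mu / 5 + 1)

# A supernova observed at apparent magnitude 24:
d_pc = distance_from_magnitudes(24.0)
print(f"inferred distance ~ {d_pc / 1e6:,.0f} Mpc")  # roughly 4,600 Mpc
```

Every error in the calibration of M, and every model-dependent correction applied to m, propagates directly into the inferred distance.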
At these large distances the astronomer is forced to rely more heavily on parameters derived from General Relativity and on the redshift-distance inference (more on that below) to interpret observations as distances. Yet this interpretation is needed to support the Big Bang model - the very model used to derive the parameters on which the interpretation depends.
Thus a self-reinforcing circularity creeps into the measurements at the very basis of the scientific work - and is in turn used to reinforce key theoretical assumptions. Measurements corrected using a theoretical distance ladder are routinely taken as evidence for the model itself. Over time, this lends ever more invisible credence to the hypothetical premise underlying the model, even though that premise is not at all what the observations test.
This logical process is familiar to anyone who has studied myth creation, ideologies or religion, in which there is always a founding fiction. Is it outrageous to think that an advanced science could be based on little more than a continual repetition of the same idea?
Science is above all a practice, and it is simply very hard to do scientific work without making a set of reinforceable assumptions. What starts as "If A, then B" soon becomes "A, therefore B", and the fictional premise hardens into a 'fact' that must be protected in order to continue the line of research.
In other words, data analysis based on a hypothetical premise comes, over time, to be taken as implicit validation of that same premise, allowing astronomers to speak confidently about 'mountains of data' as 'incontrovertible proof' of the Big Bang theory - when in fact no data set, on its own or in aggregate, can test the fundamental assumptions of the model.
In the Standard Model of Cosmology, then, almost the entire universe is pictured through a gigantic inference leap from the one finite neighborhood we know, on a scale determined by an unverifiable hypothesis and a form of circular reasoning. You might argue that this is the only option, or the best one we have under the circumstances - but the model's claims are rarely presented with the humility that such uncertainty warrants.
Cosmic Leap #2: observing the expansion of space
It is considered a universal fact that space is expanding. But how do we really know this - and how do we infer from it that the universe must have expanded from a primordial hot and dense state?
While the astronomical distance ladder used to measure large distances leaps outwards with progressively lower confidence the further out we go, some key inferences in the cosmic framework are of a different kind: they leap from what we can observe to universal principles and universal laws.
One such principle is known as Hubble's law, upon which the entire Big Bang hypothesis rests. This 'law' is really a consensus interpretation of an observed phenomenon - it is not based on a demonstrated fact.
In the 1920s, the astronomer Edwin Hubble discovered a certain relation between the distance and redshift of galaxies. This redshift, a displacement of spectral lines toward longer wavelengths, appeared larger for galaxies at larger distances. The redshift phenomenon is well known on Earth as a result of the Doppler effect: the motion of a light source shifts its apparent colour. When galaxies were seen to have a spectral redshift, this was interpreted as a measurement of their velocity as they move away from us - a ‘recession velocity’.
At the time, Hubble and other astronomers noted that although a recession velocity always causes a redshift, the logic doesn’t necessarily run the other way: a redshift in the light received from a galaxy does not have to imply a recession velocity. But with few other plausible explanations for the redshift on hand at the time, the redshift-velocity inference became the accepted interpretation. In the context of General Relativity, the expansion of space mimics the Doppler effect, which could then explain the redshift Hubble observed.
Hubble's discovery meant that more distant galaxies appear to have larger recession velocities. This follows a pattern familiar from explosions: things flying apart at larger velocities end up at larger distances, which suggests they were all clustered together in the past.
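A minimal sketch of this inference chain - spectral shift to velocity to distance - in the non-relativistic approximation valid only for small redshifts (the observed wavelength and the H0 value here are illustrative):

```python
# The redshift -> velocity -> distance inference chain behind Hubble's law.
# Only the redshift itself is a direct measurement; the two later steps
# are interpretations layered on top of it.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (assumed value)

def redshift(observed_nm: float, emitted_nm: float) -> float:
    """z = (lambda_observed - lambda_emitted) / lambda_emitted."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen's H-alpha line is emitted at 656.3 nm; suppose we observe it at 669.4 nm
z = redshift(669.4, 656.3)
v = C_KM_S * z   # step 1 of the inference: redshift read as a Doppler velocity
d = v / H0       # step 2: velocity read as a distance via Hubble's law
print(f"z = {z:.3f}  ->  v ~ {v:,.0f} km/s  ->  d ~ {d:,.0f} Mpc")
```

Only the first number is observed; the velocity and the distance follow from the interpretation being questioned here.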
With something that looked like relativistic expansion, the Big Bang theory made its first appearance. The inference leap cosmologists made was to extrapolate Hubble's redshift-velocity relation to the entire universe. Assuming the expansion is the same everywhere, they inferred for their mathematical models that the universe must have expanded - and that all observed galaxies must, at an earlier time, have been compressed together in a hot and dense state.
The redshift-velocity interpretation is the most fundamental building block of Big Bang theory - and it has its share of empirical challenges. Under this interpretation, galaxies appear to rotate much faster than should be possible, and to move within galactic clusters faster than the laws of gravity allow. If the Doppler effect is the right explanation for the redshift, the measurements imply that far more mass is needed to explain the observed velocities.
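A minimal sketch of that mismatch, using a simple point-mass approximation and illustrative numbers for a Milky-Way-sized galaxy:

```python
# The rotation-curve problem: the circular speed expected from visible mass
# alone (point-mass approximation) falls off with radius, while observed
# rotation curves stay roughly flat. All numbers here are illustrative.
import math

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / solar mass

def expected_speed_kms(r_kpc: float, visible_mass_msun: float = 1e11) -> float:
    """Keplerian circular velocity if the visible mass were all there is."""
    return math.sqrt(G * visible_mass_msun / r_kpc)

for r in (5, 10, 20, 40):
    print(f"r = {r:>2} kpc: expected ~ {expected_speed_kms(r):3.0f} km/s, "
          f"observed ~ 220 km/s (roughly flat)")
```

Making the observed ~220 km/s come out at 40 kpc requires several times more mass than is visible.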
Based on the redshift-velocity interpretation, a consensus hypothesis arose with the development of Big Bang theory: that these unexplainable observations are caused by “Dark Matter” - an invisible substance for which there is no empirical evidence but which has the important function of keeping the cosmological framework intact.
Moreover, in observations of distant quasars, for example, an association with nearby galaxies is clearly detected in the data - which would make no sense if the model were correct. Cosmologists explain these quasar-galaxy associations as improbable chance alignments, despite the thousands of examples found in observational data.
Cosmologists today extrapolate the redshift-distance pattern well beyond observed galaxies on the assumption that "Hubble's Law" is universal. Because they observe a pattern that extends over a certain range, scientists assume this pattern will hold for the entire universe.
To be clear, this is not an unreasonable assumption - but the consequences would be enormous if it turned out to be even a little bit wrong, not least for the framework that scientists rely on.
Protecting the Core
The fundamental uncertainty on scale and the interpretation of redshift in far-away galaxies are only two of many cosmic inference leaps that underpin the Big Bang theory - parts of the theory that are as grounded in metaphysics as in physics.
Over decades of scientific labor the Standard Model of Cosmology has become a multi-layered construction that resembles the children's game of Jenga - where the stability of the upper layers is dependent on the layers below.
The ‘crisis in cosmology’ often referred to today usually focuses on either Dark Matter, Dark Energy or Inflation - all ideas that caught on more than 40 years ago and that have been perpetuated in scientific research ever since. But these are Jenga blocks that rest on the core theories at the base of the structure, where more problems reside.
In this sense, the Standard Model of Cosmology is exemplary of what the philosopher of science Imre Lakatos defined as a research programme - a concept that describes the situation better than Kuhn’s more famous notion of a paradigm in science.
A research programme consists of a hard core of theoretical assumptions that cannot be abandoned or altered without abandoning the programme altogether - and, around this core, a set of auxiliary hypotheses that are expendable: they may be altered or abandoned as empirical discoveries require, in order to protect the core. Dark matter, dark energy and inflation are all auxiliary parts of the cosmological research programme. The hard core includes General Relativity, the Big Bang model with its afterglow, the expansion of space, and the not inconsiderable assumption that the universe is uniform in all directions.
It is common scientific practice to add to or tweak the auxiliary hypotheses rather than question the core. For a scientist doing research, it is more constructive to propose "new physics" that is compatible with the hard core framework than to call fundamentals into question - at least if you want to get funding, publications, graduate students, and tenure.
Because cosmology only came about as a professional discipline with the invention of the Big Bang theory in the mid-20th century, that theory has effectively been the only major operative hypothesis for astronomical research - and therefore the only model that cosmologists can get funded to research. The observational evidence produced and accumulated under it is usually interpreted in its favour. This gives the model an appearance of solidity while giving cosmologists a false sense of security.
However, it would take many scientists, much funding and a long time to produce a reasonable alternative theory that could account for the almost nine decades of observations accumulated within the Big Bang framework. As a result, cosmology seems locked into a ‘zombie state’ - path dependent, stuck, and too big to fail.
As astrophysicist Stacy McGaugh says in the context of dark matter theory, “like a fifty year mortgage, we are still basically stuck with this decision we made in the 1980s… we’re stuck still pounding these ideas into the heads of innocent students, creating a closed ecosystem of stagnant ideas self-perpetuated by the echo chamber effect.”
McGaugh and Hossenfelder are among a growing group of scientists concerned about the ‘dark stuff’ who are making progress in questioning some of the most critical theories in cosmology.
Their effort may help a new generation of cosmologists realize that if these decades-old theories can be overturned, there is hope of solving cosmology’s deeper problems by re-examining its core principles.