*Everyone from physicists like Michio Kaku to Marvel superheroes propagates the idea of the multiverse – an infinite set of parallel universes. But Jacob Barandes argues that any talk of multiverses is nothing more than wild speculation, be it in quantum mechanics or cosmology, and that physicists and philosophers are not doing the public a service by suggesting otherwise.*

**“Extraordinary claims require extraordinary evidence.” –Carl Sagan**

Our solar system is only a small part of an enormous galaxy, and our universe is filled with many billions of other galaxies. These are extraordinary claims, but we have extraordinary evidence to back them up, including vast amounts of direct photographic data from telescopes.

Readers may have heard that according to a particular interpretation of quantum theory, we’re living in a “quantum multiverse” consisting of parallel realities that exist “in superposition.” And according to certain lines of research coming out of string theory, our observable universe is only a small part of a vast “cosmic multiverse” containing other regions of space in which the fundamental laws of physics are substantially different.

These are extraordinary claims, too, but we don’t have extraordinary evidence for them – not by a long shot. As such, any talk of multiverses – whether the quantum kind or the cosmic kind – is nothing more than *wild* speculation at this point. With all due respect, physicists and philosophers are not doing the public a service by suggesting otherwise, even when they include caveats.


**The Incompleteness of "Textbook" Quantum Theory**

Before I can explain the motivation behind the idea of a quantum multiverse, I’ll need to talk about the axiomatic formulation of quantum theory, as introduced by Paul Dirac and John von Neumann in the 1930s. The Dirac-von Neumann axioms are a set of basic principles that form the operational core of most textbook treatments of quantum theory today. Accordingly, I’ll refer to this basic version of the theory as “textbook quantum theory.”

The first half of the Dirac-von Neumann axioms collectively assert that when we model a physical system using textbook quantum theory, we’re supposed to introduce a specific collection of esoteric mathematical entities, which I’ll describe in more detail later. If the physical system is closed off from exchanging information with its environment, then we’re supposed to regard these mathematical entities as evolving in time according to the Schrödinger equation.

___

This point merits reiterating: no experiment has ever actually seen an atom in two places at once, let alone a cat being both alive and dead, and the Dirac-von Neumann axioms do not say such things, either.

___

The second half of the Dirac-von Neumann axioms consists of prescriptions for turning these mathematical entities into predictions about measurement outcomes, probabilities of measurement outcomes, and statistical averages of measurement outcomes weighted by probabilities of measurement outcomes.

I keep repeating the term “measurement outcomes” here because the *only* connection the Dirac-von Neumann axioms make between textbook quantum theory and the real world is through measurement outcomes, measurement-outcome probabilities, and statistical averages of the former over the latter. The Dirac-von Neumann axioms don’t make any statements about anything else in the real world.

In particular, the Dirac-von Neumann axioms don’t support oft-heard statements that an atom can be in “two places at once,” or that a cat can be “alive and dead” at the same time. This point merits reiterating: no experiment has ever actually seen an atom in two places at once, let alone a cat being both alive and dead, and the Dirac-von Neumann axioms do not say such things, either.

When physicists and philosophers sometimes speak this way, they’re extrapolating both beyond the axiomatic ambit of textbook quantum theory and also beyond the bounds of present-day empirical science, perhaps to make quantum theory seem more colorful. Again, the textbook theory says nothing about phenomena of any kind happening except for measurement outcomes, their probabilities, and their statistical averages.

This state of affairs is far from ideal. Before the development of quantum theory, science painted a much richer world-picture, one in which all sorts of phenomena really *happen*. Today, scientists talk of, say, asteroids crashing into planets, and birds engaging in foraging behavior, among other examples. Textbook quantum theory, at least by itself, doesn’t permit such talk, which falls categorically outside the scope of the textbook theory’s axioms.

Worse, the Dirac-von Neumann axioms treat measurements as abstract, primordial events, without providing a definition of what actually *counts* as a measurement. The famous “measurement problem” refers to this gap in the textbook theory. Students are usually told they should know a measurement when they see one, and to “shut up and calculate,” in the words of the physicist David Mermin.

Sometimes students are instead told that the Schrödinger equation can *become* a measurement process when there are enough interacting particles in the system, through the invocation of a mathematical transformation called “decoherence.” Attempts to use decoherence to provide an axiomatic definition of a measurement remain controversial to this day.

It’s not hard to explain why. Any such attempt would require augmenting textbook quantum theory with an *additional* set of rigorous axioms connecting decoherence with an acceptable definition of a measurement, a definition ensuring that a *definite* measurement outcome is singled out from the set of *all possible* measurement outcomes. These additional axioms would need to be proven to be internally consistent, as well as consistent with the Dirac-von Neumann axioms.

There have been *many* hand-waving arguments in these directions over the past century. To sidestep the inherently approximate nature of decoherence, these arguments often appeal to unphysical idealizations, like pretending that one can carry out “infinitely many experiments,” or that physical systems have “infinitely many degrees of freedom.” But, so far at least, no one has constructed a robust and compelling axiomatic definition of a measurement that has achieved the desired ends and has won widespread agreement.

Keep in mind that providing an acceptable axiomatic definition of a measurement isn’t even the final goal, but only the first step. After finding such a definition, one would still need to explain how that definition could justify talk of all sorts of phenomena happening, like asteroids crashing or birds foraging, beyond the narrow case of measurements specifically.

The inevitable conclusion here is that as useful and predictive as textbook quantum theory is, it is not a complete physical theory, let alone a theory that can explain or account for why we can credibly talk of phenomena happening all around us.

**Wavefunctions, State Vectors, and the Multiverse**

Fortunately, attempting to use decoherence to build a rigorous definition of a measurement isn’t the only available approach for resolving the textbook theory’s incompleteness. The most obvious alternative is to try to turn textbook quantum theory into a theory that describes phenomena really happening – phenomena that go beyond mere measurements alone. But there is not yet a consensus on how to change the textbook theory in this way.

The Dirac-von Neumann axioms describe definite measurement outcomes that occur probabilistically. The simplest and most obvious class of approaches takes this probabilistic behavior seriously. These approaches involve reformulating quantum theory as a *statistical* description of a *stochastic* underlying reality – an underlying reality of phenomena truly happening, in a probabilistic way. (Full disclosure: I work on such approaches myself.)

There is good historical precedent for these statistical-stochastic approaches. Over a century ago, physicists reconceptualized thermodynamics as a statistical description of large numbers of particles bouncing around. Statistical-stochastic approaches to quantum theory would also be in keeping with our human experience of phenomena happening, if sometimes probabilistically.

An alternative class of approaches starts by taking the mathematical entities from the first half of the Dirac-von Neumann axioms and *reifying* them – that is, insisting that they’re metaphysically real things. These approaches are essentially a form of mathematical Platonism – they’re akin to postulating that the number 7 or the Pythagorean theorem is an object that exists ontologically.

Most of the approaches in either the statistical-stochastic class or the mathematical-Platonist class are intended to make the same empirical predictions as textbook quantum theory, as defined by the Dirac-von Neumann axioms. It follows that as long as experiments continue to show perfect agreement with the predictions of textbook quantum theory, there will be no empirical evidence favoring any one of these alternative approaches over any of the others. Arguments in favor of any one approach will therefore have to be *non-empirical* in character, and will reflect their advocates’ personal philosophical preferences.

___

Although Werner Heisenberg and Albert Einstein were usually on opposite sides in debates over quantum theory, one thing they agreed on was that they both strongly disliked Schrödinger’s view

___

Among the class of approaches that embrace mathematical Platonism, one such approach reifies a specific kind of mathematical entity called a “state vector,” and denies the existence of pretty much anything else. State vectors are abstract mathematical entities that belong to multidimensional vector spaces called “Hilbert spaces.”

One can generally translate a given state vector into an associated mathematical object, called a “wave function,” which is defined on yet another abstract space known as a “configuration space.” A configuration space is a set of *possibilities*.

As an example, a single particle has three coordinates, *x,y,z*, so a single quantum-mechanical particle has a configuration space that mathematically resembles the three-dimensional physical space with which we’re intimately familiar. Consequently, students learning quantum theory for the first time are often led to believe that wave functions “live in physical space.”

However, for a system of *N* particles, each particle contributes three coordinates, so the system’s configuration space is correspondingly *3N*-dimensional. But a *3N*-dimensional configuration space is not the same thing as physical space.
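One way to feel the difference is to count what a wave function has to keep track of. The sketch below is my own illustration, not from the essay (the helper name `wavefunction_grid_size` is hypothetical): if each coordinate is discretized with *k* sample points, a single particle’s wave function needs *k*³ amplitudes, but an *N*-particle wave function needs *k*^(3*N*) – the count grows with the dimension of configuration space, not of physical space.

```python
# Illustrative sketch (my example, not from the essay): counting the
# complex amplitudes needed to store a wave function on a discretized
# grid. With k sample points per coordinate, one particle needs k**3
# values, but N particles need k**(3*N), because the configuration
# space is 3N-dimensional rather than three-dimensional.

def wavefunction_grid_size(num_particles: int, points_per_axis: int) -> int:
    """Number of amplitudes for N particles, k grid points per coordinate."""
    return points_per_axis ** (3 * num_particles)

for n in (1, 2, 3, 10):
    print(n, wavefunction_grid_size(n, points_per_axis=10))
```

Even for ten particles on a coarse ten-point grid, the count reaches 10^30 amplitudes – nothing like a function living in ordinary three-dimensional space.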

In his early papers on quantum theory in the late 1920s, Erwin Schrödinger took the view that wave functions are physical things, and even intimated that configuration spaces are just as real as three-dimensional physical space. Although Werner Heisenberg and Albert Einstein were usually on opposite sides in debates over quantum theory, one thing they agreed on was that they both strongly disliked Schrödinger’s view.

Many people are familiar with Einstein’s famous statement in a 1926 letter to Max Born about refusing to believe that God plays dice with the universe. Most people don’t know that Einstein’s very next statement in the letter was an expression of dismay over the idea that wave functions in *3N*-dimensional configuration space ought to be treated as physical objects.

Nowadays, many physicists use the terms “state vector” and “wave function” interchangeably, despite the subtle differences between them. I’ll do the same for the rest of this essay.

The Schrödinger equation implies that state vectors change into increasingly ornate blends of other state vectors as a function of time. Eventually one obtains a “superposition” of uncountably many state vectors.
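In standard notation (my addition – the essay states this in words), the evolution prescribed by the first half of the axioms, and the superpositions it generates, look like:

```latex
% Schrödinger evolution of a state vector:
i\hbar \frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle
% whose solutions blend an initial state vector into superpositions
% of other state vectors, with coefficients summing in modulus-square to 1:
|\psi(t)\rangle = \sum_i c_i(t)\,|\phi_i\rangle, \qquad \sum_i |c_i(t)|^2 = 1
```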

From this starting point, advocates of the state-vector-only approach proceed to construct a tower of speculative hypotheses, each stacked on top of the last.

Their first speculative metaphysical hypothesis is the mathematical-Platonist claim that we ought to be reifying mathematical things like state vectors to begin with, and essentially regarding them as the only ingredients of physical reality.

Their next speculative hypothesis is that one can extend this state-vector-only approach from atomic systems to the entire universe as a whole – an extrapolation in distance scales of at least a factor of 10³⁶. To be clear, that’s a 1 followed by 36 zeros, or a billion billion billion billion. For purposes of comparison, there have been roughly 10¹⁷ seconds since the Big Bang.

___

There’s a pop-science version of the many-worlds interpretation in which each time someone carries out a measurement, the universe splits neatly into a countable number of new universes – say, two, or three, or ten. Newcomers to Everettian quantum theory, I beseech you: read the fine print.

___

One of the first people to take this extrapolation seriously was a physics PhD student at Princeton named Hugh Everett, whose 1957 thesis was originally titled “The Theory of the Universal Wave Function.” This overall approach has therefore come to be known as “Everettian quantum theory,” to distinguish it from textbook quantum theory.

One obvious threat to this extrapolation is that on astronomically large distance scales, gravity becomes a major player in the universe, and despite a century of effort, physicists don’t yet have a comprehensive theory that fully unifies quantum theory with gravity. For all we know, quantum theory breaks down on astronomically large scales, taking Everettian quantum theory down with it.

To be fair, this extrapolation hypothesis is at least somewhat physical – it’s a claim about the applicability of a known physical theory to larger-scale regimes.

On top of the speculative hypotheses in the tower so far, Everettian quantum theory adds another – that by an appeal to the notion of “emergence,” one can somehow get uncountably many universes to arise from superpositions of state vectors. These universes make up the purported quantum multiverse, and they’re responsible for Everettian quantum theory’s more familiar name: the “many-worlds interpretation.”

There’s a pop-science version of the many-worlds interpretation in which each time someone carries out a measurement, the universe splits neatly into a countable number of new universes – say, two, or three, or ten. Newcomers to Everettian quantum theory, I beseech you: *read the fine print*.

If you do, you’ll learn that this cartoon version does not accurately reflect the way Everettian quantum theory actually works. The claimed world-picture turns out to be far murkier and less conceptually coherent. According to Everettian quantum theory, each passing moment in time leads to an amorphous profusion of “emergent” universes that can’t be counted in any definitive sense.

The speculative metaphysical hypotheses can’t afford to stop there, because textbook quantum theory makes predictions about probabilities of measurement outcomes. Everettian quantum theory therefore has to find a way to account for those probabilities.

To that end, the next speculative metaphysical hypothesis is that although the many worlds physically coexist simultaneously in an uncountable “blob,” some worlds have “higher probabilities” than others. This supposition bears an uncanny resemblance to the famously paradoxical line from Orwell’s book *Animal Farm* that “all animals are equal, but some animals are more equal than others.”

The claim that probability can be derived *ab initio* from Everettian quantum theory has been the subject of a great deal of controversy. One of the most basic features of probabilities is that we apply them to a set of possible events in which *one* event in the set occurs and the other events in the set *don’t*. For the objective kinds of probabilities typically used in scientific theories, another basic feature is that we can make our experimental results converge to our probabilistic predictions as closely as needed by performing sufficiently many experimental trials.
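The second feature – convergence of observed frequencies to predicted probabilities – can be sketched with a toy simulation. This is my own illustration of an ordinary single-world probabilistic process (the helper name `observed_frequency` is hypothetical), using a pseudorandom coin in place of a quantum one:

```python
import random

# Illustrative sketch (my example): in a single-world, probabilistic
# reading, the observed frequency of an outcome converges toward its
# predicted probability as the number of trials grows -- the law of
# large numbers at work.

def observed_frequency(p_heads: float, trials: int, seed: int = 0) -> float:
    """Fraction of heads in `trials` simulated flips of a biased coin."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(trials))
    return heads / trials

for trials in (10, 1_000, 100_000):
    print(trials, observed_frequency(p_heads=0.5, trials=trials))
```

As the trial count grows, the printed frequencies cluster ever more tightly around 0.5 – exactly the convergence behavior that, as we’ll see, a quantum multiverse cannot guarantee.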

___

Advocates of Everettian quantum theory have written many papers and long books to try to get their arguments off the ground. But none of that should distract from a basic and rather obvious point: a tower of speculative metaphysical hypotheses does not constitute extraordinary evidence.

___

Everettian quantum theory denies both these features of probabilities. According to Everettian quantum theory, all possible outcomes actually occur with certainty somewhere in the quantum multiverse, and repeating an experiment just leads to more and more worlds that exhibit arbitrarily large deviations from our probabilistic predictions. For instance, according to Everettian quantum theory, no matter how many times you flip a quantum coin, there will always be worlds in which the coin comes up heads every single time.
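The bookkeeping behind that last claim can be made concrete. The sketch below is my own illustration (the helper name `branches` is hypothetical), using the simplified picture of one branch per outcome sequence: after *n* flips of a fair quantum coin there are 2^*n* outcome sequences, the all-heads sequence is always among them, and its Born weight shrinks as 2^−*n* without ever reaching zero.

```python
from itertools import product

# Illustrative sketch (my example): in a simplified branch-per-sequence
# picture, every one of the 2**n outcome sequences of n coin flips is
# realized somewhere in the multiverse. The all-heads sequence always
# exists; its Born weight 2**-n shrinks but never vanishes.

def branches(n_flips: int):
    """All outcome sequences for n flips, as tuples of 'H'/'T'."""
    return list(product("HT", repeat=n_flips))

n = 10
all_branches = branches(n)
all_heads = tuple("H" * n)
print(len(all_branches))          # number of branches: 2**n
print(all_heads in all_branches)  # the all-heads branch is always present
print(0.5 ** n)                   # its Born weight: 2**-n
```

This is of course the cartoon, countable version the essay warns against – the actual Everettian picture involves an uncountable profusion of branches – but it suffices to show why deviant all-heads histories can never be eliminated.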

Advocates of Everettian quantum theory have tried to parry these counterarguments in a variety of ways. One way is to take on yet *another* set of speculative metaphysical hypotheses about what constitutes “rational observers” – how they ought to think, how they ought to behave, and how they ought to place bets on future events, given their “self-locating uncertainty” about where they are in the quantum multiverse. Those familiar with Hume’s “is-ought dilemma” might not be surprised to hear that this strategy requires introducing more axioms.

The 18th-century philosopher David Hume argued persuasively that an “ought” statement cannot be derived solely from “is” axioms, but requires including “ought” axioms. For example, you can’t logically argue that one *ought* not steal based solely on the axiom that stealing *is* the taking of someone else’s property without permission – you also need to add an axiom that one *ought* not take someone else’s property without permission.

Similarly, arriving at statements about how observers *ought* to behave requires augmenting quantum theory with suitably defined “ought” axioms. The standard Everettian proposal is to take background axioms of rational decision-making that were first introduced centuries ago in the context of living in a *single* universe, and re-appropriate all this classical decision theory to the case of a quantum *multiverse*.

The trouble, of course, is that in a quantum multiverse, there are countless observers who act in all sorts of ways – some “rationally” and some “irrationally,” in the sense of the Everettian version of decision theory. The rational ones will assign probabilities in one way, and the irrational ones will assign probabilities in a different way (or not at all), and it’s hard to say much more. There is no mechanism for weeding out irrational observers in a quantum multiverse, or explaining why “rational” observers end up in worlds that line up correctly with their predictions, as opposed to finding themselves in worlds that don’t.

One can’t even make predictions about what “we” would see if Everettian quantum theory were correct, and whether what “we” would see is consistent with present-day scientific observations. In a quantum multiverse, there would be uncountably many copies of “us,” and those copies would see all kinds of things.


Advocates of Everettian quantum theory have written many papers and long books to try to get their arguments off the ground. But none of that should distract from a basic and rather obvious point: a tower of speculative metaphysical hypotheses does not constitute extraordinary evidence.

If anything, such a tower of assumptions is a clear sign of *weakness* for a theory. Indeed, to the extent that each speculative metaphysical hypothesis should reduce our overall confidence in the theory, a tall tower of them should make our confidence collapse altogether.

Given Sagan’s credo that extraordinary claims require extraordinary evidence, we should therefore treat the extraordinary claim that we’re living in a vast quantum multiverse of uncountably many universes as the wild speculation that it is.

___

Unless and until we obtain experimental evidence in favor of a theory in which the laws of physics can vary from one region of the universe to another, any talk of other universes with different laws of physics is purely speculative, and should be treated as such.

___

**The Cosmic Multiverse**

At this point in time, our best two theories of fundamental physics are the Standard Model, which provides a precise description of all the known non-gravitational forces within the scope of textbook quantum theory, and general relativity, Einstein’s theory of gravity. These two theories can be combined when gravitational fields are weak, but they don’t fit neatly together when gravitational fields are strong.

Physicists have worked for a century to combine these two theories into one self-consistent and fully comprehensive theory of “quantum gravity.” These efforts have led to a number of interesting developments, including some theories, like string theory, in which the laws of physics can vary from place to place in the universe.

However, none of these theories are definitive, and none have been confirmed by experiments of any kind. Unless and until we obtain experimental evidence in favor of a theory in which the laws of physics can vary from one region of the universe to another, any talk of other universes with different laws of physics is purely speculative, and should be treated as such.

The universe is an amazing place. It’s filled with wonders that should generate awe in all those who study it. Again, I have the utmost respect for my colleagues, but when some of them blur the distinction between reliable scientific theories and wild speculations, they risk diminishing the wonders we’re confident are out there. They also risk generating confusion in the public’s understanding of science, and there’s never been a time when we could afford to do that.
