Cosmology has made extraordinary progress in recent decades. Yet it now faces some fundamental physical problems and mathematical issues. In addition to these scientific challenges, much of modern cosmology relies on philosophical assumptions that are unnoticed, untested or unreliable.
In this far-reaching survey of the philosophical foundations of cosmology, George Ellis highlights the critical issues that underpin physical cosmology before outlining his metaphysical approach to understanding the nature of the cosmos.
Cosmology has made huge strides as a physical science since Einstein proposed the first quantitative cosmological model in 1917, when even the nature of galaxies was unknown.
On the one hand, cosmology has evolved into a mature science with sophisticated mathematical and numerical (computer) models supported by a large array of observations and data analysis. We now understand a great deal about the expansion and evolution of the universe. On the other hand, cosmology necessarily involves pushing the nature of scientific investigation to the limits, where philosophical assumptions rather than experiments and data start to shape theories.
Important philosophical issues in cosmology arise in both a narrower and broader sense. The narrower sense is the domain of Physical Cosmology. The broader sense is about the wider nature of cosmology: its relation to meaning and purpose and its relation to life as understood by societies through the ages. To distinguish this from Physical Cosmology, I’ll call this broader study Cosmologia. I’ll look at them in turn.
Physical cosmology is the study of the nature of the physical universe at the largest scale: what exists? What is it doing? How did it get to be what it is? It is a testable science, supported by many very sophisticated observations. We live in a galaxy (figure 1 left) made of billions of stars, which we see as the Milky Way on a clear night. The universe is made up of billions of galaxies that form massive clusters (figure 1 right) which are themselves structured into vast sheets and walls.
Cosmologists have an extraordinarily successful model of the geometry and evolution of the visible universe, from its initial Hot Big Bang stage until the present era. But the model has problems. A series of philosophical issues underlie these problems. To understand them fully, one must follow the physics and astronomy in some detail.
Observations of distant galaxies establish that the universe is changing with time – its overall scale is increasing as galaxies move further and further away from each other. The evidence is clear: galaxies are on average moving away from us, and the speed of recession increases with distance, a result initially established by Georges Lemaître and Edwin Hubble between 1927 and 1929. The universe is a dynamic universe. Its evolution is governed by the laws of physics, which are the same everywhere and unaffected by our existence. This is the First Principle of Indifference: humans cannot affect the evolution of the universe. It is what it is.
There are two long-range forces: electromagnetism and gravity. But electromagnetism has both positive and negative charges, which cancel out on large scales, so it has no long-range effect. Gravity is the only long-range force whose charge (in the case of gravity, the charge is what we call “mass”) is always positive, so gravitation controls how the universe evolves with time. The cosmological outcomes depend on the kind of matter and energy present. The two are locked in a closed causal loop: spacetime geometry tells matter how to move and controls how the matter density changes with time, while matter tells spacetime geometry how to change with time.
It is noteworthy that all of the greatest minds at the time when cosmology was first scientifically investigated (1917-1931) assumed as an unquestionable fact that the universe must be unchanging with time: it must be static. This was Einstein’s greatest blunder; he told both Alexander Friedmann and Georges Lemaître, who independently discovered the possibility of expansion, that they were wrong. It was, however, Einstein who was wrong – change with time is a core feature of the universe.
The basic model of cosmic evolution
So what is the nature of the Universe’s dynamic evolution? It is not just a geometric issue. It involves a series of key physical interactions that have shaped what exists today. What happened at the origin of the Universe is unknown. Very soon thereafter, there began an extremely brief period of extraordinarily rapid accelerating expansion (“inflation”) which diluted and cooled whatever matter was there at that time. This was followed by reheating and then a Hot Big Bang epoch when matter and radiation interacted strongly, so the universe was opaque to radiation. Primordial nucleosynthesis occurred, leading to the existence of helium and traces of deuterium and lithium in addition to hydrogen. Then decoupling of matter and radiation took place, and the Universe became transparent to radiation at the Last Scattering Surface (LSS), which emitted Cosmic Blackbody Radiation (CBR) that we detect today at the extremely low temperature of 2.725 K (−270 °C).
Quantum fluctuations during the inflationary era led to very small density fluctuations which formed the seeds of the gravitational instability that led to stars and galaxies coming into existence after about 400 million years. Some large stars evolved very rapidly and then exploded, spreading through space heavy elements such as carbon and oxygen that had formed in their interiors by stellar nucleosynthesis. This formed the basis for the existence of second-generation stars surrounded by planets on which life could form. While the expansion of the universe slowed down during the Hot Big Bang era and for a long while thereafter, in more recent times it has been speeding up due to some form of energy, referred to as dark energy, whose gravitational effect is repulsive. Its nature is unknown: it may just be a cosmological constant, a constant repulsive term proposed by Einstein in 1917.
Astronomy and cosmology are observational sciences rather than experimental sciences because we can’t do experiments on stars or galaxies or on the Universe per se. We can’t re-run the Universe with the same or different starting conditions to see if things work out differently. It does not even make sense to talk about laws for the universe, because the very nature of a law is that it applies to multiple objects, and there is no other Universe to which such a law could apply. Furthermore, cosmology is unique among observational sciences because we can’t compare the Universe with any similar objects, as we can in the case of mountains or stars or galaxies or elephants. Although we can make many observations of it, there is only one Universe for us to observe.
What we can do is make a variety of mathematical models of the Universe and compare their predictions with astronomical observations of the one Universe which actually exists. We can run an ensemble of such models with different parameters and see which predict outcomes that best fit the observational data. But the issue then is this: if there is a deviation between observations and our suite of models, such as the large low-temperature region in the Cosmic Microwave Background Radiation sky, do we say that our models are wrong in some way and that we must build better models? Or do we say that the model is fine because many of the processes happening are statistical and the deviation of observations from model predictions is acceptable in view of this stochasticity? This is the unavoidable issue of Cosmic Variance.
The universe is vast. The visible part of the universe contains about 200 billion (2 × 10^11) galaxies, typically between 3,000 and 300,000 light-years in diameter. By contrast, the distance to the Moon is about 1.2 light-seconds, to the Sun 8 light-minutes, and to the nearest star (Alpha Centauri) 4.36 light-years. Thus, each galaxy is immensely larger than the Earth, and the visible Universe immensely more so. Human beings are totally insignificant compared with the size of the Universe. This is the Second Principle of Indifference.
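The scale comparisons above amount to simple arithmetic with light-travel times. A minimal sketch (the distances are those quoted in the text; the ratios are computed from them):

```python
# Rough scale comparisons using light-travel-time distances.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # one year in seconds (= one light-year in light-seconds)

moon_ls = 1.2                             # Earth-Moon distance, light-seconds
sun_ls = 8 * 60                           # Earth-Sun distance: 8 light-minutes
alpha_cen_ls = 4.36 * SECONDS_PER_YEAR    # nearest star: 4.36 light-years
galaxy_ls = 100_000 * SECONDS_PER_YEAR    # a ~100,000 light-year galaxy diameter

print(f"The Sun is {sun_ls / moon_ls:.0f} times as far as the Moon")
print(f"Alpha Centauri is {alpha_cen_ls / sun_ls:.1e} times as far as the Sun")
print(f"A large galaxy spans {galaxy_ls / alpha_cen_ls:.0f} times the distance to Alpha Centauri")
```

Each jump in the ladder is several orders of magnitude, which is what makes the Second Principle of Indifference vivid.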
From the viewpoint of cosmology regarded as an observational science, this is a major problem. We can travel to every part of the surface of the Earth to see what is there, and we have photographed every part of the surfaces of the Earth and Moon from satellites in space. By contrast, we can only observationally access a very small part of the Universe. We can’t travel to distant parts of the Universe to see what is there. Furthermore, the entire existence of humanity (about 1.8 million years) is an instant of time compared with the age of the Universe (14 billion years).
Because light travels at a finite speed (3 × 10^8 m/second), we can’t see things as they are today. For instance, we see the Andromeda galaxy as it was 2.5 million years ago. We observe a snapshot of the distant past, not the present. Thus we can only view the Universe from what is effectively one point in space and one instant of time. We cannot see to earlier times than the Last Scattering Surface (LSS) because the Universe was opaque then.
All the images we see are projected onto a 2-dimensional sphere (“the sky”). We have to deduce the nature of the entire cosmos from (i) a single 2-dimensional multiwavelength image across the sky of what exists up to the matter-radiation decoupling redshift z* = 1100, and (ii) geological-type data about conditions close to our past world line well before matter-radiation decoupling. It is because of this that determining the distances of galaxies and other luminous sources is at the core of cosmology. Hubble’s great contribution was obtaining the first reliable measure of such distances via Cepheid variable stars. Our problem then is separating the evolution of the cosmos from the evolution of source properties. Observational cosmology centres on the search for standard candles – families of sources that can be considered to have standardised properties, such as Cepheid variables and Type Ia supernovae. But the concern is whether those properties could have been different a long time ago when, inter alia, the local environmental metallicity was different.
The starting point for proposing a geometry for spacetime is that there does not seem to be any preferred direction around us on the largest scales. For example, galaxy observations do not reveal more galaxies in one direction than in others, as would be expected if there were a region that might be the Centre of the Universe. If we assume that we are not special observers – we are not in a special place in the universe – then this holds for all observers: the Universe appears almost isotropic at every location. It is then a mathematical theorem that the universe must additionally be spatially homogeneous (it is the same everywhere at the same time). Thus, we can propose Universe models where the universe obeys both spatial homogeneity and isotropy about each point. Such models are the standard background models of cosmology and have no large-scale structure. To represent the formation of structure (such as galaxies), we perturb the background models to obtain more realistic models containing small inhomogeneities that grow over time to become galaxies and clusters of galaxies.
Geometrically, these background models are characterised by just a scale factor a(t) that varies with time, and a constant k determining their spatial curvature. The constant k is +1 if the universe has positively curved space sections and hence is spatially closed (you arrive back at the same point if you proceed in an undeviating way in any direction); k is 0 if the universe has flat space sections, obeying the rules of Euclidean geometry; and k is -1 if it has negatively curved spatial sections, with parallel lines deviating ever further from each other the further one goes.
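The background models just described can be written down compactly. In standard textbook notation (not spelled out in the text), these are the Friedmann-Lemaître-Robertson-Walker models; the line element and the equation governing the scale factor a(t) are:

```latex
% FLRW line element: a(t) is the scale factor, k = +1, 0, -1 the curvature constant
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]

% Friedmann equation: \rho is the energy density, \Lambda the cosmological constant
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}
```

The curvature term −kc²/a² is what distinguishes the three spatial geometries dynamically, which is why the sign of k matters so much.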
Originally the Copernican assumption underlying these models (“the universe is the same everywhere”) was just a philosophical assumption, adopted because it gave very simple models that worked well. Since then, a variety of observational tests have been developed to determine whether the universe actually is spatially homogeneous on the largest scales, and the assumption has now been transformed from an unverified philosophical one into an observationally tested scientific result, which is substantial progress. This confirms the Third Principle of Indifference: we are not the centre of the Universe; it has no centre.
As mentioned in the previous section, to study structure formation, we must consider perturbed models exhibiting small deviations from spatial homogeneity. These models lead to predictions of how structure formation is shaped by the background model evolution. These predictions lead to ways of testing those background models by a range of observations: how galaxy redshifts vary with distance; by Cosmic Microwave Background Radiation observations determining the CMB temperature, spectrum, anisotropy, and polarisation; by galaxy observations giving the matter power spectra (how much structure there is on different scales, which we compare with numerical studies of structure formation); as well as by primordial element abundance measurements.
All of these observations converge to determine the same set of cosmological parameters, which are the basis of what is often referred to as the Standard Model of Big Bang cosmology. These parameters indicate that the universe is nearly spatially flat, with its energy density composed of about 68.3% dark energy, 26.8% non-baryonic dark matter (not the same as ordinary matter), and 4.9% ordinary (baryonic) matter. It is widely accepted that, up to small variations in these parameters, these are excellent models of the Universe in which we live.
Despite the success of the standard model of cosmology, there are a series of things we do not know as regards the underlying physics, as regards cosmology itself, and as regards how they interact with each other. Much work is being done on all of these issues, which are considered below.
Key physics issues of the standard model of cosmology
- What is making the Universe speed up? That is, what is the nature of “dark energy”? It may just be a cosmological constant, but then again, it may be a dynamic field. We do not know. What we do know is that a simple quantum field theory calculation of the vacuum energy density that one would expect to act as an effective cosmological constant gets the answer wrong by 120 orders of magnitude compared to what is observed. Solutions to this drastic problem include supposing that a multiverse exists or that the theory of gravity needs modification (see below for both).
- What is the nature of the non-baryonic invisible dark matter which dominates structure formation and represents 85% of the matter in the universe? Many possibilities have been proposed, and many searches of various kinds have been carried out without success. What we do know is that it is quite unlike the ordinary matter that dominates on Earth because it does not directly interact with light, which is why we cannot see it.
- A possible explanation of both of the above issues is that the apparent existence of dark energy/dark matter is an illusion arising because we are using the wrong law of gravity in those large-scale contexts. General Relativity is very well tested on solar-system scales and verified by the recent set of gravitational wave observations. But do we need some form of modified gravity to explain the apparent dark energy and/or dark matter? Many possibilities are being studied.
- What caused the spectacular exponential expansion that most cosmologists believe took place in the very early universe? What is the inflaton that powered cosmic inflation? There are over 120 models on the market, but none of them is exceptionally convincing because none is based on well-established physics. However, despite this, it is commonly agreed that inflation took place and explains why the universe is as uniform as we observe it to be.
- What is the origin of the matter/anti-matter asymmetry we find locally? Matter completely dominates antimatter, despite them being theoretically on an equal footing. A set of conditions for this asymmetry to arise in the early universe has been proposed by Andrei Sakharov, but we do not know a specific mechanism that works.
These are not philosophical problems per se, but it is a bit unsettling to have such central physical pillars of the dominant observationally well-tested theory so uncertain. Do they possibly point to one or other of the underlying philosophical foundations – such as the tacit assumption that physics is the same everywhere in the universe as it is here on earth – being wrong?
One exploration of that possibility is through theories proposing that some of the “constants of nature” are in fact not constant but differ in distant regions of the universe. Such proposals have been made and tested to some extent but are not convincing.
Key cosmology issues of the standard model of cosmology
- Hubble tension. Precise measures of the Hubble constant (the rate of expansion of the universe today) are available from a variety of sources. Estimates from the near (late) universe differ significantly from those obtained from the far (early) universe: the early universe value is about 67 km/s/Mpc, while the late universe value is about 74–75 km/s/Mpc. The source of this discrepancy is unknown.
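To get a feel for the size of this discrepancy, note that 1/H0 sets the rough timescale of the expansion (the true age of the universe depends on the full expansion history, so this is only an indicator). A quick sketch using the two values quoted above:

```python
# Hubble time 1/H0 for the two discrepant measurements of the Hubble constant.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Return 1/H0 in billions of years, for H0 given in km/s/Mpc."""
    return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_GYR

early = hubble_time_gyr(67.0)  # early-universe (CMB-based) value
late = hubble_time_gyr(74.0)   # late-universe (distance-ladder) value
print(f"1/H0: {early:.1f} Gyr (early) vs {late:.1f} Gyr (late)")
```

A difference of roughly a billion years in the inferred timescale is far larger than the quoted measurement uncertainties, which is why the tension is taken seriously.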
- Large scale inhomogeneity? A possible resolution is that the universe is spherically symmetric about the earth but inhomogeneous, which could lead to such a prediction. Is the Copernican principle (we are not at the centre of the universe) really true? This relates to the issue of why the universe is so uniform, which is usually taken to be explained by inflation. But to verify that inflation is responsible, one must work out inflationary dynamics in very inhomogeneous spacetimes, whereas it is almost always worked out only in the context of almost homogeneous universes: the basic geometry is assumed from the start.
- The shape of the Universe. A key question is whether the spatial sections have positive, negative, or flat spatial curvature: that is, is k = +1, -1, or 0? This is important because, firstly, the dynamics of the universe is far richer when k = +1 than in the other two cases: only then can there be a static model, a bounce, a re-collapse, or a coasting phase. In addition, if k = +1 the universe is necessarily spatially closed: it has a finite volume at any time and contains a finite amount of matter and a finite number of galaxies, a huge difference from the infinite size of the universe (with an infinite number of galaxies) usually assumed. Unfortunately, current observations do not determine the sign of the spatial curvature. It may never be determined because it is so close to zero. (Note that actually being zero is infinitely improbable.)
- A Small Universe? One can also have spatially closed universes, with a finite number of galaxies and a finite amount of matter, for any curvature if they don’t have the standard topology: their large-scale connectivity is not the same as that of Euclidean space. A torus and a Möbius strip are examples in 2 dimensions. If k = -1, very complex spatial topologies are possible.
The fascinating possibility then is that if spatial sections close up on themselves on a small enough scale, we may have seen right around the entire universe since the time of decoupling. Then there are no visual horizons: we can see all the matter that exists in the universe, which is spatially finite in this case. We can see multiple images of many galaxies, and even perhaps see our own galaxy multiple times as it was at different stages of its history. In such a small universe our observational situation is quite different from that in the usual models with visual horizons. We can check this possibility by searching for identical circles in different directions in the CMB sky. They have not been discovered in the specific small universe topologies tested so far.
- A Multiverse? Are there other universe domains similar to the one that we observe, but with different parameters or even different physics (“a multiverse”)? This has been proposed for a variety of reasons, particularly the Anthropic issue (see below). However, it has the major problem that there is no hint that this is the case in galaxy surveys or in the CMB sky, which is the furthest we can see. If it exists, it lies beyond the visual horizon, so you can claim anything you want about it. I can claim something else: those claims can be neither proved nor disproved. Thus, one can suggest they are not scientific claims. The issue that then arises is whether “non-empirical theory confirmation”, based on Bayesian methods, is a valid scientific method. It is a very slippery road away from usual science.
Key issues at the intersection of physics and cosmology
- What is the fate of the far future universe? We don’t know for sure what will happen in the far future universe - this depends on the nature of matter/fields that will dominate then. As we don’t know the dynamical nature of dark energy, we don’t know the answer. However, if it is indeed a cosmological constant, the universe will simply expand forever at an ever-increasing rate, getting emptier and emptier and cooler and cooler until eventually even baryons decay away. Alternatives have been proposed, including Big Rip models where the universe expands an infinite amount in a finite time, but these depend on highly implausible speculations about the nature of matter; and Roger Penrose’s Conformal Cyclic Cosmology where the present aeon of expansion with energy density dying away is followed by a rebirth of matter in a new expansion phase. However, the mechanism whereby this happens is obscure and is not tested physics.
- Did the universe have a beginning? Despite the famous singularity theorems proved by Roger Penrose and Stephen Hawking giving evidence that there must have been a start to the universe even if it was very inhomogeneous and anisotropic early on, we still do not know if this is indeed the case, for two key reasons. Firstly, the early inflationary phase that is now a key part of the standard model of cosmology was driven by scalar fields that violate the energy conditions underlying the singularity theorems – that is what enabled the universe to accelerate at that time. Hence the singularity theorems don’t apply to the early universe when scalar fields dominate: their foundations are undone. Secondly, in the very extreme conditions we expect before inflation, what happens probably depends on the nature of quantum gravity, for which we have no good theory. So, we don’t know, in physical terms, whether the universe had a beginning or not. Some loop quantum gravity and “ekpyrotic” theories suggest there was no spacetime singularity (a start to the universe) then. Some theories suggest that the present expanding universe domain resulted from a previous collapse to a black hole state and a subsequent re-expansion. Uncertainty reigns.
- Underlying this is the Physics Horizon. If we could test the relevant physics in laboratories or particle colliders, we could resolve this issue. But we can’t, because there are limits to the energies we can reach in such experiments. The energies we can reach are not high enough even to test the physics of inflation, let alone earlier times. The highest collision energies in colliders at present are 14 TeV = 14 × 10^12 eV (at the LHC). If we could go 1,000 times higher, we still could not test the energy scales predicted by the simplest models of inflation, let alone the energy scales of quantum gravity. So, the relevant physics is untestable.
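The gap is easy to quantify. As a rough sketch, take a representative energy scale of order 10^16 GeV for simple inflation models and the Planck energy of about 1.22 × 10^19 GeV for quantum gravity (both figures are commonly quoted values assumed here, not taken from the text):

```python
# Orders of magnitude between collider reach and early-universe energy scales.
import math

lhc_ev = 14e12               # LHC collision energy: 14 TeV in eV
inflation_ev = 1e16 * 1e9    # ~10^16 GeV: assumed simple-inflation scale
planck_ev = 1.22e19 * 1e9    # Planck energy: the quantum-gravity scale

print(f"Inflation scale is ~10^{math.log10(inflation_ev / lhc_ev):.0f} times LHC energies")
print(f"Planck scale is ~10^{math.log10(planck_ev / lhc_ev):.0f} times LHC energies")
```

A factor of a thousand in collider energy barely dents a gap of ten or more orders of magnitude, which is the point being made.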
- Before the beginning. We can say nothing about what existed before the start of the universe, if it did have a start. The point is that if there is a beginning to the Universe, it is a start not just of space and time but of physics itself, for all the laws of physics are formulated as laws that apply within spacetime. The concept of “before the beginning” does not even make sense, for there was no “before” then – indeed there was not even a “then” then! Attempts to generate physical theories to account for the creation of the universe are pure speculation: the idea of physics laws holding then and being able to create the universe is not a coherent one. Such theories always rely on some form or other of ordinary physics already pre-existing, except for Stephen Hawking’s ingenious “no boundary” idea, where the universe initially had 4 spatial dimensions (and hence no time), which changed to three space dimensions and one time dimension and started expanding. However, this depends on a particular quantum gravity theory (the Wheeler-DeWitt equation) which is ill-formulated and untested, and we have no definitive way to test if it is correct or not. We have reached the boundaries of science, assuming as always that “science” relates to testable theories.
- What is the origin of the arrow of time that dominates everyday life, given that the basic physics equations relevant to everyday life are time symmetric? It is usually accepted that this is due to special initial conditions at the start of the universe: it started off in a very special smooth state, which is a priori highly improbable. That then raises the issue of why such special initial conditions should exist. Many people assume that inflation solves this issue by smoothing out the universe by the end of inflation, but Penrose convincingly argues that this is not the case when one takes gravitational entropy into account: inflation assumes substantial smoothness to start with, as this underlies the thermodynamic assumptions built into that theory. That is Penrose’s motivation for his Conformal Cyclic Cosmology, which has its own problems as mentioned above. Until we have a convincing theory of the origin of the universe, it seems we will just have to take a special starting geometry as a contingent condition (it could have been otherwise).
Treating cosmology as a science, based on tested physics on the one hand and solid observational evidence for its geometry and dynamics on the other, we have a basically solid model but with substantial uncertainty at the largest scales, at the smallest times, and at its foundations. The philosophical issue is to what degree we require that our models be testable. Existence of visual horizons and the physics horizon place strict limits on the extent to which this is possible. Our models are therefore underdetermined, for example with numerous inflationary models proposed and none accepted as pre-eminent, and the same is true for dark energy. We do not have a unique outcome.
Finally, given all this evidence about the nature of physical cosmology, what does all this have to say about the broader nature of cosmology: its relation to meaning and purpose and life?
An important question is whether there is life elsewhere in the universe. If so, is it based on carbon, as life on Earth is? What will it be like? Will it have evolved to the state of a technological civilisation?
This has been the subject of much speculation for centuries and has been formalised in the Drake Equation which breaks down the probability for such life into a number of different factors. The problem is that we have no idea what some of those probabilities are. In particular, we don’t know how life arose on Earth, for example, whether metabolism or genetic information came first, or whether they developed simultaneously in parallel. The honest answer is that we don’t know the probabilities. However, it is not unreasonable to assume, in view of the vast number of stars and hence planets in our galaxy, that life and even intelligent life is not uncommon in our galaxy and the universe beyond it.
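The Drake Equation itself is just a product of factors, so its structure is easy to write down even though several of its factors are, as noted, essentially unknown. In the sketch below the input numbers are purely hypothetical placeholders for illustration, not estimates:

```python
# Drake Equation: N = R* · f_p · n_e · f_l · f_i · f_c · L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of communicating civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical inputs for illustration only; f_l, f_i and f_c are the
# genuinely unknown factors the text discusses.
n = drake(r_star=1.0,        # star formation rate, stars per year
          f_p=0.5,           # fraction of stars with planets
          n_e=2.0,           # habitable planets per such system
          f_l=0.1,           # fraction on which life arises (unknown)
          f_i=0.01,          # fraction evolving intelligence (unknown)
          f_c=0.1,           # fraction developing detectable technology (unknown)
          lifetime=10_000.0) # years a civilisation remains detectable
print(f"N = {n:g} civilisations (with purely illustrative inputs)")
```

The point of the exercise is that N is exquisitely sensitive to the unknown biological factors: varying f_l, f_i, or f_c by a few orders of magnitude, which present knowledge cannot rule out, swings N from "the galaxy is crowded" to "we are alone".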
If life is not uncommon, there are huge philosophical implications as regards our relation to the Universe as well as our interactions with such life, should we ever encounter it. It would confirm that we are not in any way special in the universe, as was already fairly clear from its vast size. However, the distances are so large, and the speed of light limit on space travel so significant, that if life is indeed out there, we may never interact with it, except perhaps by receiving or exchanging signals.
What is likely is that if such life does exist, it will be based on organic (carbon-based) chemistry, and because of biological imperatives will have characteristics similar to life on Earth – metabolic networks, gene regulatory networks, homeostasis, sensory systems, brains based on neural networks, and so on. These will plausibly have been developed via multilevel processes of natural selection of an EVO-DEVO nature, preferentially selecting organisms whose developmental systems produce modular hierarchical physiological structures that provide selective advantage relative to other organisms, particularly when gene-culture co-evolution is taken into account.
The universe has a very special nature that allows life to exist. This need not have been the case. The expansion of the universe created galaxies and first-generation stars made of hydrogen and helium. Stellar nucleosynthesis in these stars then created the heavier elements necessary for planets and life to exist, allowing planets around second-generation stars to provide congenial environments where life could evolve. This is a downward process from the state of expansion of the universe to the resulting elements and structures: structure formation depends on cosmological conditions. If either physics or cosmology had been significantly different, this might not have happened. None of the evolutionary processes leading to life as we know it could have occurred if conditions were wrong. Our universe is in this sense a fine-tuned universe. As stated by Stephen Hawking:
The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.
Martin Rees gives examples of finely-tuned constants in his book Just Six Numbers, including the famous example of the cosmological constant Λ, discussed by Steven Weinberg: too large a cosmological constant or too small structure formation seeds can prevent structure formation (galaxies, stars, planets) from happening and therefore prevent life from existing.
The key question then is why this should be so, why is the universe fine-tuned for life? A commonly accepted answer is because we live in a multiverse: there are a great many universe domains out there beyond our visual horizon, maybe an infinite number of them, with varied physical properties; and if there are enough of them, conditions for life to exist will be fulfilled in some of them just by chance. In other words, the multiverse proposal works by claiming that the existence of a suitable such domain is after all probable. Furthermore, the existence of such domains is claimed to be an inevitable consequence of some specific forms of self-reproducing inflationary universes.
This proposal suffers from the testability issue mentioned above, both as regards the alleged spacetime geometry and as regards the physics supposed to underlie the existence of such domains. It is not a scientifically testable hypothesis in the usual sense. It is worth noting that only some of the many forms of inflation result in such a profusion of domains. There is also the problem of what mechanism is supposed to result in different physics being realised in each supposed universe domain.
But the real issue is that the proposal does not solve the anthropic problem, it just displaces it one level up. If a multiverse exists, why should it be of such a nature as to include any universes at all that are friendly to life? Whatever physical theory determines the existence of the many domains in a multiverse will also involve a set of parameters that in some cases will allow bio-friendly universes to exist in the multiverse and in other cases will not. So how do you justify the anthropic nature of your multiverse theory and its physics? The problem remains unsolved.
Underlying this are the deep, inter-related metaphysical issues:
- Why does the universe exist?
- What underlies its existence?
- Why does it have the nature it does?
The basic point is that these are not scientific issues. Science takes over once the universe has come into existence with specific laws of physics governing its dynamics, but scientific explanations do not work before the universe exists. Metaphysical explanation is needed. In section 4 I will approach this by an indirect route: namely the nature of possibility spaces.
I claim that the deep structure of the universe is timeless, eternal possibility spaces of a Platonic nature. These underlie the nature of what is possible in the physical universe and what is not. They are of varied nature. Possibility spaces describe facts such as: it is not possible to move faster than light; we cannot see what is happening in the Andromeda Galaxy at this precise instant; and it is not possible for a living being to survive without food.
Physics possibility spaces
These are a way of re-expressing the nature of the laws of physics in terms of a space of possible physical outcomes. They delineate which possibilities physics allows: for every possible combination of a system's parameters, a point is included in a multidimensional possibility space. They depend on the fundamental physical constants and determine engineering and biological possibilities.
In classical physics, they include phase spaces for position and momentum, the variables which characterise the possible motions of a pendulum or a spaceship or planet; for example, they determine whether a spaceship has enough energy to leave the solar system once its fuel is used up. In quantum physics, they are Hilbert spaces of wave functions, which represent, for example, the possible energy states of the system.
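The spaceship example can be made concrete. A state (distance r, speed v) lies in the "escape" region of the Sun's phase space exactly when its specific orbital energy, ½v² − GM/r, is non-negative. The sketch below uses standard physical constants (the Sun's gravitational parameter and the astronomical unit); it is an illustration of the phase-space idea, not part of the author's text.

```python
# Sketch: partitioning a simple phase space (distance r, speed v) into
# "bound" and "escape" regions via specific orbital energy.

GM_SUN = 1.32712440018e20  # m^3/s^2, Sun's standard gravitational parameter
AU = 1.495978707e11        # m, mean Earth-Sun distance

def specific_energy(r, v):
    """Specific orbital energy (J/kg) at heliocentric distance r (m), speed v (m/s)."""
    return 0.5 * v**2 - GM_SUN / r

def can_escape(r, v):
    """The point (r, v) lies in the escape region of phase space iff energy >= 0."""
    return specific_energy(r, v) >= 0.0

# Earth's orbital speed (~29.8 km/s) gives a bound orbit; the solar escape
# speed at 1 AU is about 42.1 km/s, so 42.2 km/s lies in the escape region.
print(can_escape(AU, 29_800.0))  # prints False (bound)
print(can_escape(AU, 42_200.0))  # prints True (escape)
```

The point of the sketch is that the possibility space is carved up by the physics once and for all: which region a given spaceship occupies depends only on its state, not on anyone's choices.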
Biological possibility spaces
These express possibilities in biology. They include classes of solutions to computational biology at the macro scale, as well as microbiology possibility spaces characterised by Andreas Wagner in his remarkable book Arrival of the Fittest.
There are possibility spaces of huge dimension for proteins, characterising both what kinds of proteins can exist and how they can fold, and for the genotype-phenotype maps for metabolism and gene regulation as described by Wagner. These depend on the underlying physics and constants of nature. They represent key emergent properties that enable biological functioning at the molecular level but, in the latter two cases, cannot be deduced from physics.
Most importantly and remarkably, biological possibility spaces include the possibility of agency, consciousness, and the symbolic capacity that allows society and technology to flourish. Without all of them, entities such as the computers that shape modern technology would not exist, for they are consciously designed and manufactured to achieve specific chosen goals. Just like the physics possibility spaces, biological possibility spaces are the same everywhere in the universe at all times. The same biological possibilities are available on every planet that may exist anywhere; which of the possibilities are realised depends on local conditions and history.
Exploration of mathematical properties such as the nature of Platonic solids, the value of the number π, the distribution of prime numbers, the nature of discrete and continuous groups, and the existence of Mandelbrot sets can be regarded as exploring a Platonic space of mathematical possibilities. These are all timeless, eternal verities that will be agreed upon by all competent mathematicians everywhere in the universe. We determine their nature by logical exploration of mathematical possibilities enabled by the interactive neural network nature of our brain, as described in another remarkable book: Plato's Camera: How the Physical Brain Captures a Landscape of Abstract Universals by Paul Churchland. A key point is that the abstract mathematical possibility space Ω_M itself is different from the representation Ω_MS of that possibility space occurring in any particular society S at any particular time. While Ω_MS (represented in some specific notation) depends on time and place and culture, Ω_M does not.
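The claim that such verities would be agreed upon everywhere can be illustrated with the Mandelbrot set: membership is fixed by the iteration rule z → z² + c, so any competent programmer anywhere, running the sketch below, would classify the same points the same way. (The iteration cap is a pragmatic choice of the illustration, not part of the mathematics.)

```python
def in_mandelbrot(c, max_iter=100):
    """Test whether the complex number c appears to lie in the Mandelbrot set:
    iterate z -> z^2 + c from z = 0 and check that the orbit stays bounded
    (|z| <= 2) for max_iter steps. max_iter is a pragmatic cutoff."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # orbit escapes: c lies outside the set
    return True  # orbit stayed bounded within the cutoff

print(in_mandelbrot(0 + 0j))   # prints True: the orbit of 0 stays at 0
print(in_mandelbrot(2 + 0j))   # prints False: the orbit escapes immediately
print(in_mandelbrot(-1 + 0j))  # prints True: the orbit cycles -1, 0, -1, ...
```

Nothing about the observer enters the computation; the structure being explored is there independently of who explores it.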
There is also an abstract space of all possible thoughts underlying all the thoughts that intelligent beings actually have. You cannot have a thought unless it is possible to have that thought!
Thoughts can be represented in many ways – that is what languages are for. The space of all possible thoughts is huge but finite, as meaningful sentences are of finite length and built from a finite vocabulary. In principle, having chosen a specific language for this purpose, a computer can print out a list of all possible meaningful sentences that are comprehensible because they respect the biological limitations on short-term memory; this fact demonstrates that such a space exists.
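The finiteness claim can be checked directly: over a finite vocabulary of size k, the number of word sequences of length at most L is the finite sum k + k² + … + k^L, and a program can enumerate them all. The tiny four-word vocabulary below is a hypothetical stand-in for a real language.

```python
from itertools import product

# Toy illustration that a bounded-length sentence space is finite and listable:
# with a vocabulary of size k and maximum length L, there are
# k + k^2 + ... + k^L possible word sequences in total.
vocab = ["stars", "shine", "galaxies", "form"]  # hypothetical 4-word language
max_len = 3

sentences = [
    " ".join(words)
    for length in range(1, max_len + 1)
    for words in product(vocab, repeat=length)
]

k = len(vocab)
expected = sum(k**n for n in range(1, max_len + 1))  # 4 + 16 + 64 = 84
print(len(sentences), expected)  # prints: 84 84
```

For a realistic vocabulary and sentence length the count is astronomically large, but the argument is the same: the space exists and is in principle enumerable.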
It will contain references to as yet unknown physical, biological, and engineering possibilities characterised by the other possibility spaces: it characterises the fact that it will be possible to refer to them in the future. This space is the foundation of our mental capacities, with some of these possibilities being realised in each individual brain via our brain functioning during our lifetime.
The key methodological question for the broad enterprise of cosmology is, what kind of data is relevant for cosmologia? Should we only take into account the cosmological data summarised in previous sections, together with physical data from laboratory and collider experiments? What about everyday life? Is that data for cosmologia? Should the nature of biological possibility spaces also be considered relevant? What about the mental platonic spaces? Which are relevant to understanding the Universe?
My position is that biological and mental emergent phenomena are not epiphenomenal: they are key irreducible aspects of reality, allowed firstly by the possibility spaces that underpin them, and secondly by the evolutionary and developmental processes by which they are realised in the physical universe. The existence of these possibility spaces is just as much data about the nature of the cosmos as is the physical data. They are key aspects of the way things are. Data about everyday life and associated possibilities is data about the universe. Thus, in my view, we should take into account data not only about physics and the physical universe but also data about life, including our mental and social universes. How we interpret this is up to us.
If we take this viewpoint, the really deep issue in cosmology is this: why do these possibility spaces exist and have the nature they do? Those relating to physical outcomes underlie physical cosmology and the nature of emergent physical entities including biological life. Biological emergence allows the existence of consciousness and mental properties. The nature of mental outcomes depends on the possibility spaces for logic and thoughts. All of this forms the big picture of the cosmos, which the family of possibility spaces makes possible. So what underlies their existence?
Given this foundation, we finally can turn to the issue: is there meaning in the universe? Physicists who take into account only the physics and cosmological data proclaim that there is no meaning in the universe, as the late Steven Weinberg did. But this is a selection effect resulting from the scope of the data they choose to take into account. It leads to the paradox of thinking that it is meaningful to make statements that the universe is meaningless.
It is a simple observational fact that the world is teeming with purpose: biological, economic, political, social, scientific. You can, if you wish, not take this into account in formulating your worldview. But if you do take it into account, it raises key issues: why and how does all this purpose exist? At a deep level, it exists because physical, biological, and mental possibility spaces allow it to exist. And here is the thing: possibilities that are realised include visions of great art and scientific achievement, poverty and wealth, compassion and selfishness, and above all concepts of love and hate, of good and evil. Moral and ethical virtues and evils are part of the mental possibility space which can be realised here or on any other planet in the Universe. They can be thought about because they are included in the mental possibility space and have been there since time began.
So how does this all, the physical universe including planets inhabited by intelligent life as well as the underlying possibility spaces, come to be this way? There are four possibilities.
1. There is no explanation or probability, things just happened to be the way they are.
This is logically and philosophically 100% solid. However, it is regarded as so unsatisfactory that almost no one accepts it. It provides no unifying view of the cosmos.
2. Things are inevitable: they could not have been any other way.
This is the fundamental physics project encapsulated in the title of Steven Weinberg's book Dreams of a Final Theory. But it has failed to deliver. The most sustained attempt by physicists to prove this is true has led to the String Theory Landscape, where at the foundations there are at least 10^500 possibilities for physics, most of which do not include the kind of physics we experience and test in laboratories on Earth. The attempt to show that physics is unique is a failed research programme.
3. Things are probable, because of some situation such as that we live in a multiverse.
As pointed out above, the multiverse proposal just postpones the basic issue: so why is the multiverse of such a nature as to admit life in any of its universes? Why does the multiverse have the nature it does? Any such probabilistic theory is either based on a deeper layer that needs further explanation or is pure happenstance, a meaningless uncaused situation as in view 1 above.
4. Things are meant to be that way. In some sense, meaning and purposes underlie the universe.
This can neither be proved nor disproved, as pointed out long ago by David Hume. But it is as coherent a possibility as any other, particularly if one takes into account the mental possibility spaces that relate to purpose and meaning. Humans have demonstrably contemplated purpose and meaning and ethics for millennia, and their existence is data on how things are. The existence of these possibility spaces is part of the deep structure of the cosmos, in the way that I have proposed above. In that sense, meaning is built into the foundations of existence. You are not obliged to take them into account in shaping a cosmological theory. If you do not do so, that will affect your view of the nature of the cosmos. If you do, the overall view is different.
 The word “evolution” is commonly used to refer to the change with time of the properties of the observable region of the Universe. This is not the same as “Evolution” in the Darwinian sense, as occurs in biology. (Back to section 2.1)
 Some of which is obscured in a wavelength dependent manner by the disk of our own galaxy. (Back to section 2.3)
 I am taking a specific stance concerning highly contested territory, for example the neutral theory of molecular evolution claims most evolutionary changes are due to random genetic drift that is selectively neutral.
There are also highly acrimonious debates concerning multilevel selection and a proposed Extended Evolutionary Synthesis. None of these debates changes the basic contention that adaptive selection will be at the root of extra-terrestrial life. (Back to section 3.1)
 Stephen Hawking, 1988. A Brief History of Time, Bantam Books, pp. 7, 125. (Back to section 3.2)