Physics is stagnating. We haven’t had any significant new theoretical breakthroughs in decades. Do we need a radically new way of understanding the universe? If we treat the world as a neural network in the process of learning, then we can better understand quantum gravity, quantum computing and consciousness, writes Vitaly Vanchurin.
Physicists of the 20th century deserve a lot of credit. They came up with not one, but two ground-breaking discoveries: quantum mechanics and general relativity. Yes, they had a good starting point, thanks to classical and statistical physics, but still the progress made was astonishing. Take for instance cosmology. Who would have thought that you can describe fluctuations during cosmic inflation before the big bang using quantum mechanics and then apply the rules of general relativity to study their evolution after the big bang? But if you do it right (thanks to Alexei Starobinsky, Alan Guth, Andrei Linde, Slava Mukhanov, Gennady Chibisov and others) you obtain very specific predictions that were later confirmed, first by the Cosmic Background Explorer and later by the Planck experiments. Remarkable, but the fairy-tale ends there. We still have no idea what drove inflation, or what dark energy and dark matter are. Clearly, we are missing something big, and many researchers (including myself) believe that we need a new theoretical framework which can unify the discoveries of the previous century. Whether it will be based on string theory, on quantum information or on neural networks, the new theoretical framework will likely be transformational.
But is there anything we can do now to hasten this transformational discovery? I suggest we do exactly what any artificial neural network (or any learning system) would do if it were stuck in a “local minimum” for a very long time: increase the “step size”. In the context of scientific research, the “local minimum” represents an inability to make incremental progress, and the increased “step size” represents a broadening of the scope of scientific research. To some extent this is already happening. Some physicists are introducing new mathematical concepts, bridging different areas of physics and initiating interdisciplinary collaborations, but this is not yet happening at all levels. Most scientists are not willing to conduct research outside their “comfort zone” for a very simple reason: it would mean a lot more work for a lot less recognition. That is where the real problem lies: the strategy that benefits individual researchers is the opposite of the strategy that would benefit civilization as a whole.
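The learning analogy above can be made concrete with a toy gradient-descent run. The loss function, starting point and step sizes below are my own illustrative choices, not anything from the article: with a small step size the optimizer settles into a shallow local minimum, while a larger step size carries it over the barrier into the deeper, global one.

```python
# Toy illustration of the "local minimum / step size" analogy.
# The loss f(x) = x**4 - 2*x**2 + 0.3*x is a tilted double well:
# a shallow local minimum near x = +0.96 and the global minimum
# near x = -1.04. (Function and step sizes chosen for illustration.)

def grad(x):
    """Derivative of f(x) = x**4 - 2*x**2 + 0.3*x."""
    return 4 * x**3 - 4 * x + 0.3

def descend(x, step_size, iters=500):
    """Plain gradient descent from starting point x."""
    for _ in range(iters):
        x -= step_size * grad(x)
    return x

cautious = descend(1.5, step_size=0.01)  # small steps: trapped near +0.96
bold = descend(1.5, step_size=0.2)       # large steps: clears the barrier, near -1.04
print(f"cautious ends near {cautious:.2f}, bold ends near {bold:.2f}")
```

Of course, too large a step size makes gradient descent diverge rather than converge, which is perhaps why the analogous strategy feels risky to individual researchers.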
My own attempt to increase the “step size” and to find a way out of the “local minimum” employs a rather bold idea: that the entire universe is a cosmological neural network. Its purpose is the same as that of any other neural network: to learn its training dataset or, in other words, to understand its environment. This may sound trivial, but what was less trivial was to show that for learning to be effective it must happen on all scales, from subatomic to cosmological. To check this hypothesis, I first developed a thermodynamic approach to learning (both equilibrium and non-equilibrium), and then applied it to describe natural phenomena (both quantum and classical) on a wide range of scales. Some of my calculations in this area were published in a recent paper entitled “The World as a Neural Network”. What this suggests is that the quantum, classical and gravitational effects that we observe around us might not be fundamental, but emergent behaviours of a cosmological neural network as it learns. If correct, this is telling us something very deep about how nature works.