*Physics is stagnating. We haven’t had any significant, new theoretical breakthroughs in decades. Do we need a radically new way of understanding the universe? If we treat the world as a neural network which is in the process of learning, then we can better understand quantum gravity, quantum computing and consciousness, writes Vitaly Vanchurin.*


Physicists of the 20th century deserve a lot of credit. They came up with not one but two ground-breaking discoveries: quantum mechanics and general relativity. Yes, they had a good starting point, thanks to classical and statistical physics, but the progress made was still astonishing. Take for instance cosmology. Who would have thought that you can describe fluctuations during cosmic inflation before the big bang using quantum mechanics and then apply the rules of general relativity to study their evolution after the big bang? But if you do it right (thanks to Alexei Starobinsky, Alan Guth, Andrei Linde, Slava Mukhanov, Gennady Chibisov and others) you obtain very specific predictions that were later confirmed, first by the Cosmic Background Explorer and later by the Planck experiments. Remarkable, but the fairy-tale ends there. We still have no idea what drove inflation, or what dark energy and dark matter are. Clearly, we are missing something big, and many researchers (including myself) believe that we need a new theoretical framework which can unify the discoveries of the previous century. Whether it will be based on string theory, on quantum information or on neural networks, the new theoretical framework will likely be transformational.

Whether it will be based on string theory, on quantum information or on neural networks, the next theoretical breakthrough will likely be transformational.

But is there anything we can do now to hasten this transformational discovery? I suggest we do exactly what any artificial neural network (or any learning system) would do if it were stuck in a “local minimum” for a very long time: just increase the “step size”. In the context of scientific research, the “local minimum” represents an inability to make incremental progress and the increased “step size” represents broadening the scope of scientific research. To some extent this is already happening. Some physicists are introducing new mathematical concepts, bridging different areas of physics and initiating inter-disciplinary collaborations, but this is not yet happening at all levels. Most scientists are not willing to conduct research outside their “comfort zone” for a very simple reason: it would mean a lot more work for a lot less recognition. That is where the real problem lies: the strategy which benefits individual researchers is the opposite of the strategy which would benefit civilization as a whole.
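The local-minimum analogy can be made concrete with a minimal one-dimensional sketch of gradient descent. The function, step sizes and iteration counts below are illustrative choices of mine, not anything from the essay: with small steps only, the iterate settles into the nearby shallow minimum, while a brief phase of larger steps kicks it out of that basin before the small steps take over.

```python
def grad(x):
    # Gradient of f(x) = x**4 - 3*x**2 + x, which has a shallow local
    # minimum near x ~ 1.13 and a deeper global minimum near x ~ -1.30.
    return 4 * x**3 - 6 * x + 1

def descend(x, step, iters):
    # Plain gradient descent with a fixed step size.
    for _ in range(iters):
        x -= step * grad(x)
    return x

f = lambda x: x**4 - 3 * x**2 + x

# Small steps only: stuck in the nearby local minimum.
stuck = descend(1.5, step=0.02, iters=500)

# A few large steps first ("increase the step size"), then small steps:
# the large steps carry the iterate out of the shallow basin, and the
# small steps then converge to the deeper minimum.
x = descend(1.5, step=0.25, iters=5)
escaped = descend(x, step=0.02, iters=500)

print(stuck, f(stuck))      # shallow local minimum
print(escaped, f(escaped))  # deeper global minimum
```

The same trade-off appears in the essay's analogy: small, safe steps guarantee incremental progress but only within the current basin; occasionally taking bigger, riskier steps is what makes escaping it possible.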

My own attempt to increase the “step size” and to find a way out of the “local minimum” employs a rather bold idea: that the entire universe is a cosmological neural network. Its purpose is the same as that of any other neural network: to learn its training dataset or, in other words, to understand its environment. This may sound trivial, but what was less trivial was showing that for learning to be effective it must be happening on all scales: from subatomic to cosmological. To check this hypothesis, I first developed a thermodynamic approach to learning (both equilibrium and non-equilibrium) [1], and then applied it to describe natural phenomena (both quantum and classical) on a wide range of scales. Some of my calculations in this area were published in a recent paper entitled “The World as a Neural Network” [2]. What this suggests is that the quantum, classical and gravitational effects that we observe around us might not be fundamental, but emergent behaviours of a cosmological neural network in the process of *learning*. If correct, then it is telling us something very deep about how nature works.

The proposal can also be viewed as a new attempt to reconcile quantum mechanics and general relativity – ‘the problem of quantum gravity’. In other words, neural networks might be the missing link in the unification of quantum mechanics and general relativity. On the smallest scales the cosmological neural network is at equilibrium, which is very well described by quantum mechanics, but on the largest scales the neural network is still very far from equilibrium, which is better described by general relativity. In addition, the neural network model might shed light on the problem of observers – ‘the measurement problem’ in quantum mechanics and ‘the measure problem’ in cosmology – but for that we must first develop a better understanding of macroscopic observers and, perhaps, consciousness. This is where input from biologists might be absolutely essential.

Does this mean that neural networks give us an improved theoretical framework for doing science in the 21st century? It is too early to say for sure, but it is encouraging that a growing number of physicists, biologists and computer scientists are seriously considering this possibility. For example, initially it was not clear exactly when the learning dynamics at equilibrium would be correctly described by the Schrödinger equation [1,2], but later (together with Mikhail Katsnelson) we showed that this happens when the number of neurons varies [3]. This also opened up the possibility of building an artificial quantum computer, i.e. an artificial neural network running on a classical computer which is capable of performing quantum computations. This is something that we are currently discussing with machine learning experts.

The neural networks might be the missing link in the unification of quantum mechanics and general relativity.

The idea of using artificial neural networks for machine learning originally came from biology, but, if the universe is a neural network, then we may be able to “return the favour” and use machine learning to study, for example, biological evolution. I am now working together with biologists on developing such a theory and things look very promising.

What about cosmology? Once again, it is premature to report final results, but preliminary results suggest that the Big Bang may be nothing but the “Aha! moment” of our universe viewed as a learning system. And that is only the beginning of the long and exciting journey that lies ahead.

[1] V. Vanchurin, “Towards a theory of machine learning”, Machine Learning: Science and Technology, https://iopscience.iop.org/article/10.1088/2632-2153/abe6d7

[2] V. Vanchurin, “The world as a neural network”, Entropy 2020, 22(11), 1210, https://www.mdpi.com/1099-4300/22/11/1210

[3] M. Katsnelson and V. Vanchurin, “Emergent quantumness in neural networks”, arXiv:2012.05082, https://arxiv.org/abs/2012.05082
