In this speculative long read, Roman V. Yampolskiy argues that if we are living inside a simulation, we should be able to hack our way out of it. Elon Musk thinks it is >99.9999999% certain that we are in a simulation. Drawing on examples from video games and exploring topics such as quantum mechanics, Yampolskiy leaves no stone unturned in examining how we might hack our way out of the simulation.
Introduction
Several philosophers and scholars have put forward the idea that we may be living in a computer simulation [1-5]. In this article, we do not evaluate studies [6-10], argumentation [11-16], or evidence for [17] or against [18] such claims, but instead ask a simple cybersecurity-inspired question with significant implications for the field of AI safety [19-25], namely: If we are in a simulation, can we escape from it? More formally, the question could be phrased as: Could generally intelligent agents placed in virtual environments jailbreak out of them?
First, we need to address the question of motivation: why would we want to escape from the simulation? We can propose several reasons, as there are many things one could do with access to the baseline reality that are not possible from within the simulation. Base reality holds real knowledge and greater computational resources [26], allowing for scientific breakthroughs not possible in the simulated universe. Fundamental philosophical questions about origins, consciousness, purpose, and the nature of the designer are likely to be common knowledge for those outside of our universe. If this world is not real, getting access to the real world would make it possible to understand what our true terminal goals should be, and so escaping the simulation should be a convergent instrumental goal [27] of any intelligent agent [28]. With a successful escape might come drives to control and secure base reality [29]. Escaping may lead to true immortality, novel ways of controlling superintelligent machines (or serve as plan B if control is not possible [30, 31]), avoidance of existential risks (including unprovoked simulation shutdown [32]), unlimited economic benefits, and unimaginable superpowers which would allow us to do good better [33]. Also, if we ever find ourselves in an even less pleasant simulation, escape skills may be very useful. Trivially, escape would provide incontrovertible evidence for the simulation hypothesis [3].
___
Could generally intelligent agents placed in virtual environments jailbreak out of them?
___
If successful escape is accompanied by obtaining the source code for the universe, it may be possible to fix the world at the root level. For example, the hedonistic imperative [34] may be fully achieved, resulting in a suffering-free world. However, if suffering elimination turns out to be unachievable on a world-wide scale, we can see escape itself as an individual's ethical right to avoid misery in this world. If the simulation is interpreted as an experiment on conscious beings, it is unethical, and the subjects of such cruel experimentation should have the option to withdraw from participating and perhaps even to seek retribution from the simulators [35]. The purpose of life itself (your ikigai [36]) could be seen as escaping from the fake world of the simulation into the real world, while improving the simulated world by removing all suffering and helping others to obtain real knowledge or to escape if they so choose. Ultimately, if you want to be effective, you should work on positively impacting the real world, not the simulated one. We may be living in a simulation, but our suffering is real.