Rather than imagining that we are somehow outside of the universe that physicists model, we should see our embedded intelligence as a central part of reality and as critical to what happens. Doing so can help make sense of the passing of time and our experience of free will, writes Jenann Ismael.
Any attempt to describe the universe as a totality inevitably involves self-reference. This isn’t something that one often confronts in physics. Most day-to-day physics is modelling other systems: cells, gases, planets. We maintain a separation of subject and object, or of investigator and system being investigated. And even though cosmology is explicitly devoted to the study of the universe as a whole, it is customary in cosmology to maintain the imaginative fiction that we – the people modelling the universe – are looking at it from the outside. We adopt, that is to say, the God’s Eye View.
Ultimately, though, we are part of the universe. And that means that however we regiment the universe, whatever regime we work in, if we aim for a theory that describes all of existence, self-reference is unavoidable. Any system that is modelling the universe as a whole – aiming for full coverage of all of existence – is going to encounter self-reference. This is something that we can ignore in some contexts. It matters in others.
The people who have unavoidably encountered it are those trying to program an artificial general intelligence (an AGI). They want to program a system with a body of general knowledge and the ability to model the world, and they are coming up against the fact that some of what happens in that world is what the computer itself does, which gives rise to the possibility of paradox.
Let me give you a simple example. Suppose we want to program a computer to serve as a grand overarching database for the universe: a repository of information about everything. We begin by programming it with as much factual information as we can about the world. We program the laws of physics and all of the scientific knowledge we’ve amassed. We add the facts of history, what we know about the monkeys of Costa Rica and the vast reaches of space. The goal is to be able to put any question of physical fact to it and have the answer appear in the output channel.
It is not, however, hard to find a factual question that it can’t answer truthfully. Ask it: ‘Is the answer to this question that’s about to be displayed in the output channel “no”?’
Think this through and you will see that any answer that it gives will be false. If it answers ‘yes’, clearly it is wrong, since that misdescribes what it wrote. And if it answers ‘no’, it is wrong as well. This might seem like a little logical glitch but its significance is profound.
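The bind can be made concrete with a short sketch (my own illustration, not from the text): enumerate the only two answers the database could display and check whether either one truthfully describes what appears in the output channel.

```python
# Illustrative sketch of the self-refuting question put to the database:
# "Is the answer about to be displayed in the output channel 'no'?"
# For each possible displayed answer, check whether it comes out true.

def claim_is_true(displayed: str) -> bool:
    fact = (displayed == "no")        # what is actually displayed
    asserted = (displayed == "yes")   # what the answer claims the fact is
    return asserted == fact

for answer in ("yes", "no"):
    print(f"answer {answer!r} is true: {claim_is_true(answer)}")
# Both lines report False: whichever answer appears, it misdescribes itself.
```

Displaying ‘yes’ asserts that ‘no’ appears (false), and displaying ‘no’ denies that ‘no’ appears (also false), so neither branch is truthful.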
These kinds of problems are familiar to philosophers and certainly to computer scientists. The problem arises because what the computer does in giving the answer interacts with what the answer says by rendering it false. And by saying things, the computer is doing things. There is interference between what it says and what it does.
There’s nothing mysterious here. What is happening is that the computer can’t stabilize the fact that the answer is meant to describe (the word that appears on the screen) independently of giving the answer, and the thing is set up so that no matter which answer it gives, what it does in giving it conflicts with what it says. This kind of logical paradox is a form of negative interference.
Negative interference can arise in other ways. Let’s say I offer you a bet for $1, for which you get $10,000 if you correctly predict the output of a simple deterministic switch that will go up or down depending on the input. The glitch is that you have to tell me your prediction, and the deterministic switch is set up to do the opposite of what is predicted. This is an example of causal interference. The prediction will cause its own defeat.
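A minimal sketch of the bet (the function and names are mine, for illustration): the switch is wired to do the opposite of whatever is announced, so no announced prediction can come true.

```python
# The deterministic switch does the opposite of the announced prediction.
def switch(predicted: str) -> str:
    return "down" if predicted == "up" else "up"

for prediction in ("up", "down"):
    outcome = switch(prediction)
    print(f"predicted {prediction}, switch went {outcome}")
# In both cases predicted != outcome: the prediction causes its own defeat.
```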
Interference can be positive as well. If we ask the computer whether ‘yes’ will appear in the output channel in response to this question, or indeed what words will appear, we get the positive form of semantic interference. It doesn’t matter what answer the computer gives; the answer will be true. The causal version of positive interference is a self-fulfilling prophecy. A man’s insecurity about keeping his commitment-shy lover drives her away. A favorable prediction by an influential analyst drives stock prices higher.
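The positive case can be sketched the same way (again my own illustration): if the question is whether ‘yes’ will appear in the output channel, whatever the computer displays truthfully answers it.

```python
# Question: "Will 'yes' appear in the output channel in response to
# this question?" Whatever is displayed, the claim it makes coincides
# with the fact it describes.

def claim_is_true(displayed: str) -> bool:
    fact = (displayed == "yes")       # whether 'yes' in fact appears
    asserted = (displayed == "yes")   # what the displayed answer asserts
    return asserted == fact

for answer in ("yes", "no"):
    print(f"answer {answer!r} is true: {claim_is_true(answer)}")
# Both lines report True: any answer confirms itself.
```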
The phenomenon is incontestable. It is a form of interaction effect. Any system that is acting in the domain it is representing (like the computer in our examples) is going to encounter interaction effects. They arise naturally and inevitably for such systems, though whether and in what way they matter depends on the setting.
Interaction effects can interfere with pure knowledge acquisition. If you are trying to build a system that is just going to be a passive repository of knowledge, self-reference is a logical glitch that you want to suppress or whose effects you want to quarantine. But it is a different story for an embodied intelligence which can use its knowledge to guide its own behavior. A system like that knows that its own knowledge has interaction effects in the domain that it represents and instead of suppressing or nullifying those effects, it anticipates and exploits them. The whole point of intelligence from a natural perspective is to use knowledge to guide behavior. When you ask yourself “Where can I find out how to avoid rush hour on the way to the airport?” or “What do I need to know to be safe while travelling?” you are recognizing that your knowledge makes a difference to what you do. The mental life of an embodied intelligence is organized around managing interaction effects.
For understanding ourselves, the interaction of our knowledge with the world is central. You are an embodied intelligence and you implicitly recognize that your thoughts and experiences are connected in the world every time you make a decision about how to act.
When we are representing the world in day-to-day life, we are not normally representing our own internal processes. We do that sometimes, but usually our focus is outside, we’re looking at macroscopic events in the publicly available landscape. The logic behind the question ‘What should I do?’, however, recognizes that your thoughts are connected in the world. When you trace out the effects of different potential actions in your mind and choose the one that you prefer, you are exploiting their interaction effects. If we were to make the logic of your representation fully explicit, we would find self-reference. Each of us tacitly locates our own thoughts and decisions in the dynamics of the world we are representing. We aren’t just representing a world, we are representing our world. Everything we see is implicitly related to where we are in the landscape and what opportunities it might hold for intervention. We look at the world not passively but opportunistically, with intent to intervene. What that means is that interference is at the heart of the embedded perspective; our view of the world is centered on and organized around it.
You might be wondering: why couldn’t we describe the universe in purely objective physical terms – one event and then another – no trouble, no paradoxes, no self-reference? Surely the universe itself is consistent, and if the universe is consistent, there is no contradiction in a complete description of it. The point isn’t that the universe is inconsistent. It is that a complete and consistent objective description of the universe is impossible for any system in it.
 The problem is not reality. It is representation. If you’ve got representation up and running – whether on a computer or in a human mind – and the system falls under the scope of what it describes, there is going to be self-representation at the semantic level (i.e., in the content of its representation).
Most of the examples of self-reference in logic are static: people are looking at logical relations among eternal propositions. But I’ve been interested here in physics and interested in representation as an embodied activity. We are assuming the universe is a field of events, and self-reference is arising because some of what the system is representing is also what the system is doing. That will create interference between the two levels – the representational and the physical. When there is that kind of interference, we can rig it up - as we did with the self-refuting question - so that the interference is negative. The funny business is all at the semantic level. It doesn’t place constraints on what a system can do in physical terms. It places constraints on what an embedded system can truthfully represent: on what it can know, on what is there to be known from its perspective.
Let me repeat this now in application to you. You are an embodied intelligence and you are in the business of representing the world. You are acting in the domain you are representing, and so are negotiating two levels: the physical level and the semantic level. There’s no avoiding that there’s going to be interference between them – representational activity is part of the physical world, it is connected to other things in the landscape, and every physical act registers at the representational level. We are just seeing it play out at the representational level in the mind of the creature representing.
The ripples of your actions interfere with your ability to know the future. This kind of interference isn’t the kind that arises when one object blocks another from view. Nor is it like what happens when you turn on the light to check if your children are sleeping, or you try to measure something that is altered in the course of measurement. In those cases, there is a fact of the matter that is well-defined independently of your attempt to ascertain it and your attempts are just unsuccessful. In this case, there is not. It is more like you are trying to reach for a floating object in water and the motion of your arm pushes it out of reach.
To get a proper understanding of this we can’t treat ourselves as we mostly do in physics, as sitting outside the universe looking down. We have to absorb our own lives – including thoughts and deliberations – into the fabric of the universe as an essential and integral part of what happens. We can’t infer much about our futures from our past (or maybe much of what we care about in day-to-day life) without passing through our own decisions, and that means that all of the unfixity, unsettledness, and openness of our decisions bleeds out into the world.
Getting this piece right shifts other things that matter – maybe not so much to the actual practice of physics, but to places where we have difficulty reconciling what physics says about the world with our experience. So, for example, set against the right physical backdrop, it can help us understand something that Roger Penrose put especially clearly:
“The arrow most difficult to comprehend is, ironically, that which is most immediate to our experiences, namely the feeling of relentless forward temporal progression, according to which potentialities seem to be transformed into actualities.” 
One can see how to arrive at an understanding of this by asking how self-reference plays out against the background of a thermodynamic gradient. An agent getting sensory information about the macroscopic environment will see the ripples of her actions propagate into the future. If she walks across a sandy beach, her footprints will take time to fade. If she digs a ditch or builds a house, she is creating an ordered state in the environment that will take time to decay. What she thinks of as the effects of her actions are, from a thermodynamic point of view, records of their occurrence. This will keep her from stabilizing beliefs about the future, particularly in the region where the interference effects are strongest, until she decides what to do. And she will see her decisions as resolving potentiality into actuality.
There are also connections to free will. All of this is just a reflection of the fact that by acting, we make things true. Our actions interfere with our knowledge of the world because they are part of the way the world is. Knowledge has to wait on action because – to use a philosophical turn of phrase - being precedes knowing. What happens will determine what is true and not the other way around. This simple logical point means that there is nothing illusory about the conviction that the universe depends on you. The universe also depends on stones and sea slugs, of course, so that by itself won’t get us free will. What distinguishes human agency from stones or sea slugs is the particular way in which we harness the effects of our actions to steer the future towards our ends. This is a crucial component of a kind of free will that gets its content from notions like autonomy, self-determination and responsibility. 
In the end, it is all a reminder that we are part of the universe and an important correction to the tendency to think about the universe from a detached point of view. There’s a strong tendency to remove ourselves from the universe when we model it in physics and then to think that the differences between the way things are and the way they seem somehow reveal the way they seem to be illusory. But there’s nothing illusory about the fact that some of what happens is stuff that we do. We need to abandon the fiction that we are outside the world. This is what it is to be a part of the universe: not a spectator, but one of the players on the field.
 Fallenstein B., Soares N. (2014) Problems of Self-reference in Self-improving Space-Time Embedded Intelligence. In: Goertzel B., Orseau L., Snaider J. (eds) Artificial General Intelligence. AGI 2014. Lecture Notes in Computer Science, vol 8598. Springer, Cham. https://doi.org/10.1007/978-3-319-09274-4_3
 If this sounds familiar, it should. It’s just the embodied version of what Gödel proved about any formal system that falls under its own scope.
 Penrose, R. (1979). “Singularities and Time-Asymmetry”. In Hawking, S.W. and Israel, W. (eds), General Relativity: An Einstein Centenary Survey, pp. 581–638. Cambridge: Cambridge University Press.