The ‘hard problem of consciousness’ (HPC), as formulated by David Chalmers in ‘Facing Up to the Problem of Consciousness’ (1995) and The Conscious Mind (1996), is the problem of explaining how experience is possible. How is any experience, anywhere in the universe, possible? From the perspective of the HPC, a single experience of a flash of light is as mysterious as a full human experience of a summer’s day or the most profound mystical experience of the most enlightened monk.
There are three standard responses to the hard problem, reflecting three different ideas about experience:
1. The possibility of experience (i.e. consciousness) is derived or emergent from something else. The ‘something else’ could be a physical property like quantum entanglement, a general systems property like complexity, a computational property like metaprocessing, a biological property like life, or a neuroarchitectural property like having a ‘global neuronal workspace’.
2. Consciousness is not derived but rather fundamental. However, only some entities have it, i.e. only some entities have experiences. It doesn’t matter how often they have experiences or what kind they have, just that they have some experiences sometimes.
3. Consciousness is not derived but fundamental and all entities have it. Again, it doesn’t matter how often or what kinds.
The first kind of response is often called ‘materialist’ or ‘physicalist’ and the second kind ‘dualist’; some call the third kind ‘idealist’ but this label is only correct for certain extreme versions.
So can the hard problem of consciousness be solved?
Let us first assume response #1. We could take one of two tactics. One is to define consciousness as some other property. Consciousness could, for example, be an information-theoretic property, as in Giulio Tononi’s Integrated Information Theory (IIT), or it could be a physical property, as in the Penrose-Hameroff Orchestrated Objective Reduction (Orch OR) theory. Simply defining consciousness as such a property does not, however, explain how or why consciousness and this other property should be correlated. It does not explain how experience is possible. It dodges the HPC instead of solving it.
The other tactic is to try to explain how consciousness is produced by some other kind of process. How is consciousness produced by complexity or by metaprocessing? No one has any idea. Correlations are easy to find, but explanation faces a serious obstacle: consciousness seems nothing like any other property. No idea, other than outright definition of consciousness as something else, has ever been proposed. In short, the hard problem of consciousness seems unsolvable using response #1. It can be dodged, but not solved.
What about response #2? Here, the problem is to explain how or why some systems have experiences but others do not. What is the basis for explaining this? What facts could possibly be relevant? It is trivial to program a computer to report experiences – does that mean computers are conscious? Bear in mind that the HPC concerns the possibility of any experience; the complexity or subtlety of the reported experiences is irrelevant. Thermostats, for example, ‘report experiences’ by turning on the heat.
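To make the triviality concrete, here is a minimal, purely illustrative Python sketch (the names and canned reports are hypothetical, not drawn from any real system) of a program that ‘reports experiences’ on demand. Nothing about such a program settles whether anything is actually experienced – which is exactly the point.

```python
# A deliberately trivial "experience reporter" (hypothetical illustration).
# It emits first-person reports on demand, yet nothing in the code bears on
# whether any experience actually occurs.

import random

REPORTS = [
    "I see a flash of light.",
    "I feel warm.",
    "I am having a vivid experience of red.",
]

def report_experience() -> str:
    """Return a canned first-person 'experience report'."""
    return random.choice(REPORTS)

if __name__ == "__main__":
    for _ in range(3):
        print(report_experience())
```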
If we don’t trust reports and turn to some other property as criterial, we are back to response #1. The problem is that if consciousness is fundamental, no other properties can be criterial. If a system is conscious, it is conscious by its very nature, not for some other reason. How are we to distinguish systems that have this property from systems that don’t? Without independent criteria, there is no way. Hence the HPC cannot even be dodged from response #2: it is simply unsolvable.