Every living thing responds selectively to its immediate environment. Rocks don’t. One-celled organisms do. Viruses are a borderline case.
To speak of perception is a little more demanding. Do amoebas actually perceive things in their environment? Do stylops? Do ants, for that matter? When we say perceive, we’re thinking of sense organs, inputs and information-processing, however rudimentary. Those criteria are vague and admit many borderline cases; they might even be said to come in degrees.
But when we agree that an animal does perceive, we are attributing to it a kind of consciousness, namely, perceptual consciousness of the world around it. Perception itself certainly admits degrees. Some animals perceive more information per second than others; or they make a greater number of distinctions than others. Likewise, if an animal has a greater variety of senses, it will enjoy a higher degree of perceptual consciousness.
A creature that does perceive the external world to any significant degree can be called a conscious being. Could there be conscious beings other than those of earth’s animal kingdom? Perhaps there are some outside our solar system. Could a robot be a conscious being, just in this modest sense of perceiving its environment? I don’t see why not. Despite appearances, a robot can amass information through its sensors and build a representation of the external world. Granted, there are plenty of arguments purporting to show that no mere robot could be conscious in any much stronger sense.
Of course we can also ask whether a conscious creature in that sense is ‘conscious’ at a particular time, say at this moment, meaning roughly: is it awake, actually doing some perceiving, and in control of its actions? Even that ‘normal waking state’ admits of degrees, since we speak of accident victims and seriously ill patients as ‘semi-conscious.’
A much rarer form of consciousness is what we refer to when we speak of a ‘conscious memory’ or a ‘conscious decision’—we mean not only being in a mental state but being aware of that very mental state from the inside. A conscious memory is a memory we are directly aware of. The same goes for a conscious emotion, desire, intention, or bodily sensation such as pain. It’s assumed that there are memories, emotions, desires, intentions, perceptions, and even pains that we are unaware of, at least at times. For instance, while driving a car we might be thinking hard about this or that. We will still perceive the road, other cars, stop signs, etc.—otherwise we’d crash—yet we will barely notice our perceivings themselves, our own sensory states. It’s less common to be unaware of our desires or our physical pains, but if we have focused our attention elsewhere, these might recede into the background. For example, while playing in a hard-fought championship basketball game, we might not feel any pain. Less dramatically, I may entirely stop noticing a mild headache while engaged in spirited conversation, though others may see me unconsciously stroking my brow.
Awareness of your own mental states is often called ‘state’ consciousness. Creatures that have state consciousness have the ability to represent their own mental states. It’s an empirical question which organisms do have that capacity. Human beings obviously have it. There is some evidence that gorillas do. We may naturally think that it does not extend very far down the phylogenetic scale, but only clever experiments could verify that. A cat can be hungry, or angry, and behave accordingly in no uncertain terms, but is the cat ever internally aware of the hunger or the anger? Except in a cartoon, can a cat say to itself, ‘My anger is diminishing [now that I have pooped in the sugar bowl]’?
State consciousness comes in degrees. We can be just barely aware of a desire, say our desire for a less hectic life; or only dimly aware of it, or moderately, or well aware, or vividly, or urgently, longingly aware of it. (That is partly a matter of the strength of the desire, but it has more to do with claims on our attention.)
The distinction between conscious and unconscious mental states raises some interesting moral issues. Is it wrong to cause gratuitous pain, when we know the victim will never be aware of the pain? Suppose a rabbit lacks the capacity to be aware of its own pain. May we cause it pain with impunity—assuming that we are doing it no real bodily harm—knowing that unlike us, it cannot experience or even notice the pain? On the one hand, why does the pain itself matter at all, if the rabbit is entirely unaware of it (and no one else is affected)? On the other, it’s pain, for God’s sake; the poor bunny is whimpering.
In a recent article, Bence Nanay attacks the idea that animals ‘process’ pain but do not ‘feel’ it: ‘Rats and chickens systematically choose and self-administer painkillers when and only when they are distressed. I am not sure how this finding could be made consistent with the “animals don’t really feel pain” line short of some maneuver worthy of the Flat Earth crowd.’
Nanay may be thinking it pathetically obvious that pain itself and not awareness of it is what matters morally. If so, though of course he’s entitled to his opinion, he is just begging the question rather than addressing the dilemma. More likely, since he emphasises ‘feel,’ he is maintaining that the animals damn well are aware of the pain and do experience it. But the trouble there is that the expression ‘feel pain’ has two different uses. In one sense it’s just a redundant way of saying that a creature is in pain or ‘processes’ pain. In a stronger sense it means that the creature not only has pain but is aware of the pain. It is indeed obvious that rats and chickens can have pain or be in it. But to settle the question, the experiments to which Nanay alludes would have to show that the rats and the chickens internally represent their own pains as such. Were the experiments controlled against the hypothesis that the subjects’ self-medicating behaviour was just a direct response to the pain itself, like my stroking my brow?
More recently, philosophers have been concerned with ‘phenomenal’ consciousness. When we are aware of one of our own mental states, there’s ‘something it is like’ for us to experience that state; for example, there is something it is like to experience the sound of a high Bb as played by French horn legend Dennis Brain. ‘Like’ there does not mean resemblance or similarity; the quality, ‘what it’s like’ to have the sensation, cannot easily be put into words. Philosopher Thomas Nagel has appealed to the case of whatever sensation may accompany a bat’s use of its sonar echolocation technique:
Chiropterological (bat) ethology and neuroscience may detail the bat’s sensory system down to the last molecule and bit of information processed, but neither science nor anything else could tell us humans what it’s like for the bat to experience its sonar sensation. You would have to become a bat and have the sensation yourself. Phenomenal consciousness is ‘intrinsically perspectival’—in humans just as in bats.
Even if phenomenal consciousness depends on state consciousness, it does not thereby come in degrees. Wherever there is any degree of awareness of one’s own mental state, there is something it’s like for the subject to experience that state.
I’ve distinguished three kinds of consciousness; each is perfectly real and worthy of its name. But for the philosophers Nanay labels as the ‘new mystics,’ only phenomenal consciousness is real consciousness, and it is very problematic for the metaphysics of mind. How are we to accommodate the fact—at least the majority view—that science cannot even describe ‘what it’s like,’ much less explain it? Opinions range from (1) there isn’t really any such thing, through (2) several varieties of ‘here’s how,’ to (3) scientific materialism is just false, and we have special properties that are irreducibly mental.
In support of the first claim, philosopher Daniel Dennett and others argue that if we really did know every detail of both the bat’s sonar sense and the human psychobiology and chemistry, we could work out what it’s like for the bat to have its special sensation, and what it’s like for any human to be in any mental state. The leading version of the second claim is what’s called the ‘phenomenal concept’ strategy: proponents suggest that in introspection we categorise sensations in a unique way, using concepts that cannot be translated into everyday language. It is only the untranslatability that prevents our explaining ‘what it’s like’ and makes us regard it as ineffable and mysterious; there is nothing metaphysically extraordinary about it. I have defended that view in my book Consciousness and Experience.
Philosophers go for the third claim when they are unpersuaded by the first two and argue on various grounds that those views will never work. The first seems to them clearly false and just a desperate lunge. (I agree.) They are unpersuaded by the phenomenal concept strategy because they find it pallid and just not adequate to the vivid and arresting character of what it’s like.
There is an even more extreme position: (4) ‘What it’s like’ pervades the universe, and panpsychism is true. Ha-haa! What recommends that view to Philip Goff is that although ‘philosophers and scientists have struggled to understand how physical matter produces consciousness,’ those philosophers and scientists have had it the wrong way around. Rather, as Sir Arthur Eddington maintained, physical science merely describes matter ‘from the outside’ by providing mathematical models, and can say nothing about matter’s own underlying nature. In Stephen Hawking’s words, ‘it is consciousness that breathes fire into the equations.’ I myself would need to hear a stronger argument than has ever been given.
So, does consciousness come in degrees? Perceptual and state consciousness do. But phenomenal consciousness is a whole different beast.