Our default intuition when it comes to consciousness is that humans and some other animals have it, whereas plants and trees don’t. But how sure can we be that plants aren’t conscious? And what if what we take to be behavior indicating consciousness can be replicated with no conscious agent involved? Annaka Harris invites us to consider the real possibility that our intuitions about consciousness might be mere illusions.
Our intuitions have been shaped by natural selection to quickly provide life-saving information, and these evolved intuitions can still serve us in modern life. For example, in threatening situations we have the ability to unconsciously perceive elements in our environment that in turn deliver an almost instantaneous assessment of danger — such as the intuition that we shouldn’t get into an elevator with someone, even though we can’t put our finger on why.
But our guts can deceive us as well, and “false intuitions” can arise in any number of ways, especially in domains of understanding — like science and philosophy — that evolution could never have foreseen. An intuition is simply the powerful sense that something is true without having an awareness or understanding of the reasons behind this feeling — it may or may not represent something true about the world.
And when we inspect our intuitions about consciousness itself — how we judge whether or not an organism is conscious — we discover that what once seemed like obvious truths are not so straightforward. I like to begin this exploration with two questions that at first glance appear deceptively simple to answer. Note the responses that first occur to you, and keep them in mind as we explore some typical intuitions and illusions.
1) In a system that we know has conscious experiences — the human brain — what evidence of consciousness can we detect from the outside?
2) Is consciousness essential to our behavior?
These two questions overlap in important ways, but it’s informative to address them separately. Consider first that it’s possible for conscious experience to exist without any outward expression at all (at least in a brain). A striking example of this is the neurological condition called locked-in syndrome in which virtually one’s entire body is paralyzed but consciousness is fully intact. This condition was made famous by the late editor-in-chief of French Elle, Jean-Dominique Bauby, who ingeniously devised a way to write about his personal story of being “locked in.” After a stroke left him paralyzed, Bauby retained only the ability to blink his left eye. Amazingly, his caretakers noticed his efforts to communicate using this sole remnant of mobility, and over time they developed a method whereby he could spell out words through a pattern of blinks, thus revealing the full scope of his conscious life. He describes this harrowing experience in his 1997 memoir The Diving Bell and the Butterfly, which he wrote in about two hundred thousand blinks.
Another example of bodily imprisonment is a condition called “anesthesia awareness,” in which a patient anesthetized for a surgical procedure experiences only the paralysis without losing consciousness. People in this condition must live out the nightmare of feeling every aspect of a medical procedure, sometimes as drastic as the removal of an organ, without the ability to move or communicate that they are fully awake and experiencing pain. These examples seem to come straight out of a horror movie, but we can imagine other, less disturbing instances in which a conscious mind might lack a mode of expression — scenarios involving artificial intelligence (AI), for example, in which advanced systems become conscious but have no way of convincingly communicating this to us. But one thing is certain: It’s possible for a vivid experience of consciousness to exist undetected from the outside.
Now let’s go back to the first question and ask ourselves: what might qualify as evidence of consciousness? For the most part, we believe we can determine whether or not an organism is conscious by examining its behavior. Here is a simple assumption most of us make, in line with our intuitions, which we can use as a starting point: People are conscious; plants are not conscious. Most of us feel strongly that this statement is correct, and there are good scientific reasons for believing that it is. We assume that consciousness does not exist in the absence of a brain or a central nervous system. But what evidence or behavior can we observe to support this claim about the relative experience of human beings and plants? Consider the types of behavior we usually attribute to conscious life, such as reacting to physical harm or caring for others. Research reveals that plants do both in complex ways — though, of course, we conclude that they do so without feeling pain or love (i.e., without consciousness). But some behaviors of people and plants are so alike that they in fact pose a challenge to our using certain behavior as evidence of conscious experience.
In his book What a Plant Knows: A Field Guide to the Senses, biologist Daniel Chamovitz describes in fascinating detail how stimulation of a plant (by touch, light, heat, etc.) can cause reactions similar to those in animals under analogous conditions. Plants can sense their environments through touch and can detect many aspects of their surroundings, including temperature, by other modes. It’s actually quite common for plants to react to touch: a vine will increase its rate and direction of growth when it senses an object nearby that it can wrap itself around; and the infamous Venus flytrap can distinguish between heavy rain or strong gusts of wind, which do not cause its blades to close, and the tentative incursions of a nutritious beetle or frog, which will make them snap shut in one-tenth of a second.
Chamovitz explains how the stimulation of a plant cell causes cellular changes that result in an electrical signal — similar to the reaction caused by the stimulation of nerve cells in animals — and “just like in animals, this signal can propagate from cell to cell, and it involves the coordinated function of ion channels including potassium, calcium, calmodulin, and other plant components.”  He also describes some of the mechanisms shared by plants and animals down to the level of DNA. In his research, Chamovitz discovered which genes are responsible for a plant’s ability to determine whether it’s in the dark or the light, and these genes, it turns out, are also part of human DNA. In animals, these same genes also regulate responses to light and are involved in “the timing of cell division, the axonal growth of neurons, and the proper functioning of the immune system.” Analogous mechanisms exist in plants for detecting sounds, scents, and location, and even for forming memories. In an interview for Scientific American, Chamovitz describes how different types of memory play a role in plant behavior:
“[I]f memory entails forming the memory (encoding information), retaining the memory (storing information), and recalling the memory (retrieving information), then plants definitely remember. For example a Venus Fly Trap needs to have two of the hairs on its leaves touched by a bug in order to shut, so it remembers that the first one has been touched . . . Wheat seedlings remember that they’ve gone through winter before they start to flower and make seeds. And some stressed plants give rise to progeny that are more resistant to the same stress, a type of transgenerational memory that’s also been recently shown in animals.”
The ecologist Suzanne Simard conducts research in forest ecology, and her work has produced breakthroughs in our understanding of inter-tree communication. In a 2016 TED Talk, she described the thrill of uncovering the interdependence of two tree species in her research on mycorrhizal networks — elaborate underground networks of fungi that connect individual plants and transfer water, carbon, nitrogen, and other nutrients and minerals. She was studying the levels of carbon in two species of tree, Douglas fir and paper birch, when she discovered that the two species were engaged “in a lively two-way conversation.” In the summer months, when the fir needs more carbon, the birch sent more carbon to the fir; and at other times when the fir was still growing but the birch needed more carbon because it was leafless, the fir sent more carbon to the birch — revealing that the two species were in fact interdependent. Equally surprising were the results of further research led by Simard in the Canadian National Forest, showing that the Douglas fir “mother trees” were able to distinguish between their own kin and a neighboring stranger’s seedlings. Simard found that the mother trees colonized their kin with bigger mycorrhizal networks, sending them more carbon below ground. The mother trees also “reduced their own root competition to make room for their kids,” and, when injured or dying, sent messages through carbon and other defense signals to their kin seedlings, increasing the seedlings’ resistance to local environmental stresses.  Likewise, by spreading toxins through underground fungal networks, plants are also able to ravage threatening species. Because of the vast interconnections and functions of these mycorrhizal networks, they have been referred to as “Earth’s natural Internet.” 
Still, we can easily imagine plants exhibiting the behaviors described above without there being something it is like to be a plant, so complex behavior doesn’t necessarily shed light on whether a system is conscious or not. We can probe our intuitions about behavior from another angle by asking, does a system need consciousness to exhibit certain behaviors? For instance, would an advanced robot need to be conscious to give its owner a pat on the back when it witnessed her crying? Most of us would probably say the answer is “Not necessarily.” At least one tech company is creating computerized voices indistinguishable from human ones. If we design an AI that one day begins saying things like, “Please stop — it hurts when you do that!” should we take this as evidence of consciousness, or simply of complex programming in which the lights are off? We assume, for example, that an entirely non-conscious algorithm is behind Google’s growing ability to accurately guess what we are searching for, or behind Microsoft Outlook’s ability to make suggestions about whom we might want to cc on our next email. We don’t think our computer is conscious, much less that it cares about us, when it flashes Uncle Jack’s contact, reminding us to include him in the baby announcement. The software has obviously learned that Uncle Jack usually gets included in emails to Dad and Cousin Jenny, but we never have the impulse to say, “Hey, thanks — how thoughtful of you!” It’s conceivable, however, that future deep-learning techniques will enable these machines to express seemingly conscious thoughts and emotions (giving them increased powers to manipulate people). The problem is that both conscious and non-conscious states seem to be compatible with any behavior, even those associated with emotion, so the behavior itself doesn’t necessarily signal the presence of consciousness.
Suddenly, our reflexive answers to question #1 — What constitutes evidence for consciousness? — are beginning to dissolve. And this leads us to question #2, regarding whether consciousness performs an essential function in — or has any effect at all on — the physical system that’s conscious. In theory, I could act in all the ways I do and say all the things I say without having a conscious experience of it, much as an advanced robot might (though, admittedly, it’s hard to imagine). This is the gist of a thought experiment referred to as the “philosophical zombie,” which was made popular by David Chalmers. Chalmers asks us to imagine that any person could, in effect, be a zombie — someone who looks and acts exactly like everyone else on the outside without experiencing anything at all from the inside. The zombie thought experiment is controversial, and other philosophers, notably Daniel Dennett of Tufts University, claim that what it proposes is impossible — that a fully functioning human brain must be conscious, by definition. But the conceivability of a “zombie” is worth contemplating if only in theory, because it helps us put aside our everyday intuitions and pin down which behaviors, if any, we think must be accompanied by consciousness. The goal here is to pry loose as many false assumptions as possible, and this particular mental exercise is useful whether or not a zombie is compatible with the laws of nature. Imagine that someone in your life is in fact an unconscious zombie or AI (it could be anyone from a stranger behind a store counter to a close friend). The moment you witness behavior in this person that you think must coincide with an internal experience, ask yourself why. What role does consciousness seem to play in their behavior? Let’s say your “zombie friend” witnesses a car accident, looks appropriately concerned, and takes out his phone to call for an ambulance.
Could he possibly be going through these motions without an experience of anxiety and concern, or a conscious thought process that leads him to make a call and describe what happened? Or could this all take place even if he were a robot, without a felt experience prompting the behavior at all? Again, ask yourself what, if anything, would constitute conclusive evidence of consciousness in another person.
I have discovered that the zombie thought experiment is also capable of influencing our thinking beyond its intended function in the following way: Once we imagine human behavior around us existing without consciousness, that behavior begins to look more like many behaviors we see in the natural world that we’ve always assumed were non-conscious, such as the obstacle-avoiding behavior of a starfish, which has no central nervous system. In other words, when we trick ourselves into imagining a person who lacks consciousness, then we can begin to wonder if we’re in fact tricking ourselves all the time when we deem other living systems — climbing ivy, say, or stinging sea anemones — to be without it. We have a deeply ingrained intuition, and therefore a strongly held belief, that systems that act like us are conscious, and those that don’t are not. But what the zombie thought experiment makes vivid to me is that the conclusion we draw from this intuition has no real foundation. Like a 3D image, it collapses the moment we take our glasses off.
Based on an excerpt from CONSCIOUS by Annaka Harris Copyright © 2019 by Annaka Harris. Published on June 4, 2019 by HarperCollins Publishers, LLC.
Daniel Chamovitz, What a Plant Knows: A Field Guide to the Senses (New York: Farrar, Straus & Giroux, 2012), pp. 68-69.
Gareth Cook, “Do Plants Think?,” Scientific American, June 5, 2012.
Suzanne Simard, “How Trees Talk to Each Other,” TED Talk: www.ted.com/talks/suzanne_simard_how_trees_talk_to_each_other.
Lauren Goode, “How Google’s Eerie Robot Phone Calls Hint at AI’s Future,” Wired, May 8, 2018; Bahar Gholipour, “New AI Tech Can Mimic Any Voice,” Scientific American, May 2, 2017.
In other words, if consciousness comes at the end of a stream of information processing, does the fact that there is an experience make a difference to the brain processing that follows? Does consciousness affect the brain? See also Max Velmans, How Could Conscious Experiences Affect Brains? (Charlottesville, VA: Imprint Academic, 2002), pp. 8-20.
M. Migita et al., “Flexibility in starfish behavior by multi-layered mechanism of self-organization,” Biosystems 82, no. 2 (2005): pp. 107-15.