The lunacy of 'machine consciousness'

Conscious AI is a fantasy

Does consciousness only arise in biological beings? Or is it possible that a computer that observes, interacts, and represents its own internal state to itself might also give rise to consciousness? These were some of the questions posed to Bernardo Kastrup, Susan Schneider and Donald Hoffman in a recent debate for the IAI, ‘Consciousness in the machine’. Bernardo Kastrup reflects on the debate and his disagreement with Susan Schneider.

 

I recently took part in a debate organised by the IAI—featuring Donald Hoffman and Susan Schneider alongside yours truly—on the question of whether silicon computers running Artificial Intelligence (AI) software will ever become conscious. As a metaphysical idealist, I believe consciousness isn’t generated by any substrate—biological or otherwise—for it is primary. But private conscious inner life, seemingly separate from the rest of nature and delineated by the boundaries of a physical entity, is clearly something that has emerged in conjunction with biology. So, to me the question translates as: can private consciousness potentially occur in association with silicon computers?

This question is very close to my heart, for I’ve been a computer engineer for longer than I’ve been a philosopher. To me, the hypothesis of ‘conscious AI’ is just about as plausible as that of the Flying Spaghetti Monster (FSM). I admittedly can’t categorically refute the hypothesis, for the same reason that I can’t categorically refute the FSM. But just like the FSM, I don’t think we have any good reason to take the hypothesis seriously at all. Here’s why.

SUGGESTED READING Big tech doesn’t want AI to become conscious By Susan Schneider

I can run a detailed simulation of kidney function, exquisitely accurate down to the molecular level, on the very iMac I am using to write these words. But no sane person will think that my iMac might suddenly urinate on my desk upon running the simulation, no matter how accurate the latter is. After all, a simulation of kidney function is not kidney function; it’s a simulation thereof, incommensurable with the thing simulated. We all understand this difference without difficulty in the case of urine production. But when it comes to consciousness, some suddenly part with their capacity for critical reasoning: they think that a simulation of the patterns of information flow in a human brain might actually become conscious like the human brain. How peculiar.
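
To see how categorical this gap is, consider a minimal sketch of such a simulation (the single-equation model below is a deliberately crude illustration of my own, not a serious physiological model, though the 20% filtration fraction is in the right ballpark): whatever it computes, its output is a data structure in memory.

```python
# Toy illustration: a "kidney simulation" yields data structures, not urine.
# Crude single-equation model; illustrative only.

def simulate_filtration(plasma_flow_ml_min: float,
                        filtration_fraction: float = 0.2) -> dict:
    """Estimate glomerular filtrate produced per minute from renal plasma flow."""
    return {"filtrate_ml_min": plasma_flow_ml_min * filtration_fraction}

result = simulate_filtration(600.0)
print(result)  # {'filtrate_ml_min': 120.0} -- a dict in RAM; the desk stays dry
```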

Where does this abandonment of a healthy sense of plausibility come from? Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or similarity—between how humans think and how AI computers process data. To find that similarity, however, one must take several steps of abstraction away from concrete, empirical reality. After all, if you lay an actual human brain and an actual silicon computer open on a table before you, you will be overwhelmed by how different they are, structurally and functionally. A moist brain is based on carbon, burns ATP for energy, functions through metabolism, processes data through neurotransmitter releases, etc. A dry computer, on the other hand, is based on silicon, uses a differential in electric potential for energy, functions by moving electric charges around, processes data through opening and closing electrical switches called transistors, etc.

The vague isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction—fairly disconnected from concrete, empirical reality—in which disembodied patterns of information flow are compared. Therefore, to believe in ‘conscious AI’ one must arbitrarily dismiss the overwhelming dissimilarities at more concrete levels, and then—equally arbitrarily—choose to take into account only a very specific, high level of abstraction where some vague similarities can be found. Can this be described as anything other than wishful thinking?

___

By appealing to complexity one is merely engaging in furious hand waving and hiding behind the pinnacle of vagueness

___

You see, everything a computer does can, in principle, be done with run-of-the-mill, off-the-shelf pipes, pressure valves and water. The pipes play the role of electrical conduits, or traces; the pressure valves play the role of switches, or transistors; and the water plays the role of electricity. Ohm’s Law—the fundamental rule that determines the behaviour of electric circuits—maps one-to-one to water pressure and flow relations. Indeed, the reason why we build computers with metal, silicon and electricity—instead of PVC pipes and water—is that the former are much, much smaller and cheaper to make. Present-day computer chips have tens of billions of transistors, and an even greater number of individual traces. Can you imagine the size and cost of a water-based computer comprising tens of billions of pipes and pressure valves? Can you imagine the amount of energy required to pump water through it? You wouldn't be able to afford it or carry it in your pocket. This is the reason why we compute with electricity, instead of water. Beyond that, there is nothing fundamentally different between a pipe-valve-water-based computer and an electronic one, from the perspective of computation. Electricity is not a magical substrate for computation, but merely a convenient one.
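
A minimal sketch may make this substrate-independence concrete (the function names and framing are mine, purely illustrative): the same Boolean NAND function, from which any computer can be composed, ‘realised’ in two substrates. From the perspective of computation, nothing distinguishes them but the labels.

```python
# Minimal sketch: one Boolean function, two imagined substrates.
# Only the labels and the physical story differ; the computation is identical.

def nand_electronic(a: bool, b: bool) -> bool:
    """Two transistors in series pull the output low only when both are on."""
    return not (a and b)

def nand_hydraulic(a: bool, b: bool) -> bool:
    """Two pressure valves in series vent the output line only when both are open."""
    return not (a and b)

# NAND is functionally complete: any digital computer can be composed of it.
for a in (False, True):
    for b in (False, True):
        assert nand_electronic(a, b) == nand_hydraulic(a, b)
```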

Now, do you think we have good reasons to believe that a system made of pipes, valves, and water—such as your home’s sanitation system—might become conscious if there are enough pipes and valves put together in just the right way? If not, then the same goes for AI computers. The only difference between your home’s sanitation system and my imaginary water-based computer is one of complexity—how many pipes and valves constitute it, and how they are interconnected—not of kind or essence. As a matter of fact, the typical home sanitation system implements the functionality of about 5 to 10 transistors.

Some do believe that complexity is the key here. They will maintain that, although a simple home sanitation system is admittedly unconscious, if you keep on adding pipes and valves to it, at some point the system will become conscious. But this is magical thinking, unless one can explain how—explicitly, logically, and precisely, even if just in principle—the mere addition of more of the same pipes and valves can enable the appearance of private conscious inner life when none was there before. Even the information integration scenarios in fashion these days achieve nothing in this regard, as they are mere observations, not explanations. So, by appealing to complexity one is merely engaging in furious hand waving and hiding behind the pinnacle of vagueness.

SUGGESTED VIEWING The AI hoax With Mazviita Chirimuuta

Others hear buzzwords like ‘neuromorphic computing’ and think that some fundamentally new, cutting-edge computational substrate—closer in kind to human brains than to traditional computers—can offer a realistic breakthrough towards ‘conscious AI’ in the future. But this, too, encouraged as it is by sensationalist corporate marketing, reflects a profound lack of understanding of how these things work.

There is no fundamental difference between your good-old home PC and complex AI machines; not even those projected for the future. AI algorithms run on parallel information processing cores of the kind we have had in our PCs’ graphics cards for decades; they just use more, faster cores. It is hard to see what miracle could make more and faster components of the same kind lead to the extraordinary and intrinsically discontinuous jump from unconsciousness to consciousness. And software won’t do it either: all the talk about ‘artificial neurons’—if you understand what it actually means—is just fashionable shorthand for standard vector operations we’ve been doing for centuries.
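
For the avoidance of mystery, here is roughly what an ‘artificial neuron’ amounts to (a deliberately minimal sketch; the weights, inputs and bias are arbitrary illustrative values): a weighted sum followed by a simple nonlinearity.

```python
import numpy as np

# A minimal sketch of an "artificial neuron": a dot product plus a bias,
# passed through a nonlinearity (here ReLU). All values are arbitrary.

def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    return max(0.0, float(np.dot(w, x) + b))  # ReLU: clip negatives to zero

x = np.array([0.5, -1.2, 3.0])   # "inputs"
w = np.array([0.8, 0.1, 0.4])    # "synaptic weights"
print(neuron(x, w, b=0.2))       # 1.68 -- just arithmetic on vectors
```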

At the fundamental level in question here, even neuromorphic computers entail no difference in kind. They, too, are metal-oxide-semiconductor (MOS) devices like those in your phone, moving electric charges around like their predecessors; they must be so, in order to remain compatible with existing and exorbitantly expensive manufacturing infrastructure and know-how. That neuromorphic processors are analog, instead of digital, doesn’t help either: digital computers move charges around just like their analog counterparts; the only difference is in how information arising from those charge movements is interpreted: the microswitches in digital computers apply a threshold to the amount of charge before deciding its meaning, while analog computers don’t. But beyond this interpretational step—trivial for the purposes of the point in contention—both analog and digital computers embody essentially the same substrate. If moving electric charges around doesn’t make your phone conscious today, neither will it do so in the future.
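
To make that interpretational step concrete, consider the following sketch (the 0.5 threshold and the sample voltages are illustrative assumptions of mine): both readings start from the same physical quantity; the digital one merely applies a threshold before assigning meaning.

```python
# Sketch of the interpretational step separating digital from analog readings.
# Both start from the same physical quantity (a voltage level); the threshold
# and sample values are illustrative assumptions.

def read_analog(voltage: float) -> float:
    return voltage  # meaning = the continuous value itself

def read_digital(voltage: float, threshold: float = 0.5) -> int:
    return 1 if voltage >= threshold else 0  # meaning = which side of the threshold

for v in (0.12, 0.49, 0.51, 0.97):
    print(f"{v:.2f} -> analog {read_analog(v):.2f}, digital {read_digital(v)}")
```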

___

Some readers may think it unbecoming of me to come out so hard against the mere hypothesis of, or discussion about, ‘conscious AI.’ Yet, sometimes, it is imperative to scream ‘lunacy!’

___

Magical thinking utterly disconnected from reason seems to prevail among those peddling ‘conscious AI.’ I have heard the following ‘argument’ being put forward by otherwise intelligent, educated people: “If brains are conscious, why can’t computers be so as well?” To this I reply with an equally rhetorical question of my own: if birds can fly by flapping their upper limbs, why can’t humans fly by flapping their arms?

During the IAI debate, Susan Schneider, in one of those rhetorical moves that sound clever but lack any force when examined closely, gave the analogy above a different spin: she pointed out that, if the Wright brothers had believed that only birds could fly, they wouldn’t have bothered to try to build an airplane. Her point was that one phenomenon—in this case, flight—can have multiple instantiations in nature, in different substrates, such as birds and airplanes. Ergo—or so the thought goes—although silicon computers are different from biology, in principle both could instantiate the phenomenon of private conscious inner life.

The validity of her point is merely logical: indeed, we are not logically forced to limit the instantiations of private consciousness to a biological substrate alone. But this means close to nothing, as there are a great many nonsensical hypotheses that are also logically coherent, such as the Flying Spaghetti Monster: it is logically and even physically sound to imagine that there is a noodly monster floating around in a higher dimension of space—invisible to us flatlanders—moving the planets around their orbits with its invisible noodly appendages. The evidence is consistent with this hypothesis, as the planets do move around their orbits, even though no force is imparted on them through visible physical contact. Even stronger, the hypothesis seems to explain our observations of planetary motion without having to appeal to abstract curvatures of spacetime. So, should we now take the noodly Monster seriously?

Here is another logically and physically coherent hypothesis: there may be a 19th-century teapot in the orbit of Saturn right now. Aliens may have come to Earth in the 19th century, surreptitiously stolen the teapot from some unsuspecting old lady’s dining room, and then dumped it in the vicinity of Saturn on their way back home, after which the unfortunate teapot got captured by Saturn’s gravity field. Should we thus take our porcelain moonlet as seriously as some take ‘conscious AI’ today?

SUGGESTED READING Google's AI is not sentient. Not even slightly By Gary Marcus

Obviously not, for what matters here is not logical—or even physical—possibility, but natural plausibility. In other words, what we have to ask ourselves is not what is logically possible or what can’t be categorically refuted—there is an infinite amount of nonsense that cannot be categorically refuted—but what we have good reasons to entertain as a hypothesis. Do we have good reasons to believe that a silicon computer running AI software could be conscious like a living brain? None whatsoever. The whole thing is a charade and represents a concerning indulgence in fantasy and magical thinking of a kind that, unfortunately, has been all too common throughout human history—particularly in religious contexts.

It would, admittedly, be understandable if you were to react to my claim above with an appeal to authority: there are many highly educated computer scientists who don’t just take ‘conscious AI’ seriously, but even make a living talking about it. Does this mean that they may be onto something here, even if we don’t quite understand how?

What most people fail to realise is that many—I even dare say the vast majority of—computer scientists are not experts in computers; they are merely power users of computers, with a vague and very limited understanding of what’s going on under the hood. Indeed, historically speaking, computer science is a branch of mathematical logic, not engineering. Generations of computer scientists have now come out of their training knowing how to use a voluminous hierarchy of pre-built software libraries and tooling—meant precisely to insulate them from the dirty details we call reality—but not having the faintest clue about how to design and build a computer. They think entirely in a realm of conceptual abstraction, enabled by tooling and disconnected from the (electrical) reality of integrated circuits and hardware. From their perspective, since the CPU—the Central Processing Unit, the computer's ‘brain’—is a mysterious black box fuelled by the equally mysterious magic of electricity, it's easy to project all their fantasies onto it. They thus fill the vacuum left open by their lack of understanding with wishful, magical thinking. The psychology here is downright banal.

Those who do know how to build not only a CPU but a computer as a whole—such as Federico Faggin, father of the microprocessor and inventor of the MOS silicon gate technology—tend to dismiss the fad of ‘conscious AI’ just as I do, for they understand that a computer is an automaton, a mere mechanism, not different in kind from your home sanitation system. And yes, I am sure there are notable exceptions to this claim, as unhinged fantasising is a condition that spares no demographics. But those are the proverbial exceptions that prove the rule, for PhDs in conceptual abstraction are far from PhDs in reality.

___

The vague isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction—fairly disconnected from concrete, empirical reality—in which disembodied patterns of information flow are compared

___

Some readers may think it unbecoming of me to come out so hard against the mere hypothesis of, or discussion about, ‘conscious AI.’ Yet, sometimes, it is imperative to scream ‘lunacy!’ when lunacy starts to infiltrate our culture in a seemingly innocent way. Unchecked open-mindedness and political correctness lend undue plausibility to utter drivel. And once that door is open, where does it end? What will be the next outrageous nonsense that we will have to debate with a straight face and a thoughtful hand on our chins, in full view of our children?

So let me try to be as clear as possible: no, we have no good reason whatsoever to take ‘conscious AI’ seriously. This is a fantasy unsupported by reason or evidence; a mere artifact of the shambolic state of our philosophy, in which consciousness itself—the sole empirical given of nature—has somehow become an anomaly.

Entertaining ‘conscious AI’ is counterproductive; it legitimises the expenditure of scarce human resources—including taxpayer money—on problems that do not exist, such as the ethics and rights of AI entities. It endangers the sanity of our culture by distorting our natural sense of plausibility and conflating reality with (bad) fiction. AIs are complex tools, just as a nuclear power plant is a complex tool. We should take safety precautions with AIs just as we take safety precautions with nuclear power plants, without holding ethics discussions about the rights of power plants. Anything beyond this is just fantastical nonsense and should be treated as such.

 

Want to hear the other side of the argument? Read Susan Schneider's reply to Bernardo Kastrup's article here.
