What do you call a network of neurons connected to electrodes that learns to play Pong? Even the scientists behind the experiment don’t know how to describe their creation. But the ethical questions that arise out of this fusion of neurons and silicon are plenty. Brian Patrick Green takes a first shot at articulating them and suggests this might be the real future of Artificial Intelligence.
On December 3, 2021, the Australian biological computing startup Cortical Labs released a preprint article stating that it had turned a network of hundreds of thousands of neurons into a computer-like system capable of playing the video game Pong. They named this system DishBrain.
The name itself might cause the stirrings of an uneasy feeling. Something as important as a brain seems inappropriate as part of a “dish,” and a certain culinary overtone might come to mind as well. The naming seems perhaps too playful for the subject at hand.
But that raises, of course, a number of questions: what is the subject at hand? What exactly is DishBrain? And, perhaps more importantly, what is the ethical status of this half-living, half-machine entity? Is this the future of machine learning?
Neurons and the Free Energy Principle
DishBrain contains living neurons collected from mouse embryos in one case and from human induced pluripotent stem cells (hiPSC – stem cells created from adult human cells, such as skin, that are then differentiated into another type of adult cell, such as neurons) in the other. These cells were then grown and plated onto multielectrode arrays where they settled, grew further, and became part of an electronic system.
Software examined the neuronal activity during this growth period on the multielectrode array. Inputs and outputs were configured for the system to correspond to states in the game of Pong. The DishBrain system—a living computer—was diligently fed with nutrient solution and cared for, eventually displaying “spontaneous electrophysiological activity.”
The organizing premise for the experiment was Karl Friston’s free energy principle, wherein neurons, through active inference, form models of the world in order to minimize how much their environment surprises them. The brain acts to match expected perceptions with sensory reality.
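For readers who want the idea of “surprise” made concrete, here is a standard textbook statement of the principle (my gloss, not an equation taken from the Cortical Labs paper). An organism cannot compute the surprise of its observations directly, but it can minimize a tractable upper bound on it, the variational free energy:

```latex
% "Surprise" is the negative log evidence of sensory observations o
% under the organism's generative model p:
%   surprise(o) = -ln p(o)
%
% The variational free energy F, defined for an approximate posterior
% q(s) over hidden states s, upper-bounds this surprise:
\begin{align*}
F(q, o) &= \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \\
        &= \underbrace{D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big]}_{\ge\, 0}
           \; - \; \ln p(o)
        \;\;\ge\;\; -\ln p(o).
\end{align*}
```

Minimizing F, whether by adjusting internal states (perception) or by acting on the world (active inference), therefore keeps surprise low; this is the sense in which the neurons in the dish are said to avoid surprise.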
With this theory in mind, the experimental setup trained the DishBrain to play Pong. The results of this test were astonishing:
“Applying a previously untestable theory of active inference via the Free Energy Principle, we found that learning was apparent within five minutes of real-time gameplay, not observed in control conditions… Cultures display the ability to self-organise in a goal directed manner in response to sparse sensory information about the consequences of their actions.”
While the researchers at Cortical Labs found the free energy principle fruitful for their research, and though that is an interesting aspect of this story, that is not what I am going to examine here. Instead, I want to think about what the DishBrain is, and, ethically, how one might respond to it.
The researchers themselves wondered the same thing. In their Medium post, the Cortical Labs authors expressed some of the sentiments they were experiencing, given the strange thing they had produced:
“In fact, we don’t know what we’re making, because nothing like this has ever existed before. An entirely new mode of being. A fusion of silicon and neuron. A native to the digital world lit with the promethean fire of the human mind.”
Brett Kagan, Cortical Labs’ chief scientific officer, likens the mini-brains to something out of the sci-fi film The Matrix. In an interview with New Scientist, he said that they “... often refer to them as living in the Matrix, when they are in the game, they believe they are the paddle.”
At this point it might be good to remember that The Matrix is a dystopia. When it comes to fiction inspiring scientific research, re-creating dystopias should generally be avoided. And yet here we are, perhaps in a dystopia, for we still do not know just what the DishBrain is.
This dystopian possibility might justifiably cause us ethical qualms. These gut feelings may or may not be indicative of true moral problems, so a more formal ethical analysis might be of help. But ethical analysis relies upon first getting facts, so that is where we must begin.
What on Earth is DishBrain?
The distance between computers and brains is vast. Brains are alive, wet, and internally goal-directed. Computers are typically electronic, dry, and driven by goals that are assigned externally. DishBrain has both living and non-living, wet and dry, components. But does it have internal or external goal direction? The overall system acts like a computer, driven to serve its creators.
However, according to the free energy principle, it is actually the neurons, seeking to minimize their “surprise” at their environment, that power this computer. The playing of a game for the sake of human experimenters is just a side effect of that deeper goal.
Everything about the neurons in the DishBrain is designed to harness any intrinsic goals that these neurons might have and direct them instead towards the extrinsic goal of playing Pong. The neurons are removed from their organism and put in an electronic environment to act. They are stimulated and measured. They are a living thing reduced to an electronic part.
In this way, we might imagine the DishBrain as being like a horse walking in circles, turning a wheel to grind grain in a flour mill. Or, in more contemporary terms, it could be like the various animals used to test or demonstrate brain-computer interface technology, such as Neuralink’s monkey playing Pong with its mind.
There are long histories of animals working and of animals being experimented upon, and many of these situations might be objected to on moral grounds. At this point, however, we should recognize that DishBrain is vastly simpler than animal models. It is only one cell type, and not even very many cells (800,000 to a million) at that. It is composed of neurons, which raises the question of sensation and suffering, but compared to a complete animal laboring away in a field or being subjected to brain surgery and worse, the DishBrain would seem to be the more desirable option.
Of course, in the case of human neurons, the problem becomes one of perhaps causing suffering to neurons of our own species, but this still seems something of a stretch. At least for very small systems, it seems unlikely that they could be subject to suffering… although in the free energy principle framework, “surprise,” the one thing that neurons seek to avoid, might be a sort of suffering.
Exactly what is going on in the DishBrain is one question. Whether that activity can be called good is another. After all, if the neurons in the dish believe that they are the paddle in Pong, we might ask whether this is an appropriate experience for those neurons—because, in their responding to external stimuli, they are clearly experiencing something. This raises another question: What is an appropriate experience for a neuron?
I am, like the authors of the Cortical Labs Medium post, still uncertain about the nature of the DishBrain entity and its ethical status. But I do know that in this experiment a significant threshold has been passed, and one worthy of philosophical analysis. So, I will ask a few questions to at least begin the conversation.
Some questions around the ethics of DishBrain
Does the animal source type matter?
Does it matter, ethically speaking, whether the neurons used in DishBrain are human or mouse? Certainly, our moral intuitions might say so. After all, we kill mice as vermin constantly; their lives seem to be worth quite little, while human life is something many people hold as sacred. Additionally, regardless of personal opinions, public opinion does matter here, as the use of human neurons in the DishBrain is more likely to create a public backlash than the use of non-human neurons. Experiments like this can damage the reputation of the entire field of science in the public eye, so great care is warranted.
Does the cells’ origin matter?
Is it good that the Cortical Labs researchers used induced pluripotent stem cells from donated adult tissue instead of embryonic stem cells gathered from the destruction of human embryos? By avoiding human embryonic stem cells in the DishBrain itself, Cortical Labs avoided a certain amount of controversy and possible negative press, not to mention the destruction of human embryos itself. Given the choice between equally good experiments involving induced pluripotent stem cells and embryonic stem cells, it would seem to make sense to use induced pluripotent cells, even if only for avoiding controversy. Of note, HEK (human embryonic kidney) cells were used as experimental controls at some points, so the cell-origin controversy was not completely avoided.
Does size matter?
Another question we might ask would be: How many neurons would be too many? At what point would an experiment such as this go too far?
A few hundred thousand cells seem unlikely to spark much potential for suffering or other violations of morally relevant values. But what if it were ten million cells? Or a billion? Getting into the billions range gets us worryingly close to the number of neurons in a human brain (approximately 86 billion) and therefore ought, I think, to be avoided. Smaller numbers of neurons would seem to be better from an ethical perspective because they would keep questions of sensation and sentience more remote.
However, if there is a boundary that we feel we should never cross, such as creating or placing a full human brain (or more) in a machine, then we should make absolutely sure that we never set ourselves on a path that will result in that outcome. What limitations ought we to invoke now to prevent such a thing?
Are biological neural networks capable of real artificial intelligence?
There may be good reasons to believe that purely machine-based artificial intelligences will never do anything more than simulate forms of living intelligence. Purely machine-based AIs have no self-motivating factor, no freedom or will, no ability to care or love. But the living DishBrain system seems as though it might have self-motivating factors: the neurons seek out certain states of being. Would that mean, then, that a scaled-up DishBrain might someday permit true artificial intelligence? At least in a hybrid biological-computational form?
The prospect of computers powered by neurons raises for us not only the question of suffering, but also of desire and volition. The question arises because we ourselves experience desires and volition. At how small a scale do such experiences exist? DishBrain’s few hundred thousand cells apparently muster enough volition to try to play Pong in order to avoid the punishment of surprise.
Could these neurons ever develop genuinely free will and turn against us, as in so many dystopian AI stories? I do think it possible to imagine a biological-electronic hybrid acting in this way, because we already have many examples of natural neural systems (brains) doing just that: lashing out in anger, hatred, or pain.
Can DishBrain be said to be suffering?
Is the mode of training for the DishBrain “unpleasant” to the neurons involved? How could it be so, and additionally, how could we ever know? After all, we are not DishBrains ourselves, so we cannot conceive of what it would be like to be one.
However, the Cortical Labs paper itself indicates that the DishBrain learns through “punishment,” saying: “when the culture fails to line the paddle up to connect with the ball, the ‘punishing’ stimulus was set at 150 mV voltage and 5 Hz.” This should indicate to us that perhaps suffering is involved, even if it is a very primitive form.
What about human dignity?
Is it a violation of human dignity to create DishBrain-like systems out of human neurons? Dignity is a complex topic that can here only be raised, not answered. But there is another way to approach the problem: from the perspective of the researchers themselves.
Is it beneath the dignity of the human experimenters to treat human neurons this way, to create DishBrain-like systems? Are these the actions of intrepid scientific pioneers or of ghoulish vandals, taking body parts and using them to serve other ends? Again, the name “DishBrain” and its associations come to mind. At the very least, the researchers might be taking the subject too lightly, while others might perceive it as very grave indeed.
Is Synthetic Biological Intelligence a better model for AI?
Lastly, we might ask: What if neurons are just better at this sort of computational activity than computers; is it okay to use living neurons this way? Cortical Labs suggests that synthetic-biological intelligences (SBIs) might be intrinsically more efficient than AI, thus leading to a situation where “SBI may arrive before artificial general intelligence (AGI) due to the inherent efficiency and evolutionary advantage of biological systems.”
This would certainly be a dramatic and strange world, where living neurons are fused with computers into brain-like systems more intelligent than what we ourselves could ever squeeze into our own small skulls.
Alternatively, perhaps DishBrain might reveal to us a course away from using neurons and towards better artificial systems. One thing these experiments help to highlight is that perhaps artificial neural networks might be improved if they are designed to emulate biological neurons more closely. If that is the case, then perhaps this experiment will help to lead us away from a Matrix-like future of biology united with computers, and instead towards a future where artificial neurons have been redesigned to learn as well as biological neurons.
Whatever the outcome of these and other future experiments in this field, philosophy and ethics will need to be deeply involved in this research – hopefully as a partner, and if not, then as a monitor and critic.