AI needs the constraints of the human brain

What we learn from an embodied AI brain

Should the constraints of the human brain inform how we design artificial intelligence? In this essay, Danyal Akarca explores how our growing understanding of neuroscience is inspiring new paradigms at the forefront of AI research, following his recent publication in Nature Machine Intelligence.

 

It is now over a century since the father of modern neuroscience, Ramón y Cajal, proposed that animal nervous systems are organised according to three very general principles: material, space and time. These principles tell us that animal brains conserve their material to prevent unnecessary energy expenditure, space to optimise the physical arrangement of cells within a finite volume, and time to enable rapid and effective communication that facilitates action.

Understanding general principles like these within neuroscience gives us context for how, in living organisms across the animal kingdom, various flavours of biological intelligence can evolve and develop. Of course, these are not the only principles at play – neuroscience has advanced tremendously over the last hundred years – but such principles are useful because they sharpen our understanding of the various physical factors that commonly constrain the emergence of intelligent behaviour in biology. But can principles of brain organisation and function, such as those proposed by Cajal, inspire new advances at the forefront of artificial intelligence (AI)?

___

Under certain conditions, evolution will also construct efficient, economical and energetically resourceful systems of behaviour.

___

To answer this question, it is first useful to recognise that there is actually deep insight to be gathered by thinking about why such principles may exist at all in the natural world. As you may expect, the core explanation invokes evolution. Evolutionary forces construct organisms that contain many complex interacting internal systems that together control intelligent behaviour. This alone is a truly awe-inspiring and miraculous feat of nature. But evolution doesn’t just do that. Under certain conditions, evolution will also construct efficient, economical and energetically resourceful systems of behaviour. This is for the simple reason that survival – the main driver of change over evolutionary timescales – is best served by organisms appropriately using energy in terms of where, when and how it is stored, mobilised and deployed.


Evolution will therefore often find highly efficient solutions to some of the most complex problems that we care about in agents capable of intelligence – from learning and memory to value assignment and decision making. This means that Cajal’s principles of material, space and time can each be thought of as derivative biological quantities, optimised gradually through evolutionary pressures because they confer an advantage.

The competitive nature of evolution drives intelligent systems and agents to be efficient by solving problems in highly elegant (and often unpredictable) ways. Biological systems, like the brain, are ultimately influenced by a balance between the energetic costs incurred by their operation and the benefits realised by energy expenditure. This basic fact is not really reflected in modern AI models.

So, evolution shapes systems that are capable of solving competing problems that are both internal (e.g., how to expend energy) and external (e.g., how to act to survive), but in a way that can be highly efficient, in many cases elegant, and often surprising. But how does this evolutionary story of biological intelligence contrast with the current paradigm of AI?

In some ways, quite directly. Since the 1950s, neural networks have been developed as models directly inspired by neurons in the brain and the strengths of their connections, and many successful architectures of the past were directly motivated by neuroscience experimentation and theory. Yet AI research in the modern era has proceeded with little thought for intelligent systems in nature and their guiding principles. Why is this? There are many reasons. But one is that the exponential growth of computing capabilities, enabled by the steady increase of transistor counts on integrated circuits (observed since the 1950s and known as Moore’s Law), has permitted AI researchers to leverage significant improvements in performance without necessarily requiring extraordinarily elegant solutions. This is not to say that modern AI algorithms are not wildly impressive – they are. It is just that the majority of the heavy lifting has come from advances in computing power rather than from engineered design. Consequently, there has been relatively little recent need or interest among AI experts to look to the brain for inspiration.

___

There are clear and obvious links between computation in the brain and artificial systems that are only just being discovered.

___

But the tide is turning. From a hardware perspective, Moore’s Law will not continue ad infinitum (at 7 nanometres, transistor channel lengths are now nearing the fundamental limits of atomic spacing). We will therefore not be able to rely on ever-improving performance delivered by increasingly compact microprocessors. It is likely that we will require entirely new computing paradigms, some of which may be inspired by the types of computation we observe in the brain (the most notable being neuromorphic computing). From a software and AI perspective, it is becoming increasingly clear that – in part due to the reliance on increases in computational power – the AI research field will need to refresh its conception of what makes systems intelligent at all. For example, this will require much more sophisticated benchmarks of what it means to perform at human or super-human levels. In sum, the field will need to form a much richer view of the possible space of intelligent systems, and of how artificial models can occupy different places in that space.

There are clear and obvious links between computation in the brain and artificial systems that are only just being discovered. From principles of network structure, connectivity and local learning rules to economical trade-offs in energetic resources, it will become clear that – as we rely less on the increasing computational capacity of hardware – neurobiology has scope to inform radically new advances in AI.


And this circles us back to the beginning. Can principles of brain organisation and function inspire new advances at the forefront of AI? This is what we show in recent work that Jascha Achterberg and I conducted with colleagues at Cambridge, Oxford and Google DeepMind, published in Nature Machine Intelligence.

Inspired by the brain, we asked: what physical factors shape the combined structure and function of the human brain? We came to realise that two core constraints on the brain are space and communication. This is because real brains are not mathematically abstract entities – they are physical 3D systems in which communication occurs locally, through the connections between their many neurons. Constraints of this kind are almost never considered when training AI systems.
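To make the idea concrete, here is a minimal sketch of how such a spatial constraint can be imposed. It is written in PyTorch with illustrative sizes, names and penalty strength of my own choosing – a sketch of the general idea, not the published implementation: each unit of a recurrent layer is given a fixed 3D position, and the task loss is augmented with a cost that scales every weight by the wiring length it implies.

```python
import torch

# Minimal sketch (illustrative, not the published code): units of a
# recurrent layer get fixed 3D positions, and a wiring cost penalises
# strong connections in proportion to the distance they span.

n_units = 100
coords = torch.rand(n_units, 3)        # each unit placed in a unit cube
dist = torch.cdist(coords, coords)     # pairwise Euclidean wiring lengths

W_rec = torch.nn.Parameter(0.01 * torch.randn(n_units, n_units))

def wiring_cost(W, dist, strength=1e-2):
    # Distance-weighted L1 penalty: a long connection costs more than a
    # short one of equal strength, so optimisation favours local wiring
    # unless a long-range link pays for itself on the task.
    return strength * (W.abs() * dist).sum()

# During training, the spatial constraint simply joins the task objective:
#   loss = task_loss + wiring_cost(W_rec, dist)
#   loss.backward(); optimiser.step()
```

Everything else about training stays the same; the network simply pays for its wiring, much as a biological brain pays for its axons.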

___

Work like this is beginning to show us how embodying AI models with constraints from biology can start to uncover new principles of intelligent systems.

___

What we found through our spatially embedded neural networks was surprising – many features commonly observed in empirical brain networks arose organically in these AI systems once the constraints were baked in. Specifically, connections arranged their strength according to their length, the networks formed modules in their structure, and they developed a network topology that looks very much like what we see in biology. Not only that: because these networks were also performing a function – solving a memory and navigation task – we could test both how they hold information in their dynamics and how efficient their energy usage was relative to more typical AI models. As in the animal kingdom, we found that specialised information had a spatial extent (you could observe where information was being used), and additionally we found these networks to be more energy efficient.
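The first of these effects is straightforward to check in any trained network of this kind. An illustrative test – reusing the W_rec and dist variables from the earlier sketch, and again an assumption rather than the paper’s own analysis code – is simply to correlate each connection’s magnitude with the distance it spans:

```python
import numpy as np

# Illustrative check, reusing W_rec and dist from the earlier sketch:
# if space has shaped the trained network, connection strength should
# fall off with the wiring length each connection spans.
W = W_rec.detach().numpy()
D = dist.numpy()
i, j = np.triu_indices(W.shape[0], k=1)   # distinct unit pairs
r = np.corrcoef(np.abs(W[i, j]), D[i, j])[0, 1]
print(f"correlation of |weight| with distance: {r:.3f}")  # expect r < 0 after training
```

A clearly negative correlation is the signature of the distance-dependent weighting described above.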

There is a long way to go in this highly interdisciplinary and emerging field at the intersection of neuroscience and AI. But work like this is beginning to show us how embodying AI models with constraints from biology can start to uncover new principles of intelligent systems. It is a moving prospect to think that looking to nature may help us reveal new and exciting discoveries at the forefront of a (fresh kind of) AI.

 

For the Nature Machine Intelligence article see below:

Achterberg, J., Akarca, D., Strouse, D.J. et al. Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence 5, 1369–1381 (2023). https://doi.org/10.1038/s42256-023-00748-9
