Why we have the future of AI wrong

Computers should learn like babies

Artificial intelligence systems have beaten humans at chess, poker, Jeopardy, Go, and countless other games. But machines still falter when it comes to understanding some basic rules about the physical world. Building machine-learning systems based on how babies' brains work could be a step towards making them more efficient thinkers, like humans, write Susan Hespos and Brendan Dalton.

 

Computers have come a long way. From punch-card behemoths to hand-held voice-activated smartphones, advances in miniaturisation and computing power have supported the development of Artificial Intelligence (AI): smart marketing algorithms, incredible image-recognition capabilities, automated trading in the global financial markets, efficient search engines, and victories over humans at games considered to represent the apogee of human intelligence, such as chess and Go. Despite these achievements, AI is falling short.

 

In 1950 Alan Turing threw down a gauntlet when he said, “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” In the seven decades that have passed since this challenge was issued, we have yet to build an artificial intelligence model that can rival the infant cognition of a typically developing 1-year-old.


One place where artificial intelligence has failed is in replicating an infant’s understanding of physics. We are not referring to difficult physics questions about the nature of the big bang or black holes, but rather simple physics concepts like knowing that an unsupported object falls, knowing that a hidden object continues to exist, and expecting that a ball rolling down a ramp will ricochet off a wall at the bottom of the ramp. This raises the question of when we learnt how objects behave and interact. Did anyone explicitly teach us these notions?

 

___

Research in cognitive development shows that humans have expectations about common-sense aspects of physics in the first months of life.

___

Research in cognitive development shows that humans have expectations about these common-sense aspects of physics in the first months of life. To us, visual perception seems effortless. This is because dedicated low-level neural machinery processes visual information unconsciously and automatically, without draining our mental energy or conscious attention. This knowledge is evident without anyone explicitly teaching us. Pause and ponder for a moment that, underneath all the things that vary across humans, there exists a set of perceptual and conceptual capacities common to everyone. These capacities include the expectations that objects have permanence, in that they do not blink in and out of existence (e.g., your lost keys still exist even though you can’t find them), and that two solid objects do not pass through one another (e.g., a ball will bounce off, not pass through, a wall).

 

Humans aren’t the only smart ones when it comes to physical concepts. Expectations about where an object is and how it moves are apparent to other species as well. Rhesus macaques expect an object to stop when it meets a wall and not pass through it. Humans and chickens have similar expectations about partially hidden objects. To humans and many other animals, knowledge about how objects behave and interact seems ubiquitous, but how we acquire these expectations remains a mystery.

 

___

What we seem to do unconsciously and automatically takes a massive amount of computing power to reconstruct.

___
