Companies like OpenAI try to show that AIs are intelligent by hyping their high scores in behavioural tests – an approach with roots in the Turing Test. But there are hard limits to what we can infer about intelligence by observing behaviour. To demonstrate intelligence, argues Raphaël Millière, we must stop chasing high scores and start uncovering the mechanisms underlying AI systems’ behaviour.
Public discourse on artificial intelligence is divided by a widening chasm. While sceptics dismiss current AI systems as mere parlour tricks, devoid of genuine cognitive sophistication, evangelists and doomsayers view them as significant milestones on the path toward superhuman intelligence, laden with utopian potential or catastrophic risk.