The real significance of new AI technology is not that machines can be like humans but that humans are prone to deception, argues Simone Natale.
In the last few weeks, the official Twitter account of the Perseverance Mars Rover, a car-sized robot designed by NASA to explore Mars, has attracted news media attention and a steadily growing number of followers. While many marvelled at the photos of the Red Planet that the account posted, some asked whether the fact that the account tweets in the first person, as if it were the Rover itself reporting from Mars rather than NASA's PR office, amounts to a form of deception. Is anthropomorphization an honest way to communicate the rover's functioning to the public? Are we being led to project consciousness and sociality onto a machine that has neither?
Similar questions about Artificial Intelligence (AI) are being posed more and more frequently. As AI technologies become more pervasive and influential, many fear that it will become more difficult to distinguish between “real” AI technologies and blatant frauds. They point to AI and robotics companies using marketing tools and design features that exaggerate the apparent intelligence of robots. This debate, however, misses an important point. If we want to really understand the social and cultural dynamics activated by the new generation of AI and robots, we need to acknowledge that deception is not an incidental feature of these technologies. It is not, in other words, something that only characterizes certain uses and expressions of AI technology. Deception is, instead, ingrained in the very essence of what AI is and how it works. It is as central to AI as the circuits and software that make it run.
AI and deception
Because the term "deception" is usually associated with malicious endeavours, the AI and computer science communities are hesitant to discuss their work in terms of deception, unless it is presented as an unwanted, exceptional outcome that characterizes only specific objects and situations. Such an approach, however, relies on a rigid understanding of deception that does not sit well with recent explorations of the concept.
AI scientists have collected information on how users react to machines that exhibit the appearance of intelligent behaviour, and have incorporated this knowledge into the design of software and machines.
Scholars in social psychology, philosophy, and sociology have shown that deception is an inescapable fact of social life and plays a key role in social interaction and communication. As the philosopher Mark Wrathall put it, "it rarely makes sense to say that I perceived either truly or falsely," since deception is an integral part of how we perceive and navigate the world. If, for instance, I am walking in the woods and believe I see a deer to my side when in fact there is just a bush, I am deceived. Yet the same mechanism that made me see a deer where there was none, namely the tendency and ability to identify patterns in visual information, would on another occasion have helped me identify a potential danger. This shows how deception is functional to our ability to navigate the external world.