AI threatens elections and accountable governance

Deepfakes are a danger to this year’s elections

With elections in America, the UK, India, and more, this is a huge year for global politics. While bots were a cause for concern as far back as the 2016 election, technology has come a long way since then. With the rise of generative AI, the world’s elections face an existential threat from deepfakes, disinformation and misinformation. Nayef Al-Rodhan argues that we cannot rewire our brains to spot misinformation; the onus therefore falls on regulation and societal education.
Will artificial intelligence-generated deepfakes provide a “perfect storm” for malicious actors looking to hijack forthcoming local and general elections, as the British Home Secretary James Cleverly warned recently? Sophisticated deepfakes are becoming a global problem, and an urgent one at that. As an estimated 2 billion people head to the polls this year, there has hardly been a worse time to allow harmful content to flourish online.


The toxic mixture of increasingly sophisticated AI tools and flimsy prevention measures means that we could soon face a situation in which a viral deepfake derails democratic and governance processes. Policy and tech circles are starting, slowly, to wake up to the problem. This month, European political parties signed a voluntary code of conduct aimed at preventing the creation and dissemination of unlabelled deepfakes ahead of the European elections in June. Earlier this year, Silicon Valley bosses pledged to prevent AI-generated content from interfering with global elections: at this year’s Munich Security Conference, Amazon, Google, Meta, Microsoft, TikTok and OpenAI were among 20 tech companies that agreed to work together to combat the creation and spread of deepfake images, videos and audio designed to mislead voters.

These are much-needed initiatives. But to properly shield ourselves from a disinformation doomsday scenario, we need to grapple with the true reach and effect of these misleading campaigns and develop a deeper understanding of our neuro-behavioural susceptibility to these sorts of attacks. We must also ask ourselves some uncomfortable but important questions, starting with: why are these methods so effective and what can be done to dilute their influence on societies? 


Online disinformation has been an irritant in elections and political systems for many years. According to the Oxford Internet Institute, social media disinformation campaigns had operated in more than 80 countries by 2020. But rapid advances in AI mean that it is now easier than ever to manipulate media and public opinion through both disinformation (fake news created and spread deliberately by someone who knows it is false) and misinformation (fake news created and spread by mistake, by someone who doesn’t realise it is false). This is largely due to the advent of generative AI: powerful multi-modal models that can combine text, image, audio and video. These tools have amplified social media disinformation in unprecedented ways. Generative AI models can help bad actors tailor messaging so that it resonates with target audiences, and they have transformed the generation and dissemination of highly realistic deepfakes (synthetic media digitally manipulated to substitute one person’s likeness for another’s).
