In March 2018, the Cambridge Analytica scandal brought the impact of social media on electoral politics into mainstream public discourse, along with the possibility of manipulating the views of Facebook users with data harvested from their profiles. To what extent have things changed in 2024?
The shadow of large language models looms over elections around the world, and stakeholders are aware that even one relatively successful deployment of an artificial intelligence (AI)-generated disinformation tool could significantly impact both campaign narratives and election results.
Three-way trouble
AI can accelerate the production and diffusion of disinformation in three broad ways, contributing to organised attempts to persuade people to vote in a certain way.
First, AI can magnify the scale of disinformation thousands of times over. Second, hyper-realistic deepfakes of pictures, audio, or video could influence voters powerfully before they can be fact-checked. Third, and perhaps most importantly, AI enables microtargeting.
AI can be used to inundate voters with highly personalised propaganda on a scale that could make the Cambridge Analytica scandal appear microscopic, as the persuasive ability of AI models would be far superior to that of the bots and automated social media accounts that are now the baseline tools for spreading disinformation.
The risks are compounded by social media companies such as Facebook and Twitter having significantly cut their fact-checking and election-integrity teams. While YouTube, TikTok, and Facebook do require election-related advertisements generated with AI to be labelled, that may not be a foolproof deterrent.
The new frontier
Asked if he was worried about AI’s ability to spread disinformation, OpenAI chief executive Sam Altman said last year: “Right now, it’s like troll farms trying to interfere with elections… They make one great meme and that spreads out… That’ll continue to happen and it’ll get better. But…what happens if an AI reads everything you’ve ever written online, every article, every tweet, everything, and then right at the exact moment, sends you one message customised for you? That really changes the way you think about the world… that’s like a new kind of interference that just wasn’t possible before AI.”
That, really, is the new frontier.
Imminent danger
A new study published in PNAS Nexus predicts that disinformation campaigns will increasingly use generative AI to propagate election falsehoods. The research, which used "prior studies of cyber and automated algorithm attacks" to analyse, model, and map the proliferation of bad-actor AI activity online, predicts that AI will help spread toxic content across social media platforms on an almost-daily basis in 2024. (Neil F. Johnson et al., 'Controlling bad-actor-artificial intelligence activity at scale across online battlefields', PNAS Nexus)
The fallout could potentially affect election results in more than 50 countries. The experience of last year's elections in Slovakia and Argentina is instructive in this regard.
Fakes around the world
The World Economic Forum's Global Risks Perception Survey ranks misinformation and disinformation among the top 10 risks, with easy-to-use interfaces of large-scale AI models enabling a boom in false information and "synthetic" content, from sophisticated voice cloning to fake websites. The report also warns that disinformation in these elections could destabilise societies by discrediting governments and calling their legitimacy into question.
Potential displayed
Generative AI companies with the most popular visual tools prohibit users from creating “misleading” images. However, researchers with the British nonprofit Centre for Countering Digital Hate (CCDH), who tested four of the largest AI platforms — Midjourney, OpenAI’s ChatGPT Plus, Stability.ai’s DreamStudio, and Microsoft’s Image Creator — succeeded in making deceptive election-related images more than 40% of the time.
The researchers were able to create fake images of Donald Trump being led away by police in handcuffs and Joe Biden in a hospital bed. According to a BBC report citing a public database, Midjourney users have created fake photos of Biden handing wads of cash to Israeli Prime Minister Benjamin Netanyahu, and of Trump playing golf with Russian President Vladimir Putin.
Regulatory tightrope
The Indian government has asked digital platforms to provide technical and business process solutions to prevent and weed out misinformation that can harm society and democracy. Minister for IT and Communications Ashwini Vaishnaw has said that a legal framework against deepfakes and disinformation will be finalised after the elections.
Earlier this month, the IT Ministry issued an advisory to companies such as Google and OpenAI, and to those running foundational models and wrappers, that their services should not generate responses that are illegal under Indian laws or "threaten the integrity of the electoral process". The advisory faced a backlash from some startups in the generative AI space, as well as from investors in the ecosystem abroad, over fears of regulatory overreach that could throttle the fledgling industry.
Aravind Srinivas, founder of Perplexity AI, said the advisory was a “bad move by India”, and Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, described it as “anti-innovation”.
While the government clarified that the advisory was directed only at "significant" platforms and not startups, the episode underlines the need for regulators to tread carefully along the fine line between countering AI-linked misinformation and being seen as stifling AI-led innovation.