A popular Indian actor entering an elevator in revealing clothes. Football fans in a stadium in Madrid holding an enormous Palestinian flag. A video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons. The pope wearing a Balenciaga puffer jacket. These unrelated events have something in common: they never happened. And yet, they were some of the most viral pieces of content on various social media platforms.

Thanks to artificial intelligence (AI), which has improved greatly over the past year, nearly anyone can now create a persuasive fake by typing text into popular AI generators that produce images, video or audio. The repercussions of AI-generated fake content, colloquially known as deepfakes, can be far-reaching in a polarised world and a divided online ecosystem, and have given lawmakers around the world a new worry.

Big tech companies, including Meta and Google, have announced measures to tackle content produced using the technology, but those systems have enough cracks for people intent on disseminating such content to exploit. Entire pornographic sites featuring deepfakes of popular actors have sprung up. The technology has also raised concerns about election integrity: researchers believe it could be used to manipulate the audio or video of politicians to make them appear to say or do something they never did.

A deepfake of actor Rashmika Mandanna is currently viral on platforms like Instagram, where her face has been morphed onto a video of a woman entering a lift in revealing clothes. On closer inspection, there are moments where one can tell the video is not genuine, but a casual viewer may well miss them.

“I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.…”
Rashmika Mandanna (@iamRashmika), November 6, 2023

This particular clip also highlights that the problems of deepfake technology are almost certain to be bigger for women, for whom online platforms are already a hostile place. Deepfakes add a new dimension to the ways in which women can be harassed on the internet. Even as actor Amitabh Bachchan called for legal action over the deepfake of Mandanna, Union Minister of State for Information Technology Rajeev Chandrasekhar said on X (formerly Twitter) that “deepfakes are the latest and even more dangerous and damaging form of misinformation and need to be dealt with by (online) platforms”.

Trust deficit

While deepfakes have not yet reached a level where they look entirely genuine, the possibility of AI-generated misinformation has left a psychological imprint and, in some cases, helped commentators dismiss genuine content as having been altered through artificial intelligence. Perhaps nothing has brought this point to the forefront more than the online commentary around the Israel-Gaza conflict. Platforms like X, Facebook and YouTube have been awash with AI-generated content, propagated by accounts on both sides, purporting to show the destruction the conflict has wrought since October 7.
These platforms are overrun with falsehoods about the conflict, and even though some of them have banned Hamas-linked accounts, the propaganda continues to draw millions of views. As platforms struggle to shield users from graphic and inaccurate content, trust continues to fray.

A global regulatory concern

Many of these concerns were on show at Bletchley Park last week, during the world’s first ever AI Safety Summit. Twenty-eight major countries, including the United States, China, Japan, the United Kingdom, France and India, along with the European Union, agreed to sign a declaration saying global action is needed to tackle the potential risks of AI. The declaration acknowledges the substantial risks from potential intentional misuse of, or unintended issues of control over, frontier AI, especially cybersecurity, biotechnology and disinformation risks.

The declaration came days after US President Joe Biden issued an executive order aimed at safeguarding against threats posed by AI and exerting oversight over the safety benchmarks companies use to evaluate generative AI bots such as ChatGPT and Google Bard. Among other things, the order requires AI companies to share the results of tests of their newer products with the federal government before making the new capabilities available to consumers.

Chandrasekhar, who represented India at Bletchley Park, said at the opening plenary session that the weaponisation seen on social media must be overcome and that steps should be taken to ensure AI represents safety and trust.

Less than two weeks before the G20 Leaders Summit in New Delhi, Prime Minister Narendra Modi had called for a global framework on the expansion of “ethical” AI tools. The statement endorsed, at the highest level, a shift in New Delhi’s position: from ruling out any legal intervention to regulate AI in the country to actively formulating regulations based on a “risk-based, user-harm” approach.

Companies respond with tech solutions

While laws could take a long time to bear fruit, online platforms with hundreds of millions of users continue to be weaponised to propagate AI-generated misinformation. The menace has prompted some companies to draw up clear platform policies on how they will deal with deepfakes. Earlier this year, Google announced tools, relying on watermarking and metadata, to identify synthetically generated content. According to Google, watermarking embeds information directly into content in ways that are maintained even through modest image editing, and the company will build its models to include watermarking and other techniques from the start. “Metadata allows content creators to associate additional context with original files, giving you more information whenever you encounter an image. We’ll ensure every one of our AI-generated images has that metadata,” Google CEO Sundar Pichai wrote in a blog post.
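To make the metadata idea concrete, the sketch below shows, in broad strokes, how a generator could attach provenance labels to an image file and how a platform could read them back. This is not Google’s actual implementation, whose format the article does not describe; it is a minimal illustration using Python’s Pillow library and hypothetical field names such as ai_generated and generator_id.

```python
# Illustrative sketch only: tags an image with provenance metadata and
# reads it back. The field names ("ai_generated", "generator_id") are
# hypothetical, not an actual industry schema. Requires: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(in_path: str, out_path: str, generator_id: str) -> None:
    """Re-save an image with PNG text chunks labelling it AI-generated."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator_id", generator_id)
    img.save(out_path, "PNG", pnginfo=meta)


def read_provenance(path: str) -> dict:
    """Return any text-chunk metadata found in a PNG, or an empty dict."""
    img = Image.open(path)
    # PNG text chunks are exposed via the .text mapping on PngImageFile.
    return dict(getattr(img, "text", {}) or {})


if __name__ == "__main__":
    # Create a placeholder image so the sketch is self-contained.
    Image.new("RGB", (64, 64), "white").save("original.png")
    tag_as_ai_generated("original.png", "tagged.png", "example-model-v1")
    print(read_provenance("tagged.png"))
    # Expected: {'ai_generated': 'true', 'generator_id': 'example-model-v1'}
```

The limitation of this approach also explains why Google pairs it with watermarking: metadata like this travels alongside the file and is easily stripped by editing or re-uploading, whereas a watermark is embedded in the image content itself and, per Google, is designed to survive modest editing.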