
As India looks to mandate AI content labelling, examining the growing menace of deepfakes

India AI Content Labelling: Companies like Meta and Google already have some form of AI labelling on their platforms, and ask the creator at the time of uploading a piece of content whether it was made using AI. But enforcement remains patchy.

A deepfake is a video that has been digitally altered, typically used to spread false information. (AI-generated photo via Freepik)

India AI Content Labelling: In an attempt to check the “growing misuse of synthetically generated information, including deepfakes,” the Centre has now proposed draft rules that require mandatory labelling of artificial intelligence or AI-generated content on social media platforms like YouTube and Instagram. The social media platforms will be required to seek a declaration from users on whether the uploaded content is “synthetically generated information”.

According to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms that allow the creation of AI content will be required to ensure that such content is prominently labelled or embedded with permanent, unique metadata or an identifier. In the case of visual content, the label should cover at least 10 per cent of the total surface area; in the case of audio content, it should cover the initial 10 per cent of the total duration.
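To make those thresholds concrete, here is a minimal, purely illustrative sketch of the arithmetic behind the 10 per cent rule; the function names and example figures are hypothetical, and the rule text, not this sketch, is authoritative.

```python
# Illustrative arithmetic only for the draft's "10 per cent" thresholds.

def min_label_area_px(width_px: int, height_px: int) -> int:
    """Visual content: label must cover at least 10% of the total surface area."""
    return int(0.10 * width_px * height_px)

def label_window_seconds(duration_s: float) -> float:
    """Audio content: label must cover the initial 10% of the total duration."""
    return 0.10 * duration_s

# Example: a 1080x1920 video frame and a 60-second audio clip.
print(min_label_area_px(1080, 1920))   # 207360 pixels, e.g. roughly a 1080x192 banner
print(label_window_seconds(60.0))      # 6.0 seconds at the start of the clip
```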

A deepfake is a video that has been digitally altered, typically used to spread false information. In the Indian context, the issue first surfaced in 2023, when a deepfake video of actor Rashmika Mandanna entering an elevator went viral on social media. Close on the heels of that incident, Prime Minister Narendra Modi called deepfakes a new “crisis”.

What India is proposing

As per the draft amendments, social media platforms would have to get users to declare whether uploaded content is synthetically generated; deploy “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations; and, where the declaration or the technical verification confirms that the content is synthetically generated, ensure that this is clearly and prominently displayed with an appropriate label or notice.
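The draft does not prescribe a specific implementation, but the three obligations can be read as a simple pipeline. Below is a minimal, hypothetical sketch of that flow; the looks_synthetic check merely stands in for whatever automated tooling a platform might actually deploy.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # step 1: declaration sought from the user

def looks_synthetic(content_id: str) -> bool:
    """Step 2 (hypothetical): an automated check verifying the declaration.
    A placeholder for whatever detector a platform actually deploys."""
    return False

def process_upload(upload: Upload):
    # Step 3: if either the declaration or the automated check says
    # "synthetic", the content must carry a clear, prominent label.
    if upload.declared_synthetic or looks_synthetic(upload.content_id):
        return "Label: synthetically generated information"
    return None

print(process_upload(Upload("vid-001", declared_synthetic=True)))
```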

If they fail to comply, platforms may lose the legal immunity they enjoy from liability for third-party content. In effect, their responsibility will extend to taking reasonable and proportionate technical measures to verify the correctness of user declarations and to ensure that no synthetically generated information is published without such a declaration or label.

The draft amendments introduce a new clause defining synthetically generated information as “information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”.

Some form of labelling already happens online

Companies like Meta and Google already have some form of AI labelling on their platforms, and ask the creator at the time of uploading a piece of content whether it was made using AI. On Instagram, for instance, Meta applies an ‘AI Info’ label to content that is modified or created using AI, although enforcement remains patchy, as several pieces of AI content on the platform don’t seem to carry the label.


Last year, Meta said that since AI-generated content appears across the internet, it was working with other companies in the industry to develop common standards for identifying it, through forums like the Partnership on AI (PAI). It was also building tools that can identify invisible markers at scale, so it could label images generated with tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
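As a rough illustration of what checking for such markers could look like, here is a hedged sketch that scans an image’s embedded metadata for provenance tags. The key names are hypothetical; real industry systems rely on standards such as C2PA content credentials, whose manifests are considerably more structured than this.

```python
from PIL import Image  # pip install Pillow

# Illustrative marker names only; actual provenance standards define
# their own, more structured formats.
AI_MARKER_KEYS = {"ai_generated", "digitalsourcetype", "c2pa"}

def has_ai_marker(path: str) -> bool:
    """Return True if any format-level metadata key hints at AI provenance."""
    with Image.open(path) as img:
        keys = {str(k).lower() for k in img.info}  # PNG text chunks etc.
        return any(marker in key for key in keys for marker in AI_MARKER_KEYS)

print(has_ai_marker("upload.png"))  # hypothetical file
```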

YouTube adds an ‘Altered or synthetic content’ label to videos created using AI, along with a description of how the video was made, which can offer insight into the content’s origin and whether it has been meaningfully altered using AI.

Most of these measures, however, remain reactive: labels often appear only after a video has been brought to the platform’s attention, in cases where the creator has not declared that the content was made using AI.

India’s proposed amendments go a step further: companies would have to verify AI content on their platforms by deploying suitable technical measures, without necessarily being alerted to specific pieces of content.


India’s entertainment Inc fights against deepfakes

The conversation around the pitfalls of AI-generated deepfakes has taken the entertainment world by storm, as several prominent actors, including Amitabh Bachchan, Aishwarya Rai Bachchan, Akshay Kumar, and Hrithik Roshan, have filed cases to protect their “personality rights” amid a rapid rise in AI-generated videos that steal their likenesses, including their faces and voices.

India’s laws around personality rights are relatively lax compared with those in other jurisdictions. Experts say Indian law does not explicitly recognise personality rights, with protection coming only from a patchwork of other legislation that may indirectly protect these rights.

This gap was starkly highlighted when the production company behind the popular film Raanjhanaa altered the movie’s ending using AI without the consent of its director and actors, much to their dismay.

How other countries are tackling deepfakes

Under the European Union’s AI Act, AI providers must label synthetic audio, images, video, or text in a machine-readable way so it’s detectable as artificial. Deployers of AI systems that create deepfakes or text for public interest content must also disclose when material has been artificially generated or altered.


Last month, China, too, rolled out its AI labelling rules, under which content providers must now display clear labels to identify material created by artificial intelligence. Visible AI symbols are required for chatbots, AI writing, synthetic voices, face swaps and immersive scene editing. For other AI-based content, hidden tags such as watermarks will suffice. Platforms must also act as monitors — when AI-generated content is detected or suspected, they must alert users and may apply their own labels.

Denmark has taken a radically different approach. The country is proposing legislation that aims to protect its citizens from deepfakes by giving them copyright over their own likeness. If the law passes, anyone could seek the removal of digitally altered photos or videos of themselves created without their consent.


Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.
