
Opinion: On AI content, labelling is a technical solution. But does it empower the user?


Artificial intelligence. (Image: FreePik)
November 28, 2025

A few days ago, a customer used an AI-generated image of cracked eggs to claim a refund from a food delivery app; the incident went viral and sparked debate about AI-assisted fraud. Earlier this week, a deepfake of Aishwarya Rai Bachchan questioning Prime Minister Narendra Modi on India’s alleged losses to Pakistan was widely circulated before fact-checkers debunked it.

These incidents highlight the growing potential of synthetic content to disrupt online information ecosystems and cause tangible harm such as fraud, misrepresentation and harassment. This is pushing policymakers worldwide to search for solutions that could mitigate such risks and empower users. One such solution currently gaining traction is the use of technical measures such as watermarks, detection tools, and content filtering to identify AI-generated material online.


And yet, there is a persistent view within AI policy analyst circles that such solutions cannot resolve the social harms associated with synthetically generated content. The refrain is familiar: Technical solutions are inadequate when elevated to policy. This view is not wrong, but it misses a crucial point. By dismissing technical solutions outright, we risk overlooking the genuine value they can provide when properly conceived and strategically aligned with well-defined policy goals. The question is not whether technical tools are sufficient on their own, but whether they can serve as effective instruments within a broader policy framework. When designed and deployed with clear objectives, technical solutions have a legitimate and important role to play in addressing the challenges posed by synthetic content.

Last month, India released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. If enacted into law, social media intermediaries that enable the creation and dissemination of AI-generated content will be legally bound to take technical measures to label or watermark such content. Where they enable the creation of such content, they will need to embed a visible, permanent, and unique identifier in it to inform users that what they are consuming is synthetically generated. The draft provisions prescribe a 10 per cent threshold for the label: of surface area in the case of visual content, and of initial duration in the case of audio content. Where they are significant intermediaries, having amassed 5 million registered users in India, they will have to ensure that their users make a declaration every time they upload synthetically generated content. Such declarations will then need to be verified by the intermediary and reflected on the content before it can be uploaded and shared. Non-compliant intermediaries will be stripped of their safe harbour protection and may be held legally liable for such content.
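To make the arithmetic of that threshold concrete, here is a minimal illustrative sketch in Python of how such a check could look. The function names, and the reading that the 10 per cent figure applies to surface area for visuals and to initial duration for audio, are assumptions drawn from the description above; this is not any real compliance tool.

# Illustrative sketch only: hypothetical helpers for the draft rules'
# 10 per cent thresholds. Names and structure are assumptions, not part
# of the draft amendments or any actual compliance tooling.

def visual_label_meets_threshold(label_width: int, label_height: int,
                                 frame_width: int, frame_height: int) -> bool:
    """Visible label should cover at least 10% of the visual content's surface area."""
    label_area = label_width * label_height
    frame_area = frame_width * frame_height
    return label_area >= 0.10 * frame_area

def audio_label_meets_threshold(label_seconds: float, total_seconds: float) -> bool:
    """Audible disclosure should cover at least 10% of the initial duration of the audio."""
    return label_seconds >= 0.10 * total_seconds

# Example: a 1920x1080 frame needs a label of at least 207,360 square pixels,
# and a 60-second clip needs a disclosure spanning at least its first 6 seconds.
print(visual_label_meets_threshold(640, 360, 1920, 1080))   # True (230,400 px^2)
print(audio_label_meets_threshold(6.0, 60.0))               # True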

According to an explanatory note appended to the draft amendments, the underlying policy objective is to empower users and ensure greater transparency by mandating the disclosure of synthetically generated content. This is a well-intentioned policy move. However, in the absence of trust and confidence in their authenticity or permanence, disclosures can do little to empower users. Users are only meaningfully empowered when the information made transparent to them enables them to exercise their own agency. This is precisely the logic that was operationalised in the regulation of dark patterns, that is, cognitively manipulative UI/UX design that can nudge users into making choices they may not have wanted to make.


The core issue with a legal mandate for labelling synthetically generated content is that it functions as a brittle trust mechanism. Labels can be removed, altered or falsified. Their technical robustness and durability vary drastically, for both visible and invisible marks. For example, there is data to suggest that invisible watermarks in text can be manipulated far more easily than those in audiovisual content. There is also the question of whether low-stakes synthetic manipulation of content warrants a disclosure at all. Above all, transparency is not a static label but a process, a story that can evolve and be scrutinised by users to make decisions. Simply because something was generated using AI does not mean it is misleading, and simply because something was not generated using AI does not mean it is harmless.
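That brittleness is easy to demonstrate. The Python sketch below shows a hypothetical label stored as metadata alongside the content: a simple re-share that copies only the content silently drops the disclosure. The structure is purely illustrative and does not correspond to any actual labelling standard.

# A minimal sketch of why metadata-style labels are brittle: the "AI-generated"
# marker below sits alongside the content rather than being bound to it, so a
# trivial re-encode or copy of the content silently discards it. The dict
# layout here is illustrative, not any real labelling format.

original = {
    "pixels": [[0, 127, 255], [255, 127, 0]],   # stand-in for image data
    "metadata": {"label": "AI-generated", "tool": "some-image-model"},
}

# A re-upload, a screenshot, or a simple re-save that copies only the content
# drops the disclosure without altering what users actually see.
reshared = {"pixels": original["pixels"], "metadata": {}}

def is_disclosed(item: dict) -> bool:
    return item.get("metadata", {}).get("label") == "AI-generated"

print(is_disclosed(original))  # True
print(is_disclosed(reshared))  # False: same content, disclosure gone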

In this context, certain provenance systems and verification standards, some of which are already being adopted, offer a more compelling frame than labels. Unlike the binary information captured by labels, provenance systems can document the history of a piece of content: Its origin, the transformations it has undergone, and the actors involved. They resemble a “chain of custody” in legal practice, where the integrity of evidence depends not on a single label but on a verifiable record of its journey. For example, the Coalition for Content Provenance and Authenticity (C2PA) has created Content Credentials, which help identify synthetically generated content by embedding cryptographically signed statements into a digital file that record its origin and editing history. When creators or platforms create content using AI tools, they can automatically add credentials marking it as AI-generated. Users can then view such credentials to see the content’s history, including whether it was created or significantly edited by AI, and make decisions about its authenticity and reliability. This, of course, is not an infallible technical solution either; it depends on voluntary opt-ins and user literacy.
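To illustrate the shape of such a record, the following Python sketch builds and verifies a toy “content credential”. It is not the C2PA manifest format: real Content Credentials use public-key signatures and certificate chains, whereas this stand-in uses a simple keyed hash from the standard library so the example stays self-contained.

# A toy "content credential": a hedged sketch of the provenance idea, not the
# actual C2PA manifest format. It records a hash of the content plus an edit
# history, and signs that record so tampering with either is detectable.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-held-by-the-tool-or-platform"  # placeholder secret

def issue_credential(content: bytes, history: list) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. who generated it, what edits followed
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    claimed_sig = manifest.get("signature", "")
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(claimed_sig, expected)
            and body["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
cred = issue_credential(image, [{"action": "created", "tool": "ai-image-model"}])
print(verify_credential(image, cred))         # True: record matches content
print(verify_credential(image + b"x", cred))  # False: content was altered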

However, the implication for law is not that provenance systems should simply replace watermarking by fiat, but that lawmakers should not confuse technical tools with policy goals. Watermarking is one tool, credentialling is another. The real policy question is: What do we want users to be able to do? If the goal is for users to be able to understand the origin or true nature of content, then technical solutions should be judged against that standard.

India’s draft amendments are a step towards accountability. Yet, if they stop at over-specifying a singular, brittle technical solution, they will have mistaken the tool for the goal.

The writer is a research analyst at Carnegie India
