
On October 22, the Ministry of Electronics and Information Technology introduced draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to regulate the circulation of synthetically generated information (which includes deepfakes) on the internet. In its explanatory note accompanying the draft amendments, the government framed the rationale in these words: “To curb the spread of misinformation, damage reputation, manipulate and influence elections or commit financial fraud.”
The framework envisages labelling of synthetically generated information, which the draft defines as any information “artificially or algorithmically created, generated, modified or altered using a computer resource” in a manner that makes it appear authentic. Such content, under the new rules, must be prominently labelled with permanent unique metadata or an identifier, covering at least 10 per cent of the total surface area of visual content and 10 per cent of the total duration of audio content. In the case of social media platforms such as Facebook or Instagram, before any content is uploaded, (a) the user must declare whether it is AI-generated; (b) the platform must then deploy reasonable and appropriate technical measures to verify the accuracy of that declaration; and (c) once confirmed, the content must be displayed with an appropriate label indicating that it is synthetically generated.
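The mechanics of this flow lend themselves to a short illustration. The Python sketch below models the three steps for visual content and the 10 per cent sizing rule, under stated assumptions: the names (UploadDeclaration, verify_declaration, min_label_area) are hypothetical, and the verification step is a stand-in for whatever “reasonable and appropriate technical measures” a platform actually deploys. The draft prescribes outcomes, not implementations.

```python
# Hypothetical sketch of the draft's declare -> verify -> label flow.
# All names are illustrative; nothing here is prescribed by the rules.

from dataclasses import dataclass


@dataclass
class UploadDeclaration:
    content_id: str
    user_declared_synthetic: bool  # step (a): the user's own declaration


def min_label_area(width_px: int, height_px: int) -> int:
    """Label must cover at least 10% of the visual's total surface area."""
    return (width_px * height_px) // 10


def verify_declaration(declaration: UploadDeclaration) -> bool:
    """Step (b): placeholder for the platform's technical verification.

    A real platform might run detection classifiers or provenance/metadata
    checks; here we simply trust the user's declaration.
    """
    return declaration.user_declared_synthetic


def process_upload(declaration: UploadDeclaration,
                   width_px: int, height_px: int) -> dict:
    """Steps (a)-(c): declaration, verification, then prominent labelling."""
    is_synthetic = verify_declaration(declaration)
    result = {"content_id": declaration.content_id, "synthetic": is_synthetic}
    if is_synthetic:
        # Step (c): attach a label and size it to cover at least 10%
        # of the total surface area of the visual.
        result["label"] = "Synthetically Generated Information"
        result["min_label_area_px"] = min_label_area(width_px, height_px)
    return result


# Example: a 1920x1080 frame (2,073,600 px^2) needs a label covering
# at least 207,360 px^2.
print(process_upload(UploadDeclaration("vid-001", True), 1920, 1080))
```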
The initial reactions have been largely positive. Experts and stakeholders have described the draft as a timely step to combat rampant misinformation, illegal commercialisation, and misuse of algorithmic creativity. The move places India alongside jurisdictions such as the United States, the United Kingdom, the European Union, and China, and signals recognition of a rapidly evolving digital landscape in which the distinction between the real and the artificial has grown dangerously thin. Yet India’s unique social fabric and digital ecosystem demand that regulation evolve from a harm-based approach, which focuses on preventing damage, to a user-based approach, which protects and empowers those most affected by technology’s misuse.
First, the regulation must draw a clear distinction between content created with the aid of AI and content entirely generated by AI. This differentiation is crucial in an economy increasingly dependent on creative entrepreneurship. Many creators use AI tools to enhance video quality, refine scripts, or improve delivery, uses that augment rather than replace human creativity. A blanket rule without such nuance risks discouraging innovation and penalising legitimate creative use. Saudi Arabia’s Deepfake Guidelines, which distinguish between shallow fakes and deepfakes, offer a valuable precedent in this regard. While the former rely on basic, widely accessible digital editing techniques to enhance content quality, the latter use advanced AI and deep learning to create sophisticated, highly realistic fake media.
Second, the framework must focus on those who bear the brunt of synthetic misuse. Women and children, in particular, remain disproportionately vulnerable to image-based abuse online. It is therefore essential that the law explicitly penalise the creation and circulation of deepfake content without consent. Denmark offers an instructive model. Its legislation grants citizens copyright over their own likeness, allowing them to seek removal of manipulated content created without authorisation. A similar mechanism, combined with user alerts when suspected deepfakes circulate, would make the Indian regime more responsive to user protection.
Third, regulation must be paired with a sustained effort to improve AI literacy. As of March 2024, India had 954 million internet users, with over 95 per cent of its villages connected by 3G or 4G networks. Despite this impressive reach, many users remain ill-equipped to recognise AI-generated content, even if such content carries a visible label. The success of any labelling regime depends not only on the presence of a mark but also on the user’s ability to understand what it signifies.
This makes awareness and sensitisation programmes indispensable. Users need to know how to spot manipulated visuals, verify authenticity, and report potential deepfakes. Public information campaigns, school curricula, and digital literacy drives must integrate AI awareness as a core component, much as cybersecurity education did in earlier years. Furthermore, labels and warnings must be accessible across India’s linguistic spectrum. If an AI-generated image is labelled only in English, it serves little purpose in a country where most internet users engage with content in vernacular languages. Translating such warnings into regional languages, supported by simple visual cues, would go a long way in making the system effective.
Most global frameworks, whether in Europe, North America, or East Asia, focus on disclosure, transparency, and takedown mechanisms. India’s draft regulation adopts similar principles, but its challenge is more complex. It must build trust among users while allowing space for innovation; protect the vulnerable without stifling creativity; and balance the promise of technology with the protection of human dignity. Deepfakes may be synthetic, but the harms they cause are painfully real. India’s response must therefore be human-centred, so that Indians embrace digital technologies, including AI, as informed, empowered citizens rather than as passive data subjects.
The writer is a Supreme Court lawyer, currently pursuing an LLM at the University of Cambridge