Premium

India’s new 3-hour deepfake removal rule: Experts urge strict compliance

New Deepfake Rules in India: The amendments to the IT Rules, 2021 shorten deepfake takedown timelines, introduce compliance obligations for platforms hosting synthetically generated information (SGI), and make three-monthly user warnings mandatory.

Deepfake Law in India: The amendments address the rise of deepfakes and AI-generated content and introduce a detailed definition of 'synthetically generated information' (SGI). (Image generated using AI)

With inputs from Sumit Kumar Singh

Deepfake Law in India: In a major regulatory overhaul aimed at tackling deepfakes and harmful online content and improving platform accountability, the Centre notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 on February 10. Legal experts have welcomed the move but call for efficient implementation.

The 2026 amendment modifies the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and is set to come into effect from February 20, 2026.

Amendments

  • The amendments shorten content takedown timelines, introduce detailed compliance obligations for platforms hosting synthetically generated information (SGI), and make three-monthly user warnings mandatory.
  • Under the amended Rule 3(1)(c), intermediaries (social media platforms like Facebook, Instagram, YouTube, X and other websites) will now be required to inform users every three months, instead of once a year, about the consequences of violating the platform’s terms of service, privacy policy or user agreement.
  • Users must be clearly informed that access or usage rights may be withdrawn or disabled for non-compliance.
  • Users may face penalties under applicable laws for creating, generating or modifying unlawful content.
  • Certain offences require mandatory reporting under laws such as the Protection of Children from Sexual Offences (POCSO) Act, 2012 and the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023.
  • The move is seen as an attempt to strengthen informed digital participation and reduce the circulation of unlawful material including deepfakes.

Takedown timelines slashed drastically

  • One of the most striking changes is the sharp reduction in timelines for the removal of content, including deepfakes, and for grievance redressal.
  • The amendments mandate that court-ordered or law enforcement-directed takedowns must now be complied with within three hours, as against the earlier 36-hour window.
  • Similarly, platforms must remove non-consensual nudity within two hours, down from 24 hours.
  • Grievance redressal timelines have also been halved to seven days.
  • Legal experts say this compressed timeframe will require platforms to establish round-the-clock rapid response teams and enhanced automated moderation systems.
  • This replaces the earlier more restrictive structure and is expected to expedite law enforcement coordination.

New framework for ‘synthetically generated information’

  • In a significant move addressing the rise of deepfakes and AI-generated content, the amendments introduce a detailed definition of ‘synthetically generated information’ (SGI).
  • SGI includes audio, visual or audio-visual content that is artificially or algorithmically created or modified in a manner that makes it appear real and indistinguishable from actual persons or events.

What is not SGI

  • The Rules clarify that routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression will not qualify as SGI, provided it does not materially alter, distort, or misrepresent the substance, context, or meaning of the content.
  • Similarly, routine or good-faith creation, preparation, formatting, presentation or design of documents, presentations, portable document format (PDF) files, educational or training materials, and research outputs will be excluded.

Additional compliance burden on SGI platforms

  • Intermediaries offering SGI generation or sharing services must now inform users that punishment may be attracted for directing or causing SGI to be created or shared unlawfully.
  • They must also warn users that violations could result in removal of content, including deepfakes; suspension or termination of user accounts; disclosure of identity to complainants; and mandatory reporting under POCSO or BNSS.

Mandatory proactive detection and labelling

  • Platforms must implement “reasonable and appropriate technical measures,” including automated tools, to prevent the generation or sharing of unlawful SGI.
  • Prohibited SGI categories include content that contains child sexual abuse material (CSAM), non-consensual nudity, or obscene or sexually explicit material; creates false documents or electronic records; relates to procurement of explosives, arms or ammunition; or falsely depicts a natural person or real-world event in a deceptive manner.
  • Where SGI does not fall under prohibited categories, it must be prominently labelled.
  • Labels must be clearly visible in visual displays, prefixed prominently in audio content, and embedded with metadata or technical provenance markers, including a unique identifier of the computer resource used to generate the content.
  • The rules explicitly prohibit suppression, modification or removal of such labels and metadata.

Stricter rules for significant social media intermediaries (SSMIs)

  • SSMIs face additional obligations, including mandatory user declarations where content is SGI.
  • They must also verify the accuracy of such declarations using technical measures.
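The detection-and-labelling obligations above can be pictured as a simple two-step pipeline: refuse content that matches a blocklist of known unlawful material, and attach a prominent label plus provenance metadata to everything else. The sketch below is a minimal illustration under stated assumptions, not anything the Rules prescribe: the record format, field names, and exact SHA-256 blocklist matching are all hypothetical (real platforms typically use perceptual hashing and standardised provenance formats such as C2PA).

```python
# Minimal sketch of a "block prohibited SGI, label the rest" flow.
# All names and formats here are illustrative assumptions.
import hashlib
import uuid

# Hypothetical blocklist of hashes of known unlawful media.
BLOCKLIST = {hashlib.sha256(b"known-unlawful-sample").hexdigest()}

def process_sgi(content: bytes, generator_url: str):
    """Return None if the content is blocked, else (content, provenance record)."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in BLOCKLIST:
        return None  # prohibited SGI: must not be generated or shared
    record = {
        # Label to be displayed prominently alongside the content.
        "label": "Synthetically Generated Information",
        # Stable unique identifier of the generating computer resource
        # (derived here from a hypothetical generator URL).
        "generator_id": str(uuid.uuid5(uuid.NAMESPACE_URL, generator_url)),
        # Content hash, so downstream suppression or alteration of the
        # label/metadata pairing can be detected.
        "content_sha256": digest,
    }
    return content, record

# Blocked content is refused; other SGI is published with its record.
assert process_sgi(b"known-unlawful-sample", "https://example.com/model") is None
ok = process_sgi(b"ordinary-synthetic-clip", "https://example.com/model")
assert ok is not None and ok[1]["label"] == "Synthetically Generated Information"
```

The content hash in the record is what makes the prohibition on suppressing or modifying labels checkable: a verifier can recompute the hash and confirm the metadata still belongs to the content it accompanies.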

A regulatory push against deepfakes

  • The amendments represent one of the most comprehensive regulatory responses to deepfakes, AI-generated misinformation and digital harms in India.
  • By sharply reducing takedown timelines, mandating proactive detection, and enforcing metadata-based labelling, the government appears to be signalling zero tolerance for deepfake abuse and unlawful synthetic content.
  • With the rules set to take effect from February 20, 2026, intermediaries now face less than ten days to recalibrate compliance mechanisms and technological safeguards.
  • Industry stakeholders are expected to seek clarifications on implementation logistics, especially concerning the feasibility of the three-hour takedown mandate and permanent metadata requirements.

Experts speak

Advocate Yashaswini Basu, a data and energy transition lawyer from Bengaluru, said, “The much-needed regulatory oversight over synthetically generated information through the new IT rules enables mandatory transparency through permanent metadata and prominent labeling, ensuring users can distinguish AI-generated content from reality. By slashing takedown timelines to just three hours, the rules enforce rapid accountability while requiring platforms to use proactive automated tools against non-consensual imagery.”

Senior advocate Srinath Sridevan of the Madras High Court, commenting on the issue, said that the regulations cover a few different areas, and it is quite possible that the driving force behind each is slightly different.

“There are exceptions and these are fairly self-explanatory. Implementation is another matter altogether. Synthetically generated realistic content is all-pervasive. It is everywhere. Mandating certain compliances in relation to it will lead to compliance by a few and violation by most. Unless the government is able to come up with an automated monitoring mechanism, this regulation will remain a well-intentioned but empty rule,” he added.

Advocate Vikash Kumar Bairagi, associate, dispute resolution team, S&A Law Offices, New Delhi, said, “The 2026 amendments reflect an understandable anxiety around AI-generated misuse, but they respond with regulatory overreach rather than calibrated restraint. By mandating near real time takedowns and automated verification of “synthetic” content, the rules risk incentivising intermediaries to err on the side of censorship.”


Advocate Ankit Konwar, principal associate, Hammurabi and Solomon Partners, New Delhi said that the amendment appears to reflect the government’s intent to address the growing risks posed by AI-generated and deepfake content by imposing traceability, labelling, and expedited compliance obligations on intermediaries to safeguard informational integrity and user trust.

He added, “While the regulatory objective is understandable, key challenges may however arise in uniformly identifying synthetic content, balancing compliance with user privacy and free speech concerns, and ensuring technological feasibility across platforms of varying scale. Effective enforcement will therefore depend on clear technical standards, proportional compliance expectations, and consistent regulatory oversight.”

Advocate Suhael Buttan, partner, SKV Law Offices said that all synthetic content that is not outright illegal must be clearly labeled and it must also contain permanent metadata or other provenance identifiers.

“Overall, the legal direction is clear. Platforms must ensure strong accountability, transparency and technical safeguards around AI generated content in order to maintain compliance and retain statutory protections,” he added.


Advocate Huzefa Tavawalla, partner (head – digital disruption), Cyril Amarchand Mangaldas said, “While it remains to be seen how jurisprudence evolves with respect to LLMs and intermediaries, the requirement to implement provenance mechanisms within a 10‑day period appears aggressive. This is particularly given the technical and operational complexity involved in deploying such mechanisms at scale.”

Further, a key practical challenge which could persist is the ability of users/intermediaries to accurately determine which AI datasets are exempt from labelling requirements and which are not, raising questions around consistency and enforceability, he added.

Advocate Arya Tripathy, partner, Cyril Amarchand Mangaldas said that the SGI Rules blur the contours of safe harbor protection.

“Intermediaries are obligated to deploy measures for preventing creation, dissemination, and other forms of dealing with unlawful SGI, which will vest them with direct and actual knowledge, and further, authorises them to take down such content, apart from other actions. This dilutes the principles around safe harbor, exposing them to actual liability for unlawful content,” she added.


Advocate Rashmi Deshpande, partner, Fountainhead Legal, pointed out that there is also a privacy angle to consider: the requirement to embed permanent metadata and unique identifiers improves traceability and can deter impersonation, fake political content, or non-consensual imagery.

“In situations like the Grok episode, where AI-generated responses created global controversy, these Rules would make platforms more accountable. They cannot simply react after something goes viral, they are expected to prevent harmful or misleading synthetic content from being generated in the first place,” said Deshpande.

Advocate Ankit Sahni, partner, Ajay Sahni & Associates said, “Users who generate unlawful synthetic content would remain independently liable under criminal law. For AI users this could mean increased friction, mandatory disclosures and restricted functionalities, while for AI platforms compliance architecture itself becomes central to retaining safe harbour protection.”

Sumit is an intern with The Indian Express

Vineet Upadhyay is an Assistant Editor with The Indian Express, where he leads specialized coverage of the Indian judicial system. He has spent the better part of his career analysing the intricacies of the law, demystifying judgments from the Supreme Court of India, various High Courts, and District Courts. His reporting spans constitutional and civil rights (landmark rulings on privacy, equality, and state accountability), criminal justice and enforcement (high-profile cases involving the Enforcement Directorate, NIA, and POCSO matters), and consumer rights and environmental law (medical negligence compensation, environmental protection such as the "living person" status of rivers, and labour rights). Before joining The Indian Express, he served as a Principal Correspondent/Legal Reporter for The Times of India and held significant roles at The New Indian Express, reporting from critical legal hubs including Delhi and Uttarakhand.

 
