Advocate Yashaswini Basu, a data and energy transition lawyer from Bengaluru, said, “The much needed regulatory oversight over synthetically generated information through the new IT rules enables mandatory transparency through permanent metadata and prominent labeling, ensuring users can distinguish AI-generated content from reality. By slashing takedown timelines to just three hours, the rules enforce rapid accountability while requiring platforms to use proactive automated tools against non-consensual imagery.”
Senior advocate Srinath Sridevan of the Madras High Court, commenting on the issue, said that the regulations cover a few different areas, and it is quite possible that the driving force behind each is slightly different.
“There are exceptions and these are fairly self-explanatory. Implementation is another matter altogether. Synthetically generated realistic content is all-pervasive. It is everywhere. Mandating certain compliances in relation to it will lead to compliance by a few and violation by most. Unless the government is able to come up with an automated monitoring mechanism, this regulation will remain a well-intentioned but empty rule,” he added.
Advocate Vikash Kumar Bairagi, associate, disputes resolution team, S&A Law Offices, New Delhi, said, “The 2026 amendments reflect an understandable anxiety around AI-generated misuse, but they respond with regulatory overreach rather than calibrated restraint. By mandating near real-time takedowns and automated verification of “synthetic” content, the rules risk incentivising intermediaries to err on the side of censorship.”
Advocate Ankit Konwar, principal associate, Hammurabi and Solomon Partners, New Delhi said that the amendment appears to reflect the government’s intent to address the growing risks posed by AI-generated and deepfake content by imposing traceability, labelling, and expedited compliance obligations on intermediaries to safeguard informational integrity and user trust.
He added, “While the regulatory objective is understandable, key challenges may, however, arise in uniformly identifying synthetic content, balancing compliance with user privacy and free speech concerns, and ensuring technological feasibility across platforms of varying scale. Effective enforcement will therefore depend on clear technical standards, proportional compliance expectations, and consistent regulatory oversight.”
Advocate Suhael Buttan, partner, SKV Law Offices, said that all synthetic content that is not outright illegal must be clearly labelled and must also contain permanent metadata or other provenance identifiers.
“Overall, the legal direction is clear. Platforms must ensure strong accountability, transparency and technical safeguards around AI-generated content in order to maintain compliance and retain statutory protections,” he added.
Advocate Huzefa Tavawalla, partner (head – digital disruption), Cyril Amarchand Mangaldas, said, “While it remains to be seen how jurisprudence evolves with respect to LLMs and intermediaries, the requirement to implement provenance mechanisms within a 10-day period appears aggressive. This is particularly given the technical and operational complexity involved in deploying such mechanisms at scale.”
Further, a key practical challenge which could persist is the ability of users/intermediaries to accurately determine which AI datasets are exempt from labelling requirements and which are not, raising questions around consistency and enforceability, he added.
Advocate Arya Tripathy, partner, Cyril Amarchand Mangaldas, said that the SGI Rules blur the contours of safe harbour protection.
“Intermediaries are obligated to deploy measures for preventing creation, dissemination, and other forms of dealing with unlawful SGI, which will vest them with direct and actual knowledge, and further, authorises them to take down such content, apart from other actions. This dilutes the principles around safe harbour, exposing them to actual liability for unlawful content,” she added.
Advocate Rashmi Deshpande, partner, Fountainhead Legal, pointed out that there is also a privacy angle to consider. The requirement to embed permanent metadata and unique identifiers improves traceability and can deter impersonation, fake political content, or non-consensual imagery.
“In situations like the Grok episode, where AI-generated responses created global controversy, these Rules would make platforms more accountable. They cannot simply react after something goes viral; they are expected to prevent harmful or misleading synthetic content from being generated in the first place,” said Deshpande.
Advocate Ankit Sahni, partner, Ajay Sahni & Associates, said, “Users who generate unlawful synthetic content would remain independently liable under criminal law. For AI users this could mean increased friction, mandatory disclosures and restricted functionalities, while for AI platforms compliance architecture itself becomes central to retaining safe harbour protection.”
Sumit is an intern with The Indian Express