
India has reached a turning point in its digital evolution. The Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the IT Rules, 2021, to regulate synthetic content, including AI-generated videos, images, and voices. The draft, open for public consultation until November 6, aims to make the creators and platforms behind such content accountable and transparent.
If adopted, India would become one of the first democracies to formally address the dangers of AI-driven misinformation. In a year when deepfakes have infiltrated politics, entertainment, and social discourse, the timing could not be more urgent.
At stake is something deeper than data or privacy; it is the integrity of truth itself.
The draft amendments propose the following. One, they define “synthetically generated information” as content created or altered by algorithms to resemble authentic media. Two, they require platforms that create or host such content to label it clearly, for example, by dedicating at least 10 per cent of visual space or the first 10 per cent of audio to disclaimers. Three, they mandate automated detection systems and user declarations for synthetic media uploads. And four, they preserve safe-harbour protection for intermediaries that remove harmful synthetic content, while penalising those that don’t.
The government’s goal is to curb the spread of impersonation, fake news, and deepfake-based fraud without stifling innovation. But translating that intent into effective enforcement will be India’s toughest governance challenge.
Every technological revolution tests society’s immune system. The internet tested privacy; social media tested civility; AI is now testing reality itself. Deepfake tools can empower creators, educators, and entrepreneurs. A small business can use AI to make multilingual ads; a filmmaker can restore lost footage. But the same technology can also destroy reputations, manipulate elections, or incite violence.
The paradox is this: we need AI for growth, but we need governance for trust. MeitY’s draft rules are a recognition that truth has become an infrastructure problem, not just a moral one. The question is how to build that infrastructure without choking creativity.
India must resist the temptation to legislate faster than it can enforce. The draft’s proposed “10 per cent visual disclaimer” is symbolically strong but technically weak. A more durable approach would rest on three pillars. First, verification infrastructure: Build a digital provenance framework, akin to Aadhaar, for authenticity, where each piece of content carries an invisible but verifiable signature. Second, tiered accountability: Differentiate between platforms that host, generate, or monetise synthetic media. Responsibility should rise with influence. Third, AI literacy: Equip citizens to detect manipulation. Technology alone can’t defend democracy; informed citizens can.
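The “verification infrastructure” pillar above can be illustrated with a minimal sketch. This is not a description of any system the draft rules mandate; it is a hypothetical example, using a shared-secret HMAC in place of the public-key infrastructure a real provenance framework would need, of how a trusted issuer could bind a piece of media to a signed record of its origin so that any later alteration is detectable.

```python
# Illustrative content-provenance sketch (assumption: a single trusted
# issuer holding a secret key; real systems would use PKI, not HMAC).
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key, for illustration only


def sign_content(media_bytes: bytes, metadata: dict) -> dict:
    """Return a provenance record binding the media hash to its metadata."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}


def verify_content(media_bytes: bytes, record: dict) -> bool:
    """Check the issuer's signature, then check the media still matches its hash."""
    expected = hmac.new(ISSUER_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    claimed = json.loads(record["payload"])["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()


video = b"raw video bytes"
record = sign_content(video, {"creator": "studio-x", "ai_generated": True})
print(verify_content(video, record))        # untouched media verifies
print(verify_content(b"tampered", record))  # altered media fails
```

The design point is that the signature travels with the content rather than depending on the hosting platform, which is what lets responsibility be checked downstream, wherever the media ends up.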
Such a system would make India not just compliant, but competitive. A model for balancing innovation with integrity.
Globally, regulators are grappling with the same dilemma. The EU’s AI Act mandates watermarking of synthetic content. The US relies on voluntary corporate commitments. China requires government pre-approval for “deep synthesis” media.
India’s challenge is unique. Its digital population is vast, multilingual, and heavily reliant on social media for news. The risk of viral misinformation is therefore exponentially higher. That’s why India must pioneer a third way, one that is neither Silicon Valley’s laissez-faire nor Beijing’s state control: a model that empowers creators, educates users, and enforces accountability.
Democracy runs on trust. And trust is fragile when truth becomes fluid. A manipulated video of a candidate, a cloned voice of a journalist, or a forged government order can undermine public confidence faster than any fact-check can repair it.
The solution isn’t censorship. It’s clarity. Regulate authenticity, not opinion. If India can institutionalise transparency in AI-generated media, it won’t just protect its elections. It will export a model of digital responsibility for the world.
The months that follow will decide whether India builds a framework of trust or a bureaucracy of fear. The government must consult widely, with startups, academics, technologists, and civil society groups, to ensure regulation guides innovation instead of stifling it. This is not a fight against deepfakes alone. It’s a fight for the authenticity of the public sphere. India has a once-in-a-generation opportunity to lead the world in ethical AI governance. The same nation that built Aadhaar to verify identity can now build the rails for verifying truth. Because the challenge of our age isn’t fake news. It’s fake reality. And the solution isn’t fear. It’s trust.
Trust will be the ultimate moat in the age of AI.
The writer is former head of Twitter India