
Express View on AI disclosure draft rules: a move in the right direction

Platforms have a responsibility, regulation needs to be in sync with rights


By: Editorial

October 27, 2025, 07:10 AM IST

AI-generated content has proliferated rapidly in the last few years. Access to free or nominally priced AI models has made it possible for anyone, anywhere in the world, to create content (video clips, images and audio) and upload it to the internet. Indeed, AI fuels some of the fastest-growing YouTube channels. The surge, which will only grow in sophistication, throws up a bewildering array of challenges. AI-generated deepfakes are used to deceive, spread misinformation and facilitate financial fraud. Governments are now taking steps to address these concerns, and these are moves in the right direction. Viewers should be able to differentiate between the real and the artificial.

Last week, the Union government proposed draft rules that call for mandatory labelling of AI-generated content on social media platforms in order to check the "growing misuse of synthetically generated information, including deepfakes". For visual content, the identifier or label should cover "at least 10 per cent of total surface area", while for audio content, it should cover the "initial 10 per cent of its duration". Social media platforms will be required to ask users whether the content is "synthetically generated information". However, they will also have to "deploy reasonable and proportionate technical measures" to verify this themselves, and thus take a more proactive approach to addressing these issues. This effectively puts the onus on the platforms, as it should. Big Tech must be held accountable. Large platforms such as YouTube and X are, after all, backed by companies with significant AI investments: Gemini is an AI model from Google, while Grok is designed by xAI. Big Tech must, therefore, be part of the solution to this ever-changing problem. At the same time, the government is right to raise the bar for takedowns by ensuring that only senior officials can issue such orders: "intimation to intermediaries for removal of unlawful information can only be issued by a senior officer not below the rank of Joint Secretary". However, an avenue for redressal has to be factored into the process.


AI is now inextricably embedded in platforms, and the sheer volume of content on social media warrants greater urgency in addressing the problem of AI misuse. AI can be used to spread information, educate and entertain. But it can also be a tool for misinformation and polarising pursuits. Deepfakes that clone voices can be misused to defraud people. A distinction, therefore, has to be made in the nature of content and the motives for using AI. AI models themselves need to be trained on data that are more inclusive and representative; the challenges will only grow as the models inevitably become more sophisticated. Policy has always played catch-up with technology, so regulation will have to be sharper, smarter and speedier. Rules on disclosure are a good first step, but on the long, twisting road ahead, the next steps are crucial.
