Govt clarifies its AI advisory on elections: How to understand the concerns and criticism around the issue
An earlier government advisory was criticised by some AI startups. Following the clarification, the advisory appears to have been intended more as a tool of political messaging than as a policy statement with consequential legal ramifications.
Rajeev Chandrasekhar, Minister of State for Skill Development and Entrepreneurship and Electronics and Information Technology, posted on Monday about the government's earlier advisory. (Express Photo by Gajendra Yadav)
The clarification has helped calm nerves, but the commentary that followed the publication of the advisory has pointed to some important regulatory concerns.
On Monday (March 4), Minister of State for Electronics and IT Rajeev Chandrasekhar said in a post on X: “(The) advisory is aimed at the significant platforms and permission seeking from MeitY is only for large platforms and will not apply to startups… It is aimed at untested AI platforms from deploying on the Indian Internet.”
He added that the process of seeking permission, labelling of platforms that are under testing, and consent-based disclosure to users is an “insurance policy to platforms who can otherwise be sued by consumers”.
What did the government say in its advisory on AI and elections last week?
With Lok Sabha elections set to be announced soon, the IT Ministry on Friday sent an advisory to generative AI companies like Google and OpenAI, and to those running such platforms, including foundational models and wrappers, saying their services should not generate responses that are illegal under Indian laws or “threaten the integrity of the electoral process”.
Platforms that currently offer “under-testing/unreliable” AI systems or large language models (LLMs) to Indian users must explicitly seek prior permission from the central government, and appropriately label the possible and inherent “fallibility or unreliability of the output generated”.
The advisory was criticised by some AI startup founders, as well as investors in the ecosystem abroad, on grounds of regulatory overreach that could hurt the still nascent industry. Aravind Srinivas, founder of Perplexity AI, said the advisory was a “bad move by India”, and Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, called it a “travesty” that was “anti-innovation” and “anti-public”.
How should this advisory and its criticism be understood?
Following the clarification by the government, the advisory appears to have been intended more as a tool of political messaging than as a policy statement with consequential legal ramifications. The government seems to have tried to signal that it was willing to play an arbiter’s role to protect Indian Internet users from some of the pitfalls of generative AI platforms — even if its effectiveness or legal basis remained unclear.
What seems to have set off many critics was the requirement to seek government approval before deploying “untested” AI services in India. However, very few of those who questioned the government’s intent publicly acknowledged the problems that often accompany such services, and why missing the regulatory bus in a country of India’s size and complexity could prove costly.
As often happens in policymaking, the solution lies in the middle — and both lawmakers and companies will have to be ready for tradeoffs.
While requiring government approval before deploying an AI platform built on problematic underlying data and algorithmic filters carries the risk of arbitrary decision-making by officials, a free-for-all legal environment for AI companies, with no checks and balances, is not desirable either.
The threat from AI-generated deepfakes, which can potentially impact elections, become ammunition for revenge pornography, or help create child sexual abuse material, also needs to be taken into account.
What is the legal basis for the government’s advisory?
Questions have been raised about the legal basis of the advisory — critics have asked under which law the government could issue guidelines to generative AI companies, since India’s current technology laws do not directly cover large language models.
The advisory was sent as a “due diligence” measure that online intermediaries need to follow under the Information Technology Rules, 2021. However, the Rules do not explicitly define LLMs, and the advisory made an attempt to retrofit the law for a use case it was not meant to address.
On Saturday, Chandrasekhar had said the advisory specifically mentions the integrity of the electoral process because Lok Sabha elections are close. “We know that misinformation and deepfakes will be used in the run-up to the election to try and impact or shape the outcome of the elections,” he told The Indian Express.
IT Minister Ashwini Vaishnaw said that the advisory was not a legal framework, but an effort at testing a model that was not yet ready.
“Whether an AI model has been tested or not, proper training has happened or not, is important to ensure the safety of citizens and democracy. That’s why the advisory has been brought… Some people came and said, ‘sorry, we didn’t test the model enough’. That is not right. Social media platforms have to take responsibility for what they are doing,” Vaishnaw said on Monday.
Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.