ON MARCH 1, 2024, just months ahead of Lok Sabha elections, the IT Ministry issued an advisory requiring artificial intelligence (AI) platforms to seek government permission before launching “under-testing, unreliable” services, only to go back on it a fortnight later. While the guidance had drawn public criticism from some AI founders and investors abroad, concerns were raised by India’s tech industry privately as well, according to correspondence reviewed by The Indian Express as part of a Right to Information (RTI) application.
Within a week, on March 7, 2024, the National Association of Software and Service Companies (Nasscom), India’s largest tech industry lobby group, which represents companies such as Meta, Google, Amazon and Microsoft as well as software-as-a-service (SaaS) firms, wrote to the IT Ministry with three key demands: relax applicability norms to limit the scope of the advisory; remove the contentious requirement for AI companies to seek government permission; and drop the need for companies to prepare an action-taken and status report on compliance, documents inspected under RTI showed.
Nasscom and the IT Ministry did not respond to a request for comment.
The advisory was simultaneously facing public backlash from some startups in the generative AI space. Aravind Srinivas, founder of Perplexity AI, had called the advisory a “bad move by India”, while Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, had termed the move a “travesty”, which was “anti-innovation” and “anti-public”.
With such adverse feedback, the government changed tack. In a subsequent advisory on March 15, the IT Ministry made the changes recommended by the industry body and AI founders: it scrapped the controversial provision requiring government permission before rolling out “under-testing, unreliable” AI services, limited the scope of the guidance to some intermediaries, and removed any mention of companies having to prepare a status report.
In its correspondence, Nasscom said the government’s stated aim of companies offering more information to users about their AI services was “being lost” because of the “approach and drafting” of the advisory.
At the heart of the debate was a tussle between lawmakers and tech companies over whether safe harbour protections, which grant platforms legal immunity for the content they host, should extend to generative AI platforms like Gemini and ChatGPT. In India, the tussle began last year, when an AI platform generated responses to a query on various world leaders, including Prime Minister Narendra Modi, which the government felt were uncalled for.
That prompted a broader debate within the government over the lack of India-specific data in these platforms’ underlying models, which can produce responses biased against the country, its politics and its history.
However, its solution of asking companies to seek permission before launching systems was met with pushback from the industry. “The overall objective of the advisory is to empower end users with adequate information so that they make informed decisions while using synthetic media tools. We find this goal is being lost, however, because the approach and drafting of the advisory has sparked concerns and apprehensions across the country,” Nasscom said in its March 7 correspondence.
Demanding that the government “issues a subsequent advisory”, Nasscom said that instead of applying to “all intermediaries”, the guidance “should apply to people making synthetic media tools”.
“…clarify that no explicit permission is mandated from the government of India to make any AI models/software/ algorithm available for public access. Instead, relevant intermediaries will publish reports on capabilities/ limitations of synthetic media tools, provide users, robust controls for providing real-time feedback,” Nasscom submitted.
“Instead of action taken cum status report, the advisory may state that the government of India wishes to convene dialogues with relevant intermediaries with the goal of securing voluntary commitments from them,” the lobby group suggested. The advisory should also clarify that it is “not meant for start-ups, publishers, originators, content creators, B2B providers”.
While the updated guidance did not explicitly state this, former Minister of State for IT Rajeev Chandrasekhar had at the time issued the clarification in media statements.
Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.