With Lok Sabha elections set to be announced soon, the IT Ministry has sent an advisory to generative Artificial Intelligence companies like Google and OpenAI and to those running such platforms — including foundational models and wrappers — that their services should not generate responses that are illegal under Indian laws or “threaten the integrity of the electoral process”.
Platforms that currently offer “under-testing/unreliable” AI systems or large language models to Indian users must explicitly seek permission from the Centre before doing so and appropriately label the possible and inherent “fallibility or unreliability of the output generated”. The government also wants these platforms to build in traceability by adding an identifiable marker to the content they generate, so that it can be traced back to the person who instructed the service to create misinformation or deepfakes.
Google’s AI platform Gemini had recently come under fire from the Ministry of Electronics and Information Technology (MeitY) for answers generated by the platform on a question about Prime Minister Narendra Modi. The Indian Express had earlier reported the government was planning to issue a show cause notice to Google. It also reported about Ola’s beta generative AI offering Krutrim’s hallucinations.
Minister of State for Electronics and IT Rajeev Chandrasekhar said that the advisory is a “signal to the future course of legislative action that India will undertake to rein in generative AI platforms”. He said the requirement for such companies to seek permission from the government will effectively create a sandbox and that the government may seek a demo of their AI platforms including the consent architecture they follow.
The notice was sent to all intermediaries, including Google and OpenAI, on Friday evening. The advisory also applies to all platforms that allow users to create deepfakes; Chandrasekhar said a notice was sent to Adobe too. The companies have been asked to submit an action-taken report within 15 days. The advisory, however, is not legally binding.
The escalation is emblematic of the tussle between lawmakers and tech companies over the future of safe harbour protections for generative AI platforms like Gemini and ChatGPT.
Outputs generated by such platforms depend on a number of factors, including the underlying training data scraped from vast swathes of the Internet and the algorithmic filters added on top of it. A number of errors — called hallucinations — generated by these platforms have been reported across the world, and they can typically be attributed to shortcomings in these factors.
“The use of under-testing / unreliable Artificial Intelligence model(s)/ LLM /Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated,” the advisory said.
The government has also said that AI-generated responses should be labelled or embedded with a permanent unique metadata or identifier to be able to determine the creator or the first originator of any misinformation or a deepfake.
“Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake… is labelled or embedded with a permanent unique metadata or identifier… (to) identify the user of the software,” the advisory added.
“All intermediaries or platforms to ensure that their computer resource do not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/ LLM/ Generative AI, software(s) or algorithm(s),” it said.
Chandrasekhar said the advisory specifically mentions the integrity of the electoral process against the backdrop of the upcoming Lok Sabha elections. “We know that misinformation and deepfakes will be used in the run up to the election to try and impact or shape the outcome of the elections,” he said, responding to a question on whether the advisory went beyond the remit of the existing IT Rules.
“This signals that we are moving to a regime when a lot of rigour is needed before a product is launched. You don’t do that with cars or microprocessors. Why is that for such a transformative tech like AI there are no guardrails between what is in the lab and what goes out to the public,” he added.