The New Delhi declaration has attempted to find a balance between innovation and the risks associated with AI systems. While it is largely upbeat about the economic benefits that AI can bring, it also flags issues around fairness, privacy, and intellectual property rights that will have to be taken into consideration.
What does the GPAI New Delhi declaration on AI say?
“We recognise the rapid pace of improvement in advanced AI systems and their potential to generate economic growth, innovation, and jobs across various sectors as well as to benefit societies,” the declaration said.
The declaration said that a global framework for use of AI should be rooted in democratic values and human rights; safeguarding dignity and well-being; ensuring personal data protection; the protection of applicable intellectual property rights, privacy, and security; fostering innovation; and promoting trustworthy, responsible, sustainable, and human-centred use of AI.
GPAI members also promoted equitable access to critical resources for AI innovation including computing, high-quality diverse datasets, algorithms, software, testbeds, and other AI-relevant resources.
GPAI members also agreed to support AI innovation in the agriculture sector as a new “thematic priority”.
It said that the GPAI will pursue a diverse membership, with a particular focus on low- and middle-income countries to ensure a broad range of expertise, national and regional views, and experiences based on shared values.
Senegal, a current member of the grouping, was elevated to the steering committee of the GPAI.
How does the New Delhi declaration contrast with the Bletchley declaration?
While the GPAI New Delhi declaration addresses the need to tackle AI-related risks, it largely supports innovation in the technology in various sectors, including agriculture and healthcare. The essence of the declaration can be summed up as follows: AI is inherently good and is a catalyst for economic growth, but some harms need to be mitigated along the way.
By contrast, the declaration signed at the UK AI Safety Summit last month put security and safety risks related to AI at the centre of the discussions. At the Bletchley Park meeting, 28 major countries, including the United States, China, Japan, the United Kingdom, France, and India, along with the European Union, signed a declaration saying global action is needed to tackle the potential risks of AI.
The declaration noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy. “Frontier AI” is defined as highly capable foundation generative AI models that could possess dangerous capabilities that can pose severe risks to public safety.
So, has India been changing its position on the regulation of AI?
Even as India looks to unlock the potential economic benefits of AI systems, its own thinking on AI regulation has undergone a significant change — from not considering any legal intervention to regulate AI in the country just a few months ago, to now moving in the direction of actively formulating regulations based on a “risk-based, user-harm” approach.
At the inaugural session of the GPAI Summit on Tuesday, Prime Minister Narendra Modi flagged the dual potential of AI — while it can be the 21st century’s biggest development tool, it can also potentially play a very destructive role — and called for a global framework that will provide guardrails and ensure its responsible use.
In April, the Ministry of Electronics and IT had said it was not considering any law to regulate the AI sector. Union IT Minister Ashwini Vaishnaw had said that although AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.
However, after deepfakes of a number of popular personalities got mainstream traction, the IT Ministry began to talk of a concrete legislative step to tackle AI-based misinformation. Vaishnaw said that it could either be a new law, or an amendment to existing rules.
Part of this shift was also reflected in a consultation paper floated by the telecommunications regulator, the Telecom Regulatory Authority of India (TRAI), in July, which said that the Centre should set up a domestic statutory authority to regulate AI in India through the lens of a “risk-based framework”. The paper also called for collaboration with international agencies and the governments of other countries to form a global agency for the “responsible use” of AI.