The Global Partnership on Artificial Intelligence (GPAI) Summit begins in New Delhi on Tuesday (December 12), with Prime Minister Narendra Modi inaugurating the event. India is negotiating with the 28 other member countries to arrive at a consensus on a declaration document covering the proper use of AI, the guardrails for the technology, and how it can be democratised.
“The world’s thinking on AI is converging. People understand the potential, look to the benefits which can come, and understand the dangers and put certain guardrails. There is convergence on how AI should be treated going forward,” IT Minister Ashwini Vaishnaw said Monday.
“There will be regulatory aspects that are in line with past agreements and declarations. The thinking process of GPAI will be in line with global ideas. We are negotiating a declaration document at the end of GPAI 2023, which we hope we’ll be able to arrive at through consensus,” he added.
What is the Global Partnership on AI and what could be in the declaration?
India is a founding member of GPAI, having joined the multi-stakeholder initiative in June 2020. The initiative aims “to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.”
It also brings together people involved in the fields of science, industry, and civil society, along with governments, international organisations and academia for greater international cooperation. The first three GPAI summits were held in Montreal, Paris and Tokyo, respectively.
While details of the final declaration are not completely known, Vaishnaw said that the GPAI declaration will have two aspects highlighting India’s stance around AI.
“The first is to evaluate the use of AI in sustainable agriculture, adding to the previous GPAI themes including healthcare, climate action and building a resilient society. The second is on collaborative AI — in line with our DPI approach.”
The Indian Express had earlier reported how India wants to take the Digital Public Infrastructure or DPI approach with AI, where it aims to build underlying systems – both databases and compute capacity – for facilitating the spread of AI. It is an approach the country has taken with the biometric identity programme Aadhaar and payments solution Unified Payments Interface (UPI).
A ministers’ declaration was signed in Tokyo last year, when Japan hosted the GPAI summit. The Tokyo Declaration opposed the unlawful and irresponsible use of artificial intelligence and other technologies.
What has been the global conversation around regulating AI so far?
Recently, the EU reached a deal on the AI Act, which introduces safeguards on the use of AI, including clear guardrails on its adoption by law enforcement agencies, and empowers consumers to launch complaints against any perceived violations. The deal includes strong restrictions on facial recognition technology and on using AI to manipulate human behaviour, alongside provisions for tough penalties for companies breaking the rules.
Last month, the UK hosted the AI Safety Summit, where 28 countries, including the United States, China, Japan, the United Kingdom, France and India, along with the European Union, agreed to sign a declaration saying global action is needed to tackle the potential risks of AI.
The declaration incorporates an acknowledgement of the substantial risks from potential intentional misuse or unintended issues of control of frontier AI — especially cybersecurity, biotechnology, and disinformation risks. It also noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy.
Before that, the United States issued an executive order aimed at safeguarding against threats posed by AI and exerting oversight over the safety benchmarks used by companies to evaluate generative AI bots such as ChatGPT and Google Bard. The order was seen as a vital first step by the Biden Administration towards regulating rapidly advancing AI technology.