
Before India-AI Impact Summit 2026, a hard question: Who gets a say in AI governance?

With the India-AI Impact Summit 2026 nearing, early discussions among civil society raise key questions around AI governance, concentration of power, and environmental impact.

As AI development accelerates globally, disparities in industrial policy and AI governance are raising complex questions. (Image for representation: Freepik)

Ahead of India hosting the AI Impact Summit 2026 next month, a series of pre-Summit discussions among government officials, policymakers, industry representatives, and civil society groups has surfaced broad questions about how artificial intelligence (AI) should be governed as its economic, geopolitical, and environmental impact comes into sharper focus.

The India-AI Impact Summit 2026 will be held in New Delhi from February 15 to 20, with the forum coming to the Global South for the first time. It will be the fourth iteration of the Summit, which has so far been held in the United Kingdom (Bletchley Park), South Korea (Seoul), and France (Paris). Prime Minister Narendra Modi will inaugurate the event, and is also likely to host a dinner and address a CEO Roundtable.

French President Emmanuel Macron will attend the AI Impact Summit as part of a state visit to India, joining heads of government from some 15 to 20 countries as well as key global figures in AI and tech.

Macron’s attendance was confirmed during a closed-door meeting held at the French Embassy in New Delhi last week. The meeting was organised by digital rights non-profit Access Now and the Embassy of France in collaboration with the Embassy of the Kingdom of the Netherlands.

The high-level exchange, held under the Chatham House Rule, brought together experts from policy, academia, and government to reflect on the shortcomings of the previous AI summits while examining broader challenges around multi-stakeholder governance, concentration of power, regulatory gaps, and environmental costs.

Major themes of the discussion

Voluntary AI safety commitments have fallen short

An agreement reached with AI companies at the 2023 Bletchley Park AI Summit was intended to give AI Safety Institutes access to test AI models before they were released, but the commitments were not made legally binding and have since fallen short, according to one speaker.

“The way developers test models is also very different from how independent parties do it because the priorities are different. AI companies are going to prioritise reputational harm over addressing societal harms posed by the model. There is an inherent conflict of interest that was either ignored or proved to be not sufficient enough,” they said.


Another speaker pointed out that global AI safety debates have undergone a structural shift, moving beyond the technical and ethical framing that dominated previous iterations of the Summit to a broader policy conversation shaped by economic strategy and geopolitical competition.

Several participants agreed that the earlier focus on catastrophic AI risks distracted attention from immediate, real-world harms. They also raised concerns that early UK initiatives, such as the AI Safety Institutes, may have been shaped too heavily by industry interests.

False dichotomy between AI regulation and innovation

AI is being viewed through the lens of ‘regulation kills innovation’ but that is a false dichotomy, one speaker said. “During the Paris Summit, the dominant political narrative became AI for development and AI for innovation. The Summit brought de-regulation back to the table. We as a society need to ask ourselves what we mean by innovation. How are we defining innovation?” they said.

While AI is increasingly perceived as a disruptive force that increases market competition, speakers argued it will instead entrench dominance, with three or four big tech players holding all the power and exercising it to the detriment of fundamental rights.


Others questioned the purpose of these Summits, as they are not part of any rules-based engagement and do not reflect ground realities. In India's case, there is heavy focus on building the country as a use-case capital for AI without necessarily considering the impact of AI healthcare applications, for instance, on its overburdened health systems. Nor does this framing account for the reality of setting up data centres in drought-stricken areas of the country.

Balancing innovation and regulation will continue to be tricky as AI is a race whether you choose to run or not, one speaker said, and no one wants to lose the race.

Industrial policy needs to be aligned with AI governance principles

As AI development accelerates globally, disparities in industrial policy and AI governance are raising complex questions. Some countries are moving toward deregulation, while others maintain stricter oversight, creating uneven access to technology, finance, and talent.

When asked how this will play out in the future, one speaker noted, “There has been an extractive model, especially from the Global North, despite the Global South accounting for nearly 85–88% of the world’s population.” China has emerged as an outlier, challenging US dominance in AI, but most countries still have to navigate incentives, tariffs, and industrial licensing. “Technology is global, policy is local. But today, countries themselves are shaping not just policy, but the type of technology that can be developed within their borders,” they said.


The bigger challenge is that, regardless of which country is acting or which company benefits or loses out, these policies are often implemented without transparency and end up effectively picking winners in advance. This approach is unlikely to deliver broad, equitable benefits, the speaker added.

Recommendations: Dos and Don’ts

Dos

– Prevent market concentration across supply chains: A speaker noted that big AI companies were using the same anti-competitive playbook as big tech companies. Another speaker recommended sharp-edged competition interventions that regulate big tech at the infrastructure level. These could include preventing cloud providers from participating in the market for foundational AI models, requiring big tech companies to divest their cloud offerings, and mandating ex-ante reviews of acquisitions involving hyperscalers.

– Take a human rights approach to AI: Citing the 2018 Toronto Declaration on protecting human rights in the age of AI, another speaker said most AI ethics frameworks today avoid using the term 'human rights'. Right-wing actors are also known to attack any conversation around ESG frameworks. Cybersecurity could also provide a useful framework for approaching AI regulation, as it examines the various elements of a system, the responsibilities involved, and more.

– Lower barriers to participating in AI governance structures: Governments have to make sure that smaller, public interest communities are in the room, and should not depend solely on the expertise of the tech industry when framing AI policies, one speaker said. Multi-stakeholder governance models often ignore the reality that stakeholders are not all alike and do not have the same voice, the speaker added.


Don’ts

– Don’t repeat social media-era mistakes in regulation: Fifteen years ago, social media platforms such as Facebook and Twitter were seen as tools for democracy. Today, the public debate across Europe, Australia, and other countries is whether these platforms should be banned, given the threats they pose to electoral processes and children’s mental health. The same situation should not be replicated with generative AI, where regulatory action comes too late and control is lost, leaving no option but bans.

– Don’t dilute data protection frameworks: “Don’t let what already exists be washed away. Many of us in digital rights and policy are saying, look at what we have already achieved. At least defend it, understand its value. What I mean is that in countries with data protection and privacy regulations, AI should not mean that all of this is wiped away,” another speaker said.

– Don’t freeze out public interest communities: “If you ask social movements today whether they have a voice in AI policy, the answer is no, with only rare exceptions. Even in areas like climate change, there is greater recognition of the need to engage with a wider community than there is in AI,” one speaker remarked. They further said that public interest organisations must ask themselves what they are bringing to the table, especially when every policymaker wants to hear something positive about AI.

Karan Mahadik is a Tech Correspondent for The Indian Express based in Delhi-NCR, specialising in the intersection of technology and public policy. Before joining the organisation, he reported on tech policy at MediaNama and The Quint, and he authors The Smart Prompt, a weekly newsletter on AI developments.
