Opinion: Regulating AI is already proving difficult
To navigate the complexities of the AI regulatory landscape, establishing an international governance and regulatory framework is essential.
AI is more than just a technological revolution; it involves global scientific, economic, cultural, political, and civic challenges. Addressing these issues requires extensive international cooperation and dialogue to establish standards and solutions for AI.
At the international level, several initiatives have emerged, such as the Montreal Declaration on Responsible AI (2018), the G7 Hiroshima AI Process (2023), the Bletchley Park Summit (2023), the United Nations AI for Good Global Summit (2024), and the AI Action Summit (2025). Together, these initiatives form a crowded landscape in which the framework of one may undermine the objectives of another.
Many of the leading countries in AI innovation are focused on securing global dominance in the field. The United States passed the National AI Initiative Act in 2020 to maintain its leadership in AI research and development, while promoting widespread adoption across various industries. Similarly, China introduced its Next Generation AI Development Plan in 2017, with the goal of becoming an AI superpower by 2030. In 2021, the UK launched its National AI Strategy, aiming to position itself as a global leader in AI. However, the race to become an AI superpower could overshadow other critical issues, such as the development of ethical frameworks and a universally accepted governance system for AI.
In many developing countries, the focus is not on achieving global dominance in AI, but on leveraging AI to address local challenges, improve public service delivery, strengthen healthcare systems, stimulate economic growth, generate employment, and safeguard the environment. Countries like India, Brazil, and South Africa have also established policy frameworks to govern AI. In India, NITI Aayog has created the National Strategy for Artificial Intelligence, which focuses on leveraging AI for economic development and social progress, and on supporting other emerging economies. Thus, there is a north-south divide in national AI strategies.
In several advanced countries, laws are being developed to regulate AI, though these efforts often reflect a fragmented approach. The European Union and the United States stand out in this regard. The EU is leading the way with its AI Act, which entered into force in 2024. The Act aims to establish a comprehensive framework for the development and free movement of AI-based goods and services while ensuring alignment with core values. It bans certain AI systems outright: technologies that manipulate human behaviour to provoke harmful decisions, use controversial social scoring, or exploit vulnerable groups, among other restrictions. High-risk AI systems that remain permissible must comply with a detailed risk management protocol.
The EU’s AI Act has its own limitations, which suggest it may not serve as a universal standard for other countries. While it addresses issues like bias, it remains unclear how bias should be defined or measured. For AI systems that fall outside the high-risk category, the Act promotes voluntary compliance based on technical specifications and standards, an area that is still under development.
In the United States, several regulatory initiatives have been introduced to address concerns surrounding AI and its potential misuse. For instance, the proposed Algorithmic Accountability Act aims to address issues related to biases, transparency, and accountability in AI systems. Additionally, the proposed Facial Recognition and Biometric Technology Moratorium Act seeks to safeguard individual rights by imposing restrictions on biometric surveillance by federal, state, and local government agencies.
The regulatory landscape even within the US becomes more complex as many states have introduced their own frameworks. For example, California has pursued its own algorithmic accountability legislation, while Illinois has updated its existing laws to address new developments in AI. However, these laws are not comprehensive and often fail to provide clear answers to critical issues, mainly due to the lack of universally accepted standards.
The lack of universally accepted standards is leading to inconsistencies in judicial interpretations, particularly on issues such as liability (like determining who is responsible for damages caused by AI), antitrust laws, and intellectual property rights. For example, on the question of whether AI systems can collaborate to manipulate markets or prices, the European Commission and the US Federal Trade Commission have taken different stances. Similarly, there is a global debate over whether AI can be recognised as an inventor for patent purposes; patent offices and courts in the US, the UK, and the EU, for instance, have rejected applications naming the AI system DABUS as an inventor. The traditional view, which holds that only humans can be credited as inventors, is being increasingly challenged. If AI were granted inventor status, it would raise further concerns about licensing, transferability, and commercialisation.
To navigate the complexities of the AI regulatory landscape, establishing an international governance and regulatory framework is essential. This remains critical even amid the challenges of deglobalisation, as recent geopolitical trends push toward reduced global integration in certain areas. AI, however, is a truly global technology that transcends national borders.
Such a framework could play a pivotal role in setting norms and standards for AI. With AI projected to grow at a compound annual growth rate (CAGR) of 29 per cent from 2025 to 2030, potentially contributing up to 14 per cent of global GDP, a global framework can help provide stability in this rapidly growing market. It would offer clearer guidelines to tackle risks such as manipulation, disinformation, lack of trust, invasive surveillance, privacy breaches, and more. Most importantly, it would foster fair competition in the global AI race and address critical issues like control over vital AI technologies and ensuring equitable access to AI for all.
The writer is Chief Controller of Accounts, Ministry of Law and Justice, Supreme Court of India and Ministry of Corporate Affairs. Views are personal