
Opinion The politics and geopolitics of AI governance

An international organisation to deal with emerging, strategic technology is bound to come up against domestic interests and superpower rivalries. The solution may lie in pre-existing sector-specific standard-setting bodies

As per the Artificial Intelligence Index Report 2022, AI-related legislation jumped from one law in 2016 to 18 in 2022, across 25 nations. (Representational Photo)

Nayan Chandra Mishra

January 4, 2024 02:37 PM IST

India recently concluded the fourth edition of the Global Partnership on Artificial Intelligence (GPAI) summit with the unanimous endorsement of the New Delhi Declaration, which underscored the need to balance innovation against the risks associated with Artificial Intelligence (AI) systems. It is one of several inter-governmental summits held in recent months to promote a similar idea.

Meanwhile, experts have begun arguing for a single international organisation that would move beyond these general partnerships and promote inclusivity in the policies and legislation passed by countries. The debate is pertinent against the backdrop of a meteoric rise in AI-related legislation in recent years: as per the Artificial Intelligence Index Report 2022, the number of AI-related laws passed jumped from one in 2016 to 18 in 2022, across 25 nations. However, since the application of AI demonstrably transcends national and supranational boundaries, questions arise about the efficacy of purely domestic legislation. Moreover, the lack of a centralised transboundary governance regime can push countries, especially underdeveloped ones, to lower their standards in order to remain competitive, producing a regulatory race to the bottom.


Against this backdrop, Gary Marcus and Anka Reuel recently suggested the creation of an International Agency for AI (IAAI). The IAAI's purpose would be to find "governance and technical solutions to promote safe, secure and peaceful AI technologies". It would ensure that domestic rules rest on fundamental principles accepted by all nations: broad themes of safety and reliability, transparency, explainability, interpretability, privacy, accountability and fairness. Other organisations, such as an International AI Organisation (IAIO), an Emerging Technology Coalition and an International Academy for AI Law and Regulation, have been proposed along similar lines.

However, the feasibility of any such organisation implementing principles on the ground is marred by geopolitical difficulties. Moreover, given that private industry is the major driver of AI innovation, it is still vying for self-regulation as the best way forward. This article focuses on the state-centric aspects, where consensus-building institutions face an essential dilemma: whether to prioritise the inclusion of more members or the mission and principles they seek to accomplish.

In a polarised world, serious doubt is cast on the effectiveness of any such organisation that requires nations to get involved. Today, major superpowers have divergent agendas on AI, and every state seeks dominance over it. It is, for instance, quite unimaginable to bring Russia, China and the US to a single table with other nations and build measures that are effective and uniformly implemented. Consequently, all the major initiatives exclude China, a major player in the AI revolution. At the same time, others argue that including China in such institutions would ultimately disrupt their functioning, given its history of violating commitments made to international organisations, including the World Trade Organisation (WTO) and the International Telecommunication Union (ITU).

The race for leadership in AI has also led to divisions within existing blocs. While it is true that almost all the major initiatives are dominated by Western nations, an AI race is brewing within that bloc to reshape the power play of the future. For instance, President Joe Biden, while signing an AI Executive Order, said, "America will lead the way during this period of technological change". Similarly, the EU is aggressively drafting AI regulations, while the UK has unveiled its own policy to shape AI's trajectory.


This reflects a concerning trend: an organisation comprising nations with varied interests means slow consensus, and even that consensus is a compromise built on lowest-common-denominator principles acceptable to all. And if the consensus is not binding, state compliance will be put on the backburner and the focus will shift to domestic actions. Most current initiatives fall into this category. Moreover, since state-led guidelines and rules have historically failed to keep pace with changing technology, by the time they emerge from a lengthy process, AI innovation may already have outpaced their effectiveness. A case in point is arms-control regimes, such as the Treaty on the Non-Proliferation of Nuclear Weapons and the Missile Technology Control Regime, which have continuously struggled to keep pace with technological change.

Certain principles might plausibly be recognised, but these are general and ever-shifting. What constitutes peace, transparency and equality may be read differently by different states, and so building interoperability would not be actionable. This would lead to the creation of multiple groups, causing conflict among states' policies and disparities in integrating action plans globally. A case in point is the Council of Europe's (CoE) Cybercrime Convention (the Budapest Convention on Cybercrime), the only legally binding multilateral treaty coordinating cybercrime investigations between nation-states; it also criminalises certain cybercrimes. Yet major countries, including India, China and Russia, declined to sign the convention because data sharing with foreign investigative agencies (Article 32b), particularly Western ones, would directly affect their sovereignty and domestic laws. Though it has proven beneficial to EU member-states, which already have an established process, there is substantial distrust among other states, who see it as neither equitable nor balanced in its approach.

So, what could be a solution that leapfrogs these political dimensions and enjoys a higher acceptance rate among varied interest groups? A plausible one rests on the observation that AI applies across sectors and jurisdictions. It would therefore be efficient to encourage existing sector-specific institutions, especially standard-setting ones, to develop the governing principles of AI in their respective industries.

Known as Standard-Setting Organisations (SSOs), these are established bodies that develop standards and rules that are interoperable, safe and uniform internationally. They include the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA), the International Organisation for Standardisation (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU), among others. Their standards, though voluntary, have historically been adopted by most companies and countries, and these bodies are also leading the development of AI standards in their respective areas.

For instance, IEEE's AI standardisation processes are part of its Global Initiative on Ethics of Autonomous and Intelligent Systems. Meanwhile, the ITU has a Focus Group on Machine Learning for Future Networks focused on telecommunications. After the 2018 AI for Good Global Summit, it even created a Focus Group on AI for Health, "which aims inter alia to create standardised benchmarks to evaluate Artificial Intelligence algorithms used in healthcare applications."

Although SSOs take a "soft-law" approach with voluntary compliance mechanisms, most of their standards gain legitimacy through support from diverse public and private groups. The ISO/IEC 27701 standard on privacy information management, for instance, though voluntary, has had a global impact, with states adopting its principles and practices into their laws while adapting them to local conditions and policy priorities. This reflects that such standards have a higher chance of actually being incorporated into legislation and into the functioning of private companies, maintaining coherence across sectors and jurisdictions.

That is why the principles of global AI governance should stem from existing bodies that have already set a precedent. Eventually, new organisations can incorporate these standards into their frameworks to harmonise AI principles without causing disruption. This approach offers political flexibility, cross-jurisdictional cohesion of action plans and implementation mechanisms that are immediately available at this early stage of AI's development.

The journey toward effective global AI governance is undoubtedly challenging. But the pathway of building upon existing foundations holds promise for a future where principles are not only recognised but also seamlessly integrated into the fabric of technological advancements.

The writer is Research Assistant to C Raja Mohan and is currently pursuing Law at Dr Ram Manohar Lohiya National Law University, Lucknow
