The European Union (EU) carves out a unique niche in the global landscape. While it may not possess the raw military might of the US or the economic muscle of China, the EU exerts significant influence through a phenomenon known as the Brussels Effect. This concept highlights the EU’s ability to shape global standards through its own regulatory framework, bypassing traditional methods of international collaboration and global governance. However, as AI assumes critical importance in the global landscape, there are active efforts to contain the influence of the Brussels Effect, originating from Washington, London, and numerous emerging countries.
The Brussels Effect hinges on the EU’s status as a regulatory power. The EU sets high standards within its vast internal market, which, in turn, creates a powerful incentive for multinational corporations to comply with these regulations across their global operations. Adhering to a single set of standards throughout their business streamlines operations and reduces administrative burdens. This adoption by corporations leads to a gradual Europeanisation of global commerce, reflecting European priorities in areas like data privacy, cybersecurity, product safety, financial services, intellectual property, climate change, and environmental protection. The Brussels Effect operates subtly, relying on market forces rather than geopolitical and economic coercion. It represents a novel form of soft power in the 21st century, where the EU’s influence stems from the attractiveness and effectiveness of its regulations.
AI regulation presents a significant test for the influence of the Brussels Effect. The European Parliament voted overwhelmingly in favour of the AI Act, which aims to establish comprehensive regulations covering a wide spectrum of AI applications. The Act is a direct response to the rapid development and deployment of AI, which has prompted governments worldwide to seek measures to address its potential pitfalls. As the AI debate intensifies among intellectuals, policymakers, technologists, and citizens around the globe, concerns grow over digital discrimination, job displacement, the future of warfare, and the overall trajectory of humanity. Building on the foundational principles of the Brussels Effect, the EU’s AI Act stands out for its breadth, its legal significance, and the size of the European consumer market, all of which underscore its importance in shaping the future of AI development and deployment. Encompassing areas such as facial recognition systems, biometric data collection, large language models, training data, and testing requirements, the Act has a broad scope and mandate.
The EU envisions the AI Act as another manifestation of the Brussels Effect, asserting the Union’s regulatory influence. As with the General Data Protection Regulation (GDPR), Brussels hopes the AI Act can serve as a blueprint for other nations navigating the complexities of AI regulation and seeking to mitigate its political, security, economic, and social risks. Unlike the GDPR, however, the EU’s approach to AI regulation is not a solitary effort. Over the past several years, momentum has been gathering in the international community toward developing effective and comprehensive artificial intelligence governance standards. From the G7 Hiroshima Process to the UK AI Safety Summit, governments across the world have started to meaningfully discuss what must be done from a regulatory standpoint to mitigate AI’s potential harms and harness its benefits.
Even the historically regulation-averse US has begun making inroads into shaping the evolving regulatory landscape for AI development. On October 30, 2023, the White House issued its AI Executive Order (EO) outlining the Biden administration’s strategy for responsible AI development in the US. This EO, covering areas such as safety standards, privacy, equity, innovation, and global leadership, marks a significant step towards solidifying the US position in AI. However, the EO’s reach is limited because it lacks legal enforcement power.
Nevertheless, it improves US credibility regarding AI regulations on the international stage and opens doors for collaboration with global partners like the G7 and G20, as well as the broader emerging markets. The EO holds particular significance because it allows Washington, demonstrably taking a thoughtful and deliberate approach, to navigate around more restrictive and faster-moving regulatory frameworks, especially those emanating from Brussels.
Furthermore, the United Kingdom, home to nearly twice as many AI companies as any other European nation, has sought to position itself as a leader in these AI discussions and likely viewed its AI Safety Summit as an opportunity to do so. In the wake of the 2023 UN General Assembly’s inability to meaningfully discuss broader AI policy, the UK AI Safety Summit proved a tangible effort towards developing global AI policy standards. China’s attendance, a rarity for global AI governance meetings, lent the Summit gravity and legitimacy.
The EU AI Act, the US Executive Order, and even the G7 Code of Conduct, along with summits such as the UK AI Safety Summit and others, collectively contribute to shaping the AI landscape. This doesn’t contradict the thesis of the Brussels Effect; rather, it further validates it. Nations, particularly the United States, recognise the power of regulations as integral to 21st-century statecraft.
Leveraging its status as an AI superpower with unparalleled capabilities in AI development and deployment, the US understands the importance of regulation. This slowly emerging realisation could eventually lead to the partial containment of the Brussels Effect of the EU AI Act, and in the long term its influence could extend beyond AI altogether.
The writer is a director at the Middle East Institute in Washington, a member of McLarty Associates, and a visiting fellow at Third Way