After years of discussion and debate, the European Union has finally unveiled its comprehensive rulebook for governing the use of AI technologies. The first of its kind globally, the regulation is set to have far-reaching implications not just within the EU; it could also set a precedent for other countries grappling with the challenges posed by the rapid advancement of AI.
Let’s break the AI Act down and see what its rules entail.
At its core, the AI Act aims to strike a delicate balance between fostering innovation in the field and safeguarding fundamental rights, democracy, and environmental sustainability. To achieve this, it takes a risk-based approach, imposing varying levels of obligations based on the potential impact and risks posed by different AI applications.
For example, creators of high-risk AI products must conduct risk assessments to ensure their products comply with the law before release.
First up, the Act outright bans certain AI practices deemed too risky or unethical. This includes biometric categorisation systems that sort people by sensitive traits like race or gender, as well as the untargeted scraping of facial images to create facial recognition databases (a win for privacy advocates). Also on the chopping block are AI systems used for social scoring, manipulating human behavior, and exploiting vulnerabilities.
Another area receiving attention is the use of AI by law enforcement agencies. The Act prohibits the use of biometric identification systems by police “in principle”, but carves out exceptions for specific cases.
“Real-time RBI [remote biometric identification] can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack,” states the official press release.
For AI applications deemed “high-risk” due to their potential impact on areas like health, safety, fundamental rights, or the environment, the Act lays out a set of obligations. These systems must undergo risk assessments, maintain detailed logs, ensure transparency and accuracy, and, perhaps most importantly, operate under meaningful human oversight.
And in a nod to consumer protection, the Act grants EU citizens the right to submit complaints about AI systems and receive explanations for decisions that impact their rights.
Under the Act, general-purpose AI systems and the models they’re built on must meet specific transparency requirements, including complying with EU copyright law and publishing detailed summaries of the training data used. And the most powerful models that carry potential “systemic risks” face additional safeguards, like mandatory model evaluations and incident reporting.
The Act also aims to tackle the issue of deepfakes by making it mandatory to label when content has been artificially generated or altered.
The European Union has pretty much made a habit of being the global trendsetter for regulating emerging technologies and industries while other countries play catch-up. The US and China are already scrambling to come up with their own sets of AI rules, which is crucial considering most of the AI tools in use today originate from those two countries.
Last October, US President Joe Biden signed an executive order that’ll likely be followed by actual legislation and international agreements. At least seven US states are already working on their own AI laws. Meanwhile, China’s Xi Jinping has put forward a “Global AI Governance Initiative” for ethical AI development and use.
India is also working on its own AI regulation framework, Minister Rajeev Chandrasekhar recently announced, and it will likely be out in June or July 2024. Many more countries, like Japan, Brazil, and the UAE, have either adopted soft laws or announced upcoming regulations. Bottom line: everyone’s looking to establish guardrails for artificial intelligence – and they’re sure to take cues from the EU’s AI Act.
The EU AI Act is expected to officially become law sometime in May or June this year, after clearing a few final hurdles including member country approvals. The provisions will then start kicking in over time.
The ban on prohibited AI practices like social scoring kicks in six months after the law takes effect. Rules for general-purpose AI like chatbots apply after a year. Companies can face fines of up to €35 million or 7% of global revenue for breaching the rules. By mid-2026, the complete regulatory regime for high-risk AI systems should be in force. Each EU country will also have its own AI watchdog to investigate citizen complaints of violations.
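To put those penalty figures in perspective, here’s a minimal sketch of how the fine ceiling scales with company size, assuming the commonly reported “whichever is higher” reading of the top penalty tier (the exact tier under the Act depends on the type of violation, so treat this as illustrative only):

```python
# Illustrative only: estimates the maximum fine under the AI Act's top
# penalty tier, assuming the ceiling is the HIGHER of a €35M fixed cap
# or 7% of global annual revenue (the applicable tier varies by violation).

def max_fine_eur(global_revenue_eur: float) -> float:
    FIXED_CAP_EUR = 35_000_000   # €35 million fixed ceiling
    REVENUE_SHARE = 0.07         # 7% of global annual revenue
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_revenue_eur)

# A company with €2 billion in global revenue faces a ceiling of
# €140 million (7% of revenue), well above the €35 million fixed cap.
print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # -> €140,000,000
```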
The EU AI Act is facing some criticism over its scope. The initial aim was to curb the most dangerous uses of AI, like biometric surveillance. But experts have argued that the Act isn’t as far-reaching as expected and that there are too many exemptions that could enable harmful AI deployment. A widely shared example, pointed out by Access Now, is Article 6(3), which AI developers could use to declare their products lower risk and skirt some of the rules.
Some believe big tech lobbying has diluted the regulation’s power. For instance, OpenAI reportedly persuaded lawmakers to tweak wording in an earlier draft of the Act in its favor, according to Time magazine.