
New EU draft rules require AI giants to disclose partial training data, testing details

The EU’s draft Code also requires companies to bring in outside experts for independent testing and risk assessment of general purpose AI tools.

ChatGPT maker OpenAI has also called for more government funding for artificial intelligence. (Image: Pixabay)

The European Union (EU) has released draft rules to operationalise its landmark AI Act that went into effect on August 1, 2024.

The draft document published by the bloc on Thursday, November 14, lays out a Code of Practice for companies looking to roll out general purpose AI models. The bloc has invited stakeholders to submit feedback on the draft Code, which is expected to be finalised by May next year.

General purpose AI models (GPAIs) are advanced models trained using a cumulative amount of computing power greater than 10²⁵ FLOPs (floating-point operations). The AI models released by OpenAI, Google, Meta, Anthropic, Mistral, and other similar AI players are expected to fall under this category.
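
For context, a commonly used back-of-the-envelope estimate puts training compute at roughly six floating-point operations per model parameter per training token. The short Python sketch below uses that approximation, with purely hypothetical figures, to illustrate how a model might be measured against the 10²⁵ FLOPs threshold; the rule of thumb is an assumption, not part of the AI Act itself.

THRESHOLD_FLOPS = 1e25  # the AI Act's compute threshold for GPAIs

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    # Common approximation: ~6 floating-point operations per parameter per training token
    return 6 * num_parameters * num_training_tokens

# Hypothetical model: 500 billion parameters trained on 10 trillion tokens
flops = estimated_training_flops(500e9, 10e12)
print(f"{flops:.1e} FLOPs; over threshold: {flops > THRESHOLD_FLOPS}")  # 3.0e+25; True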

What does the EU’s draft AI Code of Practice say?

The draft document is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid paying penalties.

The 36-page draft focuses on the following core areas for companies developing GPAIs:

– Transparency
– Copyright compliance
– Risk assessment
– Technical and governance risk mitigation

It lays out guidelines that look to enable greater transparency in what goes into developing GPAIs.

The risk assessment provision of the draft Code focuses on preventing cyber attacks, large-scale discrimination, nuclear risks, and widespread misinformation risks as well as the risk of “losing control” of powerful autonomous AI models.


Provisions related to the safeguarding of AI model data, access controls, and efficiency reassessments are also included in the draft Code.

What are some of the obligations for AI companies?

As per the draft Code, AI companies are required to only use web crawlers “that read and follow instructions expressed in accordance with the Robot Exclusion Protocol (robots.txt).”

This proposed rule comes after reports of AI companies such as Perplexity and Anthropic ignoring the decades-old web standard, which lets website owners signal that their pages should not be scraped by AI tools or indexed by an AI search engine without permission.
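
In practice, honouring the Robots Exclusion Protocol means a crawler checks a site’s robots.txt file before fetching a page. The sketch below uses Python’s standard-library parser; the site URL and crawler name are illustrative examples, not taken from the draft Code.

from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # download and parse the site's robots.txt

user_agent = "ExampleAICrawler"  # hypothetical crawler name
url = "https://example.com/articles/some-page"

if robots.can_fetch(user_agent, url):
    print("robots.txt allows fetching", url)
else:
    print("robots.txt disallows fetching", url)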

As part of transparency efforts, companies are required to release detailed information about the general purpose AI models, including “information on data used for training, testing and validation” and the results of the testing processes that the AI models were subjected to.


They are also required to set up a Safety and Security Framework (SSF) that “shall detail the risk management policies they adhere to in order to proactively assess and proportionately mitigate systemic risks from their general-purpose AI models with systemic risks.”

The rules state that companies need to update the SSF with the systemic risks posed by their general purpose AI models at multiple stages of the model lifecycle: before training, during training, during deployment, and through post-deployment monitoring.

The governance section of the draft Code proposes to place accountability for systemic AI risks on the executive and board levels of companies. It also requires them to bring in outside experts to “enable meaningful independent testing” and “meaningful independent expert risk and mitigation assessment” of general purpose AI models.

Companies found to be in non-compliance with the EU’s AI Act could incur hefty penalties of up to €35 million (currently Rs 312 crore approx.) or seven per cent of their global annual turnover, whichever is higher.
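
Expressed as arithmetic, the cap is simply the larger of the two figures; the turnover used in the sketch below is hypothetical.

def maximum_fine(annual_turnover_eur: float) -> float:
    # Whichever is higher: a flat 35 million euros or 7% of worldwide annual turnover
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(maximum_fine(2_000_000_000))  # e.g. €2 billion turnover -> 140,000,000.0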
