
Opinion Express View on EU's AI law: Strive for an intelligent balance

EU’s AI Act could guide framing of similar laws elsewhere. For India, challenge is to address risks without stifling innovation


By: Editorial

March 18, 2024 08:15 AM IST

Last week, lawmakers in the European Parliament voted overwhelmingly in favour of the Artificial Intelligence Act, putting the landmark legislation on track to take effect by the end of the year. Governments across the world, including Japan, Brazil and the US, where President Joe Biden signed an executive order on AI safety on October 30, are moving to put up guardrails. But the European Union's new law is the first comprehensive framework for governing a technology that has seen explosive growth in recent years, dominating headlines and stoking both excitement and fear about the future.

Taking a horizontal, risk-based approach that will apply across sectors of AI development, the EU AI Act classifies the technology into four categories: Prohibited, high-risk, limited-risk and minimal-risk. Systems that violate or threaten human rights through, for example, social scoring — creating “risk” profiles of people based on “desirable” or “undesirable” behaviour — or mass surveillance are banned outright.


High-risk systems, which have a significant impact on people's lives and rights, such as those used for biometric identification or in education, health and law enforcement, will have to meet strict requirements, including human oversight, security and conformity assessments, before they can be put on the market. Systems involving user interaction, like chatbots and image-generation programmes, are classified as limited-risk and are required to inform users that they are interacting with AI and allow them to opt out.

The most widely used systems, which pose no or negligible risk, such as spam filters and smart appliances, are categorised as minimal-risk. They will be exempt from regulation, but will need to comply with existing laws.

Like the 2016 General Data Protection Regulation (GDPR), which influenced data privacy regulation around the world, the EU's AI Act is expected to have a global impact. However, the history of EU technology legislation, including the GDPR, which has been criticised for being regulation-heavy and stifling innovation, urges caution.


For India, where the Ministry of Electronics and Information Technology has been working on a framework for responsible AI, the challenge would be to acknowledge and address the risks posed by the emerging technology, such as the proliferation of deep fakes, without hobbling its potential for improving lives or enhancing the promise of India’s start-up ecosystem.

In this regard, the ministry’s replacement last week of its March 1 advisory, which required generative AI companies to seek government permission for deploying “untested” systems, with a new one that drops this condition, is welcome. Going forward, the task for the government would be to safeguard citizens’ rights, while continuing to make room for the transformative possibilities of AI.
