
Opinion Menaka Guruswamy writes: Why AI must be regulated

The balance between technological gains and the harmful effect of the technology is a policy debate that will challenge governance all over the world

Prime Minister Narendra Modi in a meeting with ChatGPT maker OpenAI CEO Sam Altman. (PTI)
June 10, 2023 09:22 AM IST First published on: Jun 10, 2023 at 07:30 AM IST

Sam Altman, CEO of OpenAI (the company that developed ChatGPT), was in India. While here, he met political and business leaders on a variety of issues, including the need to regulate AI. On May 16, Altman had testified before the US Senate, urging that a new agency be formed to license AI companies. He identified three specific areas of concern. First, that artificial intelligence (AI) could go wrong.

ChatGPT, for instance, often gives inaccurate or wrong answers to queries. Second, that AI will replace some jobs, leading to layoffs in certain fields, and that it “needs to be figured out how to mitigate that”. Finally, he testified that AI could be used to spread targeted misinformation, noting that the United States will hold its presidential election in 2024. I should add here that India also goes to the polls next year.


What is AI? As I’ve written in an earlier column, in 1956, John McCarthy explained, “artificial intelligence is allowing a machine to behave in such a way that it would be called intelligent if a human being behaved in such a way.” Siri, on which Apple users depend, is an example of artificial intelligence: human-like reasoning displayed by computer systems. Today, applications of AI include natural language processing, speech recognition, machine vision and expert systems. Examples include manufacturing robots, self-driving cars and marketing chatbots.

The areas of concern that Altman highlights are of significance to all countries. A few weeks after Altman testified before the US Senate, a statement caught my eye. It was signed by over 350 persons, including Altman; Mira Murati, the chief technology officer of OpenAI; Kevin Scott, Microsoft’s chief technology officer; top executives from Google AI; leaders from Skype and Quora; and even a former United Nations High Representative for Disarmament Affairs. The statement notes that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Earlier, as The Wall Street Journal reported, AI experts and tech honchos including Elon Musk had signed a letter in March 2023 “to temporarily stop training systems more powerful than GPT-4”, the technology released that month by a Microsoft-backed startup. That would include the next generation of OpenAI’s technology, GPT-5.


The need for regulation will be tempered by how profitable and efficient AI is. How profitable, you ask? Nvidia, a semiconductor company described by The Wall Street Journal as being “at the heart of the artificial intelligence revolution”, has recently become a trillion-dollar company. This puts it in the elite company of Apple, Microsoft, Amazon and Alphabet (the parent company of Google), all trillion-dollar companies, at a time when the tech world has by and large been shedding employees and has a gloomy outlook. Nvidia provides the chips and software for the computing-intensive demands of generative AI, of which ChatGPT is an example. As Techopedia.com explains, generative AI can produce a variety of content, including text, imagery, audio and synthetic data.

The use of AI has increased manifold over the last few years. It is now used in guiding weapons, driving cars, conducting medical procedures, and even writing legal memos (often inaccurately). In February, the US launched an initiative to promote international cooperation on the responsible use of AI and autonomous weapons by militaries. As The Independent reports, this was done in recognition of AI’s potential to change the way war is waged.

For instance, the war in Ukraine has accelerated the deployment of AI-powered drones that “will be used to identify, select and attack targets without help from humans”. Ukraine already has semi-autonomous attack drones, and its Digital Transformation Minister Mykhailo Fedorov has observed that fully autonomous killer drones “are a logical and inevitable next step” in weapons development.

Whether fully autonomous killer drones will be programmed to comply with the Geneva Conventions, which prohibit the targeting of civilians and non-combatants, is an important concern. At present, drones still require a human to choose targets over a live video feed while the AI completes the job. But that may soon change: the drone may pick its own target.

The balance between technological gains and the harmful effects of AI is a policy debate that will challenge governance all over the world. Generative AI is the real danger, since its content can be misleading. The fascinating Senate exchanges of May 16 between the tech industry and senators (available on YouTube) are educational. As one senator noted, what to define, and how to define the technology, is an important policy dilemma.

Senator Jon Ossoff from Georgia pressed Altman on this definitional conundrum. Altman helpfully suggested various regulatory thresholds based on “how much compute goes into a model”. A model that can “persuade, manipulate and influence a person’s beliefs” would be one threshold; a model that can “create novel biological agents” would be another. He suggested that each capability threshold ought to attract a different level of regulation, and that models of low capability should be kept open-use.

Regulation has implications for constitutional rights such as privacy, equality, liberty and livelihood. It raises that very significant constitutional debate between state intrusion and the privacy claims of the individual. After all, AI is only possible because our data is being harvested. In India, while the state has unilateral rights to collect and use our data, it has also given itself the power to regulate private parties. Private parties and individual citizens could use some protections and rights of their own. For now, the uses and dangers of AI will seize the imagination of policymakers. To make thoughtful and constitutionally tenable regulations, they must educate themselves on the technology.

The writer is a Senior Advocate at the Supreme Court
