This is an archive article published on April 23, 2023

Legal challenge: ChatGPT’s explosive debut sends policymakers scurrying to regulate AI tools

With AI laws in the works in the US and the EU, India has no such plans yet


“Every eighteen months, the minimum IQ necessary to destroy the world drops by one point,” AI theorist Eliezer Yudkowsky, co-founder of the Berkeley-based Machine Intelligence Research Institute, once propounded in an apparent riff on Moore’s Law. While the degree of existential risk posed by AI, a topic of renewed debate since the explosive debut of OpenAI’s ChatGPT, may seem overblown for now, policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools. The concerns being flagged fall into three broad heads: privacy, system bias and violation of intellectual property rights.

The policy response has differed, too. The European Union has taken a predictably tougher stance, proposing a new AI Act that segregates artificial intelligence by use case, based broadly on the degree of invasiveness and risk. The UK sits at the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to foster, and not stifle, innovation in this nascent field. The US approach falls somewhere in between, with Washington now setting the stage for an AI regulation rulebook by kicking off public consultations earlier this month on how to regulate artificial intelligence tools. This builds on a move by the White House Office of Science and Technology Policy in October last year to unveil a Blueprint for an AI Bill of Rights. China, too, has released its own set of measures to regulate AI.

India has said that it is not considering any law to regulate the artificial intelligence sector, with Union IT minister Ashwini Vaishnaw admitting that though AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.


“The NITI Aayog has published a series of papers on the subject of Responsible AI for All. However, the government is not considering bringing a law or regulating the growth of artificial intelligence in the country,” he said in a written response to the Lok Sabha this Budget Session.

The American Approach

The US Department of Commerce, on April 11, took its most decisive step in addressing the regulatory uncertainty in this space when it asked the public to weigh in on how it could create rules and laws to ensure AI systems operate as advertised. The agency flagged the possibility of floating an auditing system to assess whether AI systems include harmful bias or distort communications to spread misinformation or disinformation.

According to Alan Davidson, an assistant secretary in the US Department of Commerce, new assessments and protocols may be needed to ensure AI systems work without negative consequences, much like financial audits confirm the accuracy of business statements. A catalyst for all of this policy action in the US seems to be an October 2022 move by the White House Office of Science and Technology Policy (OSTP), which published a Blueprint for an AI Bill of Rights that, among other things, shared a nonbinding roadmap for the responsible use of AI. The 76-page document spelt out five core principles to govern the effective development of AI systems, with particular attention to the unintended consequences of civil and human rights abuses. The broad tenets are:

Safe and effective systems: Protecting users from unsafe or ineffective systems


Algorithmic discrimination protections: Users not having to face discrimination by algorithms

Data privacy: Users are protected from abusive data practices via built-in protections and having agency over how their data is used

Notice and explanation: Users know that an automated system is being used and comprehend how and why it contributes to outcomes that impact them

Alternative options: Users can opt out and have access to a person who can quickly consider and remedy problems they encounter.


The blueprint explicitly states it has set out to “help guide the design, use, and deployment of automated systems to protect the American Public”, with the principles being non-regulatory and non-binding: a “Blueprint,” as advertised, and not yet an enforceable “Bill of Rights” with legislative protections.

The document includes multiple examples of AI use cases that the White House OSTP considers “problematic” and goes on to clarify that it should only apply to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services, generally excluding many industrial and/or operational applications of AI”. The blueprint expands on examples for using AI in lending, human resources, surveillance and other areas, which would also find a counterpart in the ‘high-risk’ use case framework of the proposed EU AI Act, according to a World Economic Forum synopsis of the document.

But analysts point to gaps. Nicol Turner Lee and Jack Malamud at Brookings said that while the intended and unintended consequential risks of AI have been widely known for some time, how the blueprint will facilitate redress of such grievances remains undetermined. “Further, questions remain on whether the non-binding document will prompt necessary congressional action to govern this unregulated space,” they said in a December paper titled Opportunities and blind spots in the White House’s blueprint for an AI Bill of Rights.

The debate over regulation has picked up pace in the wake of developments around the launch of ChatGPT, the chatbot from San Francisco-based OpenAI that is estimated to have lapped up over 100 million users. Google is moving ahead with its Bard chatbot, while Chinese companies have followed suit: Baidu has launched its Ernie Bot, and Alibaba has announced plans to release a bot for internal use.


Pause on AI development

Tech leaders including Elon Musk, Apple co-founder Steve Wozniak and over 15,000 others have reacted by calling for a six-month pause in AI development, saying labs are in an “out-of-control race” to develop systems that no one can fully control. They also said labs and independent experts should work together to implement a set of shared safety protocols. Yudkowsky, too, is among those who have called for a global moratorium on the development of AI. But that call has divided opinion further.

“The demand for a pause in work on models more advanced than GPT-4: This is regressive where we are policing a technology that might prove to be harmful to society. But the fact is that anything can prove to be harmful if left unattended and unregulated. Rather than calling for a pause, one should think about the monetisation, regulation, and careful use of LLMs and related technologies,” Anuj Kapoor, an Assistant Professor of Quantitative Marketing at IIM Ahmedabad, told The Indian Express.

While the US has seen a flurry of policy activity, there is less optimism about how much progress is likely in Washington on this issue: repeated calls for the US Congress to pass laws limiting the powers of Big Tech have made little headway amid political divisions among lawmakers.

The EU seems to be erring on the side of caution, given that Italy set the stage by emerging as the first major Western country to ban ChatGPT out of privacy concerns. The 27-member bloc has been a first-mover by initiating steps to regulate AI in 2018, and the EU AI Act, due in 2024, is, therefore, a keenly awaited document.


China has been developing its regulatory regime for the use of AI. Earlier this month, the country’s internet regulator, the Cyberspace Administration of China, put out a 20-point draft to regulate generative AI services, including mandates to ensure accuracy and privacy, prevent discrimination and guarantee intellectual property rights.

The draft, published for public feedback and likely to be enforced later this year, also requires AI providers to clearly label AI-generated content, establish a mechanism for handling user grievances and undergo a security assessment before going public. Content generated by AI must also “reflect the core values of socialism” and not contain any subversion of state power that could lead to an overthrow of the socialist system in China, according to the draft quoted by Forbes.

Incidentally, the Chinese regulations were published the same morning the US Commerce Department launched its request for comments on AI accountability measures.

Anil Sasi is National Business Editor with The Indian Express and writes on business and finance issues. He has worked with The Hindu Business Line and Business Standard and is an alumnus of Delhi University.
