
Opinion | A look back at AI in 2023: The dangers and the hope

There is a tendency to surrender the policy process around AI to a handful of tech companies who weaponise the very real anxieties about AI to distract from and evade concrete interventions

New Delhi | December 29, 2023 02:53 PM IST (first published on December 28, 2023 at 04:00 PM IST)

2023 was perceived by both industry leadership and the populace as a year in which artificial intelligence had a significant impact on social and economic relations. This was due, in particular, to the apparent success of large language models, a family of generative models, in solving complex tasks and, if the claims of organisations like OpenAI are to be believed, in making progress towards artificial general intelligence (AGI). Whether this perception is accurate is a debate for researchers; as of now, AGI remains speculation. However, the intensity with which capital reacted to the sentiment around AI in 2023 itself bears comment.

The year started with Microsoft deciding to invest $10 billion in the OpenAI project, on the heels of the virality of its ChatGPT, which, by February, had become the fastest-growing application. Not to be outdone, Google introduced its chatbot, Bard. The hype had an impact on hardware profits, with the world’s leading GPU manufacturer, NVIDIA, reaching a market cap of a trillion dollars. Amazon introduced Bedrock, giving its customers access to large language models, including its own family of models called Titan. Google also indicated it would use its generative models to improve its search engine, and Microsoft responded by trying to integrate generative models into Windows 11 navigation.


Around this point, both industry and state actors started to verbalise what AI academics, policy scholars, and social scientists had been insisting on for a decade, and what the industry had soundly ignored: namely, that there are real dangers in LLMs in particular, and in publicly deployed AI systems in general. However, what exactly these perils are is strongly contested, and there are hints that the panic, too, is managed and instrumentalised by the industry. One illustrative case was the so-called AI safety letter, which more than 2,900 industry experts and academics signed in March, calling for a six-month halt on training AI systems more powerful than GPT-4. Their fears were based on the fantastical notion that AGI is imminent and could prove to be an existential threat.

What the letter ignored was the political economy around AI: the data hunger of AI, which has implications both for the dilution of privacy and for the labour conditions of platform workers. It ignored AI’s stochastic and opaque workings, which affect democratic processes when AI systems are used in public functions like surveillance and policing (where stochasticity and arbitrariness should not be tolerated), as well as the propensity of AI systems to replicate and strengthen structural problems. Thus, the concrete harms of AI systems, which are becoming ubiquitous, were ignored in favour of an industry-centric worldview in which the “danger of AI” lay in a mystical future variant of the technology. The practical consequence of this AI panic was to inflate the importance of industry and reinforce the idea that AI is too complex to be regulated, or even understood, by governments, let alone the masses; so the industry leaders who have suddenly discovered merit in caution should be allowed to self-regulate. This phenomenon is called “doomwashing”, an analogue to the “ethicswashing” already plaguing AI policy.

In July, the US government announced that it had persuaded companies including OpenAI, Microsoft, Amazon, Anthropic, Google, and Meta to abide by “voluntary rules” to “ensure their products are safe”. These rules did not mention a word about the political-economic factors influenced by AI deployment, reducing the problem to one of safety testing; nor is self-regulation in any manner enforceable. By October, with no sign of the US Congress drafting a regulation on AI, the US administration signed an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, though the challenges discussed above are largely not covered by it.


In Europe, a regulation called the AI Act, first proposed in April 2021, was agreed in December. It remains the only such law in the world. Unlike the US executive order, the EU law has concrete red lines, like prohibiting arbitrary and real-time remote biometric identification in public spaces for law enforcement, of the sort that is becoming ubiquitous in India. It bans emotion detection in workplaces, a practice now recognised to be a harmful pseudoscience. It prohibits authorities from using AI systems to generate social scores/credits.

However, this law has gaps that have attracted criticism from policy scholars. For example, emotion detection remains outside the regulatory ambit as long as it’s not used in workplaces, which leaves scope for the use of this harmful and fraudulent tech elsewhere. The law doesn’t address virtual assistants and chatbots with the potential for damage (one common and harmful example is apps that use chatbots to give physical and mental health advice). Also, while regulations are being made, there is still a complete absence of industrial policy on AI anywhere, without which who owns AI, how it impacts labour and wages, and where the proceeds go cannot really be altered. Vague frameworks of “trust” and “responsible AI” fill this vacuum.

At the end of 2023, we observe not only significant challenges to AI policy, but also a dearth of democratic voices and a tendency to surrender the policy process around AI to a handful of tech companies who weaponise the very real anxieties about AI to distract from concrete interventions. My hope is that 2024 leads to a greater socialisation of AI policy and to the people taking over its imagination and control.

The writer is an assistant professor working on AI and policy at the Ashank Desai Centre for Policy Studies, IIT Bombay
