
How Anthropic CEO Amodei’s warning about ‘AI destroying humanity in the future’ is a distraction

Whether or not AI models ever become super-intelligent, the risks they pose are already here, and not in the future, as Amodei and other CEOs have suggested

Anthropic CEO Dario Amodei at an event in 2023. (Photo: Wikimedia Commons)

Anthropic CEO Dario Amodei, who leads the company behind the large language model (LLM) series Claude, has warned that powerful artificial intelligence (AI) systems could pose catastrophic risks to humanity in the next few years.

In a nearly 20,000-word essay published on Monday (January 26), Amodei wrote that if the technology is allowed to evolve without intervention, it could lead to large-scale job losses and bioterrorism, and empower authoritarian regimes, among other consequences.

He has described “powerful AI systems” as systems which would be “much more capable than any Nobel Prize winner, statesman or technologist”, and has predicted their advent in the “next few years”.

This is not the first time that Amodei has issued such a warning about AI. In May 2025, for instance, the CEO had said that the technology could wipe out half of all entry-level white-collar jobs within the next five years. In fact, over the years, several AI company CEOs, including Elon Musk, have issued such warnings and have urged countries to enact regulations.

Experts, however, have raised questions about not only the timeline for successfully building powerful AI systems but also whether it is even possible to do so with current technology. They have argued that the technology is nowhere near becoming “super-intelligent”, and claims about AI destroying humanity are just a distraction from its ongoing misuses.

Long way to ‘powerful AI systems’

For years, companies have been improving their AI tools, such as Claude and ChatGPT, by relying on scaling laws: these systems performed increasingly better as more data and more graphics processing units (GPUs) were used to develop them. That is why companies poured billions of dollars and vast computing power into large language models, the face of modern-day AI. They also believed that this approach would one day help them build "super-intelligent" AI systems.

However, the progress has seemingly slowed down. This realisation sank in especially after the launch of GPT-5 last year. While OpenAI CEO Sam Altman called it a significant step towards artificial general intelligence (AGI) — a type of hypothetical AI which would match or surpass human capabilities — users were left disappointed. The model was supposed to overcome the limitations of its predecessors — such as sluggish responses and hallucinations — but could not.


Note that other AI systems like Claude, Gemini, Grok, etc., are plagued by the same problems, and so far have failed to tackle them.

Gary Marcus, an AI entrepreneur and an emeritus professor of psychology and neural science at New York University (the United States), wrote in his newsletter, Marcus on AI: "So far as I know even the latest language models are still bulls in a china shop, powerful but hard to control; they still can't reason reliably; they still don't work reliably with external tools; they continue to hallucinate; they still can't match domain specific models; they continue to struggle with alignment."

Marcus is among the researchers who have argued that these issues are fundamental to LLMs, and that using more GPUs or more data will not enable companies to build "super-intelligent" AI models. While some of these researchers believe AGI is achievable, they say a different approach is needed to get there.

Ilya Sutskever, co-founder of OpenAI, told Reuters in November 2024, “The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again… Everyone is looking for the next thing.”


Focus on misuses, not ‘super-intelligence’

Whether or not AI models ever become super-intelligent, the risks they pose are already here, and not in the future, as Amodei and other CEOs have suggested. There is no dearth of studies which have found that the technology is already reinforcing and exacerbating issues such as bias, discrimination, and misinformation.

For instance, AI models are being widely used to create deepfakes, which have emerged as a significant tool for misinformation and digital impersonation.

Also, the use of AI and algorithmic decision-making systems has led to racial biases in several sectors. In 2019, a study revealed that a clinical algorithm many hospitals were using to decide which patients need care was showing racial bias. “Black patients had to be deemed much sicker than white patients to be recommended for the same care. This happened because the algorithm had been trained on past data on health care spending, which reflects a history in which Black patients had less to spend on their health care compared to white patients, due to longstanding wealth and income disparities,” according to a report by the American Civil Liberties Union (ACLU).


Then there are concerns about the environmental impact of AI. GPUs run in large data centres, which consume enormous amounts of energy to function. Studies have shown that a simple AI query, like the ones posed to OpenAI's chatbot ChatGPT, could use between 10 and 33 times more energy than a regular Google search.

However, the foremost concern has been the use of AI to infringe on human rights. For example, since at least 2013, AI systems of the US data-mining firm Palantir Technologies have been part of an ecosystem created by Israel to surveil Palestinians in Gaza and the West Bank. These programs were trained on data in the form of intelligence reports on Palestinians in the occupied territories.

After Israel began its onslaught on Gaza in response to Hamas’ October 7 attacks, the government used Palantir’s AI systems, such as “Lavender”, “Gospel”, and “Where’s Daddy” to identify “targets” for airstrikes based on Israeli mass surveillance records of Palestinians in Gaza.

A 2024 investigation by the Israel-based +972 Magazine found that systems like Lavender would assign residents of Gaza a numerical score indicating their suspected likelihood of being a member of an armed group. However, the criteria for identifying someone as, say, a Hamas operative were quite broad: "being a young male, living in specific areas of Gaza, or exhibiting particular communication behaviour was enough to justify arrest and targeting with weapons," the investigation revealed.


Palantir’s products are also being used by the administration of US President Donald Trump to combine data gleaned from the Department of Homeland Security, the Department of Defense, the Department of Health and Human Services, the Social Security Administration, and the Internal Revenue Service. Experts have claimed that this has helped Trump spy on his critics, and find and detain immigrants.

Such risks posed by AI have pushed numerous researchers around the world to demand intervention against its ongoing misuses, not imagined future catastrophes.

In 2023, researchers based at the University of Oxford (the United Kingdom) wrote, “AI poses real risks to society. Focusing on long-term imagined risks does a disservice to the people and the planet being impacted by this technology today. It is important to recognise when sci-fi is dressed up as science, and to instead focus our attention on problems of today.”
