
Opinion: AI isn’t about to save us or kill us all. We must rethink the hype around it

There can be no doubt that the technology is proving to be extremely useful in diverse areas. But when people are led to believe the possibilities are limitless, they tend to ignore harms and abuses as side-effects or short-term sacrifices

AI has already transformed industries and will continue to do so, especially in domains where pattern recognition and data synthesis are of utmost use. But Apple’s critique exposes a fundamental flaw in the current trajectory.

Adithya Reddy

May 3, 2025, 12:26 PM IST (first published at 12:24 PM IST)

In November 2024, The Washington Post carried a year-end article headlined, ‘This year, be thankful for AI in medicine’. The gratitude was meant for, among other things, the remarkable accuracy with which chatbots are reportedly able to diagnose health conditions compared to doctors — even doctors assisted by the same technology. Infosys co-founder Nandan Nilekani recently pointed out, in a talk on AI hype, that we see hundreds of people injured daily in accidents caused by human-driven cars, but when an autonomous car causes even a minor accident, the manufacturer has to go back to the drawing board for two years to redo the technology. This, according to Nilekani, is because of the higher expectations we have of technology. The correct way of looking at it is that we know the technology does not know what it is doing, whether the outcome is right or wrong, and so it can take neither blame nor credit. So, a victim of an accident caused by an autonomous car has no remedy against anyone if the technology itself is legally approved.

Similarly, if medical diagnoses by chatbots have 90 per cent accuracy as opposed to 74 per cent for doctors, it means the chatbot has been fed with enough data, trained well and equipped with the best hardware architecture. It also means chatbots could be extremely useful to doctors in treating people. To go a step further, as Bill Gates has done, and include medicine in the list of professions that AI will replace in future would require AI to be able to reason like humans, or better still, to become conscious. Only then could even 100 per cent accuracy in medical diagnosis, the most brilliant legal analysis of a case or good performance in any field that requires dynamic and contextual input be considered trustworthy.


Some reputed scholars back claims that AI will become conscious or truly intelligent soon, but there are reasons to believe they are feeding into a baseless frenzy. From a technological point of view, it is sufficient to know that the recent popularity of AI owes its origin to models called Large Language Models (LLMs), which use neural networks, techniques inspired by biological neural networks. Well-known AI applications like ChatGPT, Gemini and Llama are all built on such models. Yet, one of the “godfathers” of AI, Yann LeCun, has recently gone on record to claim that LLMs will become obsolete in a few years. His advice to young techies is to work on “next-gen AI systems that lift the limitations of LLMs”. The hope that some future AI model will emerge to remove the limitations of LLMs (rid them of hallucinations, make them genuinely understand natural language and make them conscious) cannot justify the hype today.

LeCun’s views align with those of longtime AI critics like cognitive scientist Gary Marcus, who has always held that problems like hallucinations and bias are inherent to such AI models and that the industry is not being honest about their true potential. As Arvind Narayanan and Sayash Kapoor of Princeton’s computer science department point out in their book, AI Snake Oil, “Even if AI developers were to somehow accomplish the exceedingly implausible task of filtering the training dataset to only contain true statements, it wouldn’t matter. The model cannot memorise all those facts, it can only learn the patterns and remix them when generating text. So, many of the statements it generated would in fact be false.” They add that “companies rarely share crucial information about leading language models”, information that could help researchers identify problems and warn users about when not to use them.

The problem with the hype is not that AI is attracting attention. After all, there can be no doubt that the technology is proving to be extremely useful in diverse areas. There is also room for improvement, which requires investment and research. But when people are led to believe that the possibilities are limitless, they tend to ignore harms and abuses as side-effects or short-term sacrifices. In his book Taming Silicon Valley, Marcus mentions three tricks used by companies to feed the hype on AI: claiming repeatedly, without any tangible evidence, that LLMs or today’s AI models will lead to “Artificial General Intelligence”; creating a scarecrow out of China; and pretending that we are close to an AI that is going to kill us all (or save us all). This, according to Marcus, has got major governments of the world to take the narrative seriously and has made “AI sound smarter than it is, driving up stock prices”. The hype is distracting us from what Marcus calls “hard-to-address… risks that are more imminent (or already happening)… such as the damage to democracy from… misinformation, cybercrime etc”.

In the name of “scaling” or improving AI accuracy, companies are using bigger and bigger datasets without having to disclose how much copyrighted content they contain. More importantly, training LLMs requires humans to scan these datasets for toxic and harmful content. As Narayanan and Kapoor point out, “labelling or annotating such content can be brutal”. Most of this work is done in developing countries, where labour is cheaper and less regulated. The hype is taking attention away from this human cost as well.

To sum it up, if the goals of AI are only to increase efficiency in repetitive tasks or to operate, in the words of the AI critic and philosopher Hubert Dreyfus, “in isolated domains that do not connect with the rest of human life”, then people will be able to judge whether the resources that go into AI development are disproportionate. If, on the other hand, the goal is to replicate human intelligence, governments will have to make people literate about how this is going to be achieved. This is not just because public funds are being provided for AI initiatives but also because caution should not be thrown to the wind. There could also be more pressing research needs, in areas like the life sciences, from which private resources are being diverted.


It is worth recalling Dreyfus’s analogy about the AI situation way back in 1965: “Alchemists were so successful in distilling quicksilver from what seemed to be dirt, that after several hundred years of fruitless effort to convert lead into gold, they still refused to believe that on the chemical level one cannot transmute metals. To avoid the fate of alchemists, it is time we ask where we stand.” Sixty years later, I am afraid, we still have to ask the same question.

The writer is a lawyer practising in the Madras High Court
