Depending on AI alone to make decisions affecting people’s lives is worrisome: Sayash Kapoor, co-author, AI Snake Oil

Sayash Kapoor, co-author of the book AI Snake Oil, talks about the hype around AI, the problems it can solve, and how it can be used for social good.

Sayash Kapoor is the co-author of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, a book that alerts readers to the dangers of false claims companies make to sell their AI products.

Sayash is a computer science PhD candidate at Princeton University’s Center for Information Technology Policy and a senior fellow at Mozilla. He completed his BTech in computer science at IIT Kanpur.

His research focuses on the societal impact of AI, the harms of predictive optimisation in decision-making, transparency around AI’s impact, and evidence-based AI policymaking.

Sayash spoke to indianexpress.com about AI hype, how governments view AI, and the risks of using predictive AI. Edited excerpts:

Venkatesh Kannaiah: How has AI changed since the publication of your book?

Sayash Kapoor: When we wrote the book, we tried to ground it in the foundational knowledge necessary to understand AI, so not much of an update is needed now. We discuss applications of AI going back 80 years and the processes that have led up to this point, looking at these themes from a longer perspective.

Venkatesh Kannaiah: You say that AI does not work as advertised and there is a lot of hype. What should we be worried about?

Sayash Kapoor: One should be worried about the applications of AI that are used to make consequential decisions about people.

For example, these are applications that might decide whether you get a loan; whether, if you are arrested by the police, you should be released on bail; or whether, if you enter a hospital, you should be prioritised for care. In the last few years, dozens of these algorithms have been deployed across the education, healthcare, and criminal justice sectors. This is what we call predictive AI in the book. Such tools are being used to make very consequential decisions about people, and this worries us.

Unfortunately, predictive AI often does not work as advertised. So, the claims do not match up to how they work in the real world. And this facet might be swept under the carpet with the present-day excitement about generative AI.

It is not to say that generative AI has not made advances. But predictive AI is a different story. Predictive AI relies on techniques that have been in use for the last 50-60 years. And because AI is an umbrella term, it is used to refer to all of these different technologies. It is often hard for people to figure out whether the AI tools being sold, or the AI tools making decisions about them, actually work well.

Sometimes, these predictive systems work well when they are being tested, but not when they are deployed. And as they are deployed more widely, it becomes harder to track how well they work. These systems have shown hard limits on predictive accuracy; in some cases, it is no higher than 60 or 70%.

Venkatesh Kannaiah: What are the problems that AI can solve?

Sayash Kapoor: Identifying things in images is a problem that is very close to being solved with AI. Similarly, in the health sector, using AI models to solve problems like protein folding is a significant achievement; last year, the Nobel Prize in Chemistry went to the team that did this. AI works well on problems that were earlier limited by a lack of computational power: as computing power increases, you can get the right answer, whether in protein folding, image detection, or writing code.

And then there are some tasks where we don’t care about the correctness of the answers at all. This could be in creative writing or generating images, where it is more about creativity than finding the correct answer.
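
As a concrete illustration of the image-identification task Kapoor describes as close to solved, here is a minimal sketch using a freely available pretrained classifier. It is purely illustrative, not a system discussed in the interview; it assumes the torchvision package is installed, and the image file name is hypothetical.

```python
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

# Load a model pretrained on ImageNet; the weights object also carries
# the matching preprocessing pipeline and the class labels.
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

img = Image.open("example.jpg")  # hypothetical input image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the three most likely labels with their probabilities.
top = probs.topk(3)
labels = weights.meta["categories"]
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{labels[idx]}: {p:.2f}")
```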

Venkatesh Kannaiah: As more computing power becomes available, don’t you think these inherent limitations go away?

Sayash Kapoor: Computing is one limitation, but data is the key. Often, when we make predictions about individuals, we do not have sufficient data. Let us say three similar people are being tried for a crime, and perhaps only one of them regrets committing it. We have no data on their internal state; our data is all about external factors. We cannot make predictions about future behaviour without data on their mental state, yet these algorithms take it upon themselves to predict future behaviour without much data.

In the US legal system, a large number of people in many courtrooms are profiled using predictive AI tools. The judge is presented with a prediction, a kind of rating of how risky it would be to release the person on bail, and in many cases judges take these predictions into account. We can also focus on bias, but the main issue here is that predictive AI tools have very low accuracy.
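
To make the mechanics concrete, here is a minimal sketch of the kind of risk-scoring model described above. It is not any vendor’s actual tool: the features and data are synthetic, and the point is simply that when outcomes depend on factors the data never captures, accuracy plateaus around the 60-70% range Kapoor mentions.

```python
# Synthetic sketch of a predictive "risk rating". Outcomes depend
# partly on an unobserved factor, mimicking the missing "internal
# state" Kapoor describes, so accuracy is capped even with ample data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical external features (e.g., age, prior record, employment).
X = rng.normal(size=(n, 3))
# The unobserved component dominates, capping achievable accuracy.
unobserved = rng.normal(size=n)
y = ((X @ np.array([0.5, 0.8, -0.3]) + 1.5 * unobserved) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The "risk rating" a judge might be shown: a predicted probability.
risk_scores = model.predict_proba(X_test)[:, 1]
print("sample risk scores:", risk_scores[:5].round(2))
print("accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 2))
# Accuracy here typically lands near the 60-70% range mentioned above.
```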

Venkatesh Kannaiah: How much of the AI hype has been internalised in India?

Sayash Kapoor: There is no cross-country study of AI hype, but I do think people in India are fairly grounded when it comes to understanding AI systems, or at least playing around with them. That is because, unlike predictive AI, which is mostly deployed by institutions, generative AI can be used by everyone.

Venkatesh Kannaiah: How do you think governments are viewing AI? Do they feel a threat, or are they positive about it?

Sayash Kapoor: Back in 2023 and 2024, there was a lot of conversation around the potential existential risk of AI, the idea that AI will go rogue and kill us all. That captured the imagination, aided in many cases by AI companies, some of them at least, going to lawmakers and lobbying for restrictive regulations on AI.

The existential risk angle has receded from the policy agenda because, in 2023, policymakers were concerned that this would happen in the next six to twelve months. When these risks did not materialise, the focus shifted to the impact of AI on jobs, on deepfakes, and on the information environment. As for the impact on jobs, I think it is a longer-term threat, a decade or so from now.

In the US, the issue is being framed as a race between the US and China to train the most capable AI models. This is unproductive, since the impact of AI will be realised not by building better models but by adopting these models across industries like health, education, and finance.

Outside the US, there has been a focus on sovereign AI. Different countries want to create their own AI models and a supply chain of AI systems they control, rather than relying on foreign or US-based companies for access to state-of-the-art AI.

Venkatesh Kannaiah: Can you share specific examples where AI could be used for social good?

Sayash Kapoor: In healthcare, we have seen protein folding models change the nature of fields in medicine like structural biology, reducing the time for new drug discovery.

In education, a lot of pilot studies have been conducted, and we have some evidence that, if used well, generative AI can spur learning and improve learning outcomes. We found that if you give students access to a chatbot, they do much better. But when access is taken away, they regress, in some cases doing worse than students who never had access to a chatbot at all.

The task for policymakers and technologists is to build AI applications that allow tutors and students to do better, without building an over-dependence on AI-led learning.

In the pharma industry, one should be able to shorten the timelines for producing new drugs. In terms of public health, we are yet to see strong evidence of what happens when people rely on, say, ChatGPT for answers to medical questions. There are studies showing that doctors benefit quite a lot from using AI in diagnosis, and if those benefits also transfer broadly to the public, that would be a positive outcome for public health. But I am not aware of any studies that have tried to test this question.

Venkatesh Kannaiah: What policy changes would you like the Indian government to make with regard to AI?

Sayash Kapoor: The need is to focus more on the diffusion of the technology, the process of gradually deploying AI systems across different sectors of the economy.

We need to get AI systems into the hands of professionals and give them the knowledge as well as the tools to use these models well. That is a much harder problem than building stronger AI models, because diffusion has many bottlenecks: you need workforce training so people understand how to use AI for their work, and you need investments in cloud infrastructure or local models.

There are areas in governance where generative AI could be productive, but that requires a change in the way governments work. This is something private companies cannot really do and have no incentive to do. Governments need to train their employees to use generative AI, figure out the right places to apply it, and make it more relevant and productive.

On the issue of jobs, if you look at past general-purpose technologies like the internet, their impact, though massive, was concentrated in a couple of industries. And then in most of the others, it led to the augmentation rather than the replacement of jobs. We need to figure out how to train the workforce over the next 10-20 years and prepare them for the structural changes.

The worst impact we have seen from AI so far is in the form of deepfakes: a proliferation of nonconsensual images of people that have been shared on social media. Countering deepfakes is both a policy and enforcement problem. It needs to be treated as a priority to set the norms.

It is also a technical problem, where I think there are emerging standards that allow us to tell whether a photo was created by a human or generated by AI.

We need to bring all device manufacturers on board to tackle deepfakes by adopting content provenance tools, so that any piece of media can be digitally signed and its origin verified.
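
As a rough illustration of the content provenance idea, in the spirit of emerging standards such as C2PA though not their actual protocol, here is a minimal sketch of a device signing media at capture time so anyone can later verify it has not been altered. It assumes the Python `cryptography` package; the photo bytes are a stand-in for a real file.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The device manufacturer provisions a signing key inside the camera.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

photo_bytes = b"...raw image bytes..."  # stand-in for a real capture
signature = device_key.sign(photo_bytes)  # shipped with the image as metadata

# Anyone holding the device's public key can check that the media is
# exactly what the device signed; any edit breaks the signature.
try:
    public_key.verify(signature, photo_bytes)
    print("verified: media matches the device signature")
except InvalidSignature:
    print("verification failed: media was altered or not signed by this device")
```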
