This is an archive article published on September 25, 2023

‘Newsletter helped us dissect fake claims about AI in real-time’: Indian duo on TIME magazine’s list of most influential voices in AI

Sayash Kapoor and Arvind Narayanan speak to The Indian Express about why they decided to write their book on Artificial Intelligence (AI) in the form of a newsletter, the limits of ChatGPT in the realm of education, and more.

A newsletter to debunk fake claims about AI put this Indian duo -- Sayash Kapoor and Arvind Narayanan -- on TIME Magazine’s list of 100 most influential voices in the field. (Photo credit: Princeton University)

A year ago, Sayash Kapoor and Arvind Narayanan decided to put together their research on the limitations of AI in the form of a book. Except they also wanted to share the book with an audience as they wrote it, so they took to Substack and started a newsletter titled ‘AI Snake Oil.’ As the title suggests, the newsletter and the book were going to break down the hype and expectations surrounding AI, and separate what is really plausible from what is merely believed to be possible.

Kapoor was then just a year into his doctoral studies in the Computer Science Department at Princeton University. He graduated with a B. Tech. in Computer Science from IIT Kanpur in 2019, and worked at Facebook before moving to Princeton in 2021. Narayanan is Kapoor’s supervisor, and teaches Computer Science at Princeton. He graduated with a B. Tech. in Computer Science from IIT Madras in 2004.

Cut to a year later, and their newsletter was applauded by TIME, as the magazine included the duo in its list of the “100 Most Influential People in AI” in the world. The list, published earlier this month, comprises engineers, scientists and innovators from across the world, whom it describes as “a group driving the development of AI.” Kapoor, in a tweet after the inclusion, said he was honoured to be on the list, and that they hadn’t expected people to read the newsletter when they started.


Kapoor and Narayanan have authored a little more than 25 editions of their newsletter in the year gone by — with a new edition releasing twice a month.

Past editions have drawn attention to the absence of domain experts in AI prediction processes, the gender bias underlying the answers offered by ChatGPT, the pitfalls in the reportage on AI, the limits of generative AI, et cetera.

In one of their editions, along with other researchers, they also released ‘REFORMS’ (Reporting Standards for Machine Learning Based Science), a checklist to help check the veracity of results produced by machine learning models.

The duo recently spoke to The Indian Express, over an email exchange, about the reasons behind deciding to write a book as a newsletter, the themes their book will cover, the limits of ChatGPT in the realm of education, and more.

Excerpts from the interview:

How does the inclusion in the TIME list affect your endeavour? Has it boosted the reach of the newsletter?


Sayash Kapoor: The inclusion in the TIME 100 list was certainly an honour. Of course, any list is arbitrary — there are many people who were just as impactful as us but were left out — but to us, it showed that making scholarly contributions, such as through our research, and societal contributions, such as through our newsletter, need not be at odds with each other.

Your newsletter has culminated in a book. Could you tell me a bit more about the broad themes of the book, and the areas it focuses on? Did you have a particular audience or approach in mind while writing the book/newsletter?

Sayash Kapoor: Our book serves as a guide to understanding the actual potential and limitations of AI in today’s world. Through it, we endeavour to differentiate between the genuine advancements in AI and the hyperbolised narratives that are quite prevalent in the industry today. The two main types of AI we discuss are generative AI, such as ChatGPT, and predictive AI, which is used to make predictions about future social outcomes, such as which defendant will go on to commit a crime or who will pay back a loan. In our view, while generative AI presents a true advance, many predictive AI products are actually snake oil: false marketing about products that do not, and perhaps cannot, work. Our aim is to equip a broad spectrum of readers, including policymakers, journalists, and the general populace, with the tools to critically evaluate AI technologies and make informed decisions, therefore avoiding AI snake oil.

Do you see AI playing a greater role in the space of education, going forward? What are, or could be, the limits of ChatGPT in this regard?


Sayash Kapoor: Certainly, AI has the potential to play a pivotal role in the educational sector. But current AI systems suffer from several shortcomings which students and teachers need to be cautious of. Systems like ChatGPT tend to fabricate information. This has led to quite a lot of harm. For example, a lawyer used ChatGPT to come up with legal citations for a case. Later, it was found that all of these citations were made up, and he ended up getting penalized by the judge. Similarly, ChatGPT will often defame people — for example, by making up a false harassment case against a professor. When applied to education, such fabrications can be deal breakers in cases where factual accuracy is important. Still, many current tools can be useful in education if one is cognizant of these shortcomings.

There are other worrisome concerns about the use of AI in education. One is the impact on teachers, who often have to redesign their assignments from scratch because students can now use ChatGPT to complete older versions of assignments. As a result, many teachers are resorting to AI detection tools to check if a student used AI. But these tools are even worse: they have a high rate of false positives, they don’t work well on text written by non-native speakers, and so they end up giving teachers a false sense of security.

Arvind Narayanan: Chatbots are already playing a big role in education. When I teach, I encourage my students to take advantage of AI, while being mindful of its limitations. It is useful to experts as well, not just students. For me, the quickest way to gain a basic understanding of a new paper often involves discussing it with a chatbot.

Since current chatbots struggle with factual accuracy, the ability to read text critically will be even more important to students in the future.


Is there any particular reason behind choosing a newsletter as the format to share your findings and reflections on the subject?

Sayash Kapoor: We opted to write our newsletter because it allows us to engage with our audience in a more continuous and interactive manner. AI advances at breakneck speed, so compared to communicating our findings through academic papers, the newsletter has allowed us to dissect false or misleading claims about AI in real time. When we started out, we weren’t sure if this was the best decision, but looking back, it is clear that the newsletter has resonated with readers—we now have over 14,000 subscribers.

Arvind Narayanan: On social media, the spread of information isn’t necessarily based on the quality of the arguments. Content that stirs emotions or gets people to click and engage tends to do well. In comparison, when people subscribe to newsletters they tend to pay more attention to quality. So we felt that newsletters were a better format for serious writing compared to social media.

Raunaq covers Education for 'The Indian Express.' He's interested in long-form reportage, and stories that put people and the intricacies of their lives at the front and centre. He completed his undergraduate studies in Chemical Engineering from IIT Delhi in 2022, and pursued a year-long fellowship in liberal studies from Ashoka University thereafter. He's previously interned with The Quint, and written for Firstpost, Mint Lounge, The Hindu Sunday Magazine, and The Wire Science as a freelance journalist. The Indian Express marks his foray into full-time journalism.
