
Shreya Singhal case was one of the defining rulings of modern internet law

With Shreya Singhal judgment, India showed the world how to protect plurality and innovation online. Draft Intermediary Rules by the IT ministry move away from that achievement

Written by Daphne Keller | Updated: January 17, 2020 12:24:18 pm
(Illustration by C R Sasikumar)

In a landmark ruling earlier this month, India’s Supreme Court held that citizens’ rights to freedom of speech and to carry on business using the internet are constitutionally protected. The new decision builds in part on an equally important 2015 case, Shreya Singhal v. Union of India, in which the Court defined key rules for the relationship between democratic governments and commercial internet platforms. That case called on courts and government agencies — not companies like Google or Facebook — to decide what speech and information violates the law and must be removed from public view on the internet.

As I teach my students at Stanford, Shreya Singhal was one of the defining rulings of modern internet law. It had important consequences for lawyers reviewing takedown demands at platforms like Google, as I did in my previous role as associate general counsel to that company. But its greatest importance was for smaller businesses and ordinary internet users. Shreya Singhal clarified that competent public authorities, not private platforms, should sit in judgement when online speech is alleged to violate the law. In practice, this has allowed platforms to enforce their terms of service, while ensuring that only courts and government authorities decide what speech and information is prohibited by law. That model is under attack in India today, most importantly through the Intermediary Rules proposed by the Ministry of Electronics and Information Technology.

Shreya Singhal corrected a serious problem with platforms’ incentives to remove lawful content from the internet. Platforms routinely receive allegations from accusers claiming that users have violated the law and demanding removal of particular posts or accounts. Sometimes those claims are correct. But all too often, they are false, intended to manipulate platforms into silencing particular speakers. Some seek to suppress important online speech, like reporting on police brutality or scientific research. Others — a majority, according to one early study — come from businesses trying to harm their competitors. Research, including in India, has shown that platforms of all sizes often simply honour these invalid requests from accusers — improperly silencing legal speech, or cutting off customers’ access to legitimate businesses. The Court in Shreya Singhal corrected this lopsided incentive, saying that binding decisions about what content violates the law should come from courts or appropriate government agencies following fair processes — not from private platforms or accusers.

The IT Ministry’s proposed Rules depart dramatically from this principle. For one thing, they require platforms to act on government demands in just 24 hours, with no mechanism to correct mistakes or clarify confusing orders. As dozens of public interest groups including the Internet Freedom Foundation and Human Rights Watch have pointed out, that is a recipe for over-compliance and unnecessary removal of lawful expression. The Rules also, alarmingly, require platforms to build “automated tools” to proactively police and remove internet users’ speech.

The idea that platforms can adjudicate speech using automated software filters has gained traction in recent years, and not just in India. Lawmakers in Europe and elsewhere have faced similar proposals, fuelled in part by incumbent platforms’ optimistic (and often commercially self-interested) claims about the potential of technologies like Artificial Intelligence.

But software filters are no substitute for human judgement — much less for proper review by courts or government authorities. As independent technologists and researchers have warned, even the best filters can make serious mistakes, exacerbating the problems internet users and businesses already face from wrongful removals. In particular, filters can’t understand the context in which material appears. That means that even if all a filter does is find duplicates of content already deemed illegal — a terrorist recruitment video, for example — we can’t expect it to understand or protect speech that uses the same material in important new contexts, like news reporting or scholarship. Critics of European filtering proposals, including UN officials, human rights and journalistic organisations, and multiple civil society groups, raised precisely this concern there, leading the EU Parliament to eliminate filters from the latest draft of a proposed law on terrorist content. Another controversial European law did, in 2019, introduce a filtering mandate for copyright infringement. That law is now being challenged before the EU’s highest court as a violation of human rights, in a case posing the same profound questions about public and private power over online speech that India’s Supreme Court considered in Shreya Singhal.

Filtering mandates raise equally serious concerns about economic development and innovation. Startups and small platforms can’t afford to invest $100 million in filtering technologies, as YouTube says it has done. A law requiring filters might be economically tolerable to incumbents, but devastating to their smaller competitors — not to mention non-profit operators of forums for online speech, as Wikipedia pointed out in comments opposing the IT Ministry’s proposal. Applying new removal obligations only to large companies might ease some of these problems, but would distort incentives for mid-sized and growing platforms. And encouraging major platforms to adopt clumsy filters or rushed removal processes would only exacerbate threats to online speakers and small businesses that depend on those platforms.

The laws governing internet platforms can and should face careful public review. In many cases, change may be appropriate. But the IT Ministry’s mandate for rushed and automated content removal is a major step in the wrong direction. Its shift from reliance on courts and public authorities toward over-reaching removal by platforms will affect untold numbers of individuals and businesses online. That may well violate India’s Constitution: the Supreme Court declined to examine the constitutional parameters set by Shreya Singhal in a case last month involving a blogger’s alleged defamation of an asbestos company, but litigation on this question will continue. In the meantime, Indian policymakers can preserve the place carved out by Shreya Singhal for India as a global leader in carefully considered internet laws by opposing the draft Rules’ filtering mandate.

This article first appeared in the print edition on January 17, 2020 under the title ‘Filtering out free speech’. The writer is director of intermediary liability at Stanford’s Center for Internet and Society, and was formerly associate general counsel to Google.
