
Opinion | YouTube lawsuit: Why the algorithm may not be the villain

Accountability for recommendation systems is laudable, but holding digital platforms liable for harm allegedly caused by their recommendations would lead to chaos

Rather than allowing all videos on YouTube and letting the ranking algorithm feed users videos they may be interested in, YouTube may simply start removing any videos that it thinks its users (as a whole) may not engage with. (Photo: Reuters)
First published on: February 23, 2023, 02:14 PM IST

Written by Aishwarya Giridhar and Vasudev Devadasan

On Tuesday, the US Supreme Court heard arguments on whether online platforms should be liable for harms allegedly caused by their algorithmic recommendations. In Gonzalez v Google, the petitioners are the family members of an individual killed in an ISIS attack in Paris. They argue that YouTube’s recommendation algorithm promoted ISIS propaganda, leading to the radicalisation of individuals and the violence in Paris. We argue that while platforms should be more accountable for their recommendation algorithms, holding YouTube liable by narrowing the legal protection that platforms are ordinarily entitled to is a step in the wrong direction.


The case centres on the statutory immunity that Section 230 of the Communications Decency Act provides to online platforms for user-generated content. This means that if a user posts hate speech on Twitter, the user is liable, not Twitter. Section 230 also permits platforms to remove content they believe violates their “Terms of Service”. Just as a restaurant owner may throw out an unruly customer, platforms can remove user content that they believe violates their rules. But unlike being thrown out of a restaurant, having your content removed from a platform restricts your freedom of expression. The drafters of Section 230 foresaw that users might try to sue platforms for removing their content or suspending their accounts, so the provision also protects platforms when they are sued for such removals. In India, Section 79 of the Information Technology Act, 2000, provides platforms with similar immunity: it protects them both from liability arising from user-generated content and for their decisions to remove this content.

The question Gonzalez asks is whether Section 230 also protects platforms from liability for harm caused by their (algorithmically determined) targeted recommendations. The petitioners argue that platforms have developed algorithms that drive users toward inflammatory and divisive content. For example, an internal Facebook study revealed that 64 per cent of individuals who joined extremist groups on the platform did so because of the platform’s own recommendation system. In the eyes of the petitioners, therefore, platforms should be liable for the harm resulting from such systems. Further, they argue that there is a difference between holding a platform liable for content (which Section 230 prohibits) and holding it liable for the systemic amplification of that content. Thus, a platform should not be liable for hosting a single terrorism video, but if it consistently feeds a user radicalising content for months or even years, it should be responsible.

On the other hand, platforms argue that ranking is essentially no different from removal; both determine what users see. Without ranking algorithms, users would face a chaotic barrage of content, which may increase the amount of terrible or unlawful material they encounter. Imagine using YouTube where, instead of recommendations, you merely saw the latest videos, many of which would be low quality or even unlawful. Without recommendation systems, a “neutral” platform would have to display such content to all its users. Recommendation systems thus allow platforms to de-prioritise potentially problematic content and assist them in their content moderation efforts.

Several rights groups have argued that if the Supreme Court finds platforms liable for harms caused by their recommender systems, platforms would be incentivised to remove large swathes of risky or low-engagement content, leading to the over-removal of lawful speech and a chilling effect on free expression. For example, rather than allowing all videos on YouTube and letting the ranking algorithm feed users videos they may be interested in, YouTube may simply start removing any videos that it thinks its users (as a whole) may not engage with. YouTube would then go from a universal library where everybody’s content has a home (with users still getting the niche content that interests them) to a homogenised platform hosting only the videos that YouTube believes have the widest possible appeal.


Moreover, recommendation systems are central to how platforms function. Virtually all platforms, from search engines to social media, use some form of sorting and recommendation system because of the sheer volume of information available online. This is both necessary and useful: it allows users to access relevant information and to connect with other users who share their interests and values. Holding platforms liable for harm caused by their recommender systems may push them to redesign their entire architecture to avoid lawsuits, likely in ways that are less useful to users.
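For readers unfamiliar with how such systems work, the toy sketch below (in Python, with entirely invented data and a hypothetical "predicted_interest" score) illustrates only the basic sorting step described above: given far more items than any user can view, a recommender assigns each item a score and surfaces the highest-scoring ones instead of showing everything in arrival order. This is not any platform's actual system, which would be vastly more complex.

from typing import Dict, List

def rank_videos(videos: List[Dict], top_n: int = 3) -> List[Dict]:
    # Sort by the hypothetical predicted_interest score, highest first,
    # and keep only the top_n results, i.e. the items a user would actually see.
    return sorted(videos, key=lambda v: v["predicted_interest"], reverse=True)[:top_n]

# Invented catalogue for illustration only.
catalogue = [
    {"title": "Breaking news clip", "predicted_interest": 0.40},
    {"title": "Cooking tutorial", "predicted_interest": 0.85},
    {"title": "Low-quality spam", "predicted_interest": 0.05},
    {"title": "Chess analysis", "predicted_interest": 0.70},
]

# Without ranking, users would see the catalogue in arrival order; with it,
# the platform surfaces what it predicts each user is most likely to want.
for video in rank_videos(catalogue):
    print(video["title"])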

The central questions underpinning Gonzalez are to what extent platforms are responsible for their recommender systems, and whether narrowing intermediary liability protection is the best way to improve the accountability of these ranking algorithms. While algorithmic accountability for recommender systems is a laudable goal, holding platforms liable for harms allegedly caused by recommendations would lead to chaotic results. There exists no judicial standard for when a recommender system can be proven to have caused harm, and it may take courts years to work out just how bad a recommender system must be before a platform is liable.

A more nuanced approach could be to have platforms comply with certain clearly defined statutory obligations regarding their recommender systems that would mitigate harm. Measures could include increased transparency on who is seeing what, impact assessments (where platforms proactively identify and institute plans to mitigate risks before deploying new algorithms), audits (where internal or external stakeholders assess algorithms to test how they function), and allowing researchers access to platform data (to identify and flag risks and gaps in platform functioning). As India reimagines its internet laws, domestic regulators should consider instituting such measures to preserve the benefits of recommender systems while also increasing the transparency and accountability of platforms.

Giridhar is a Project Manager and Devadasan is a Project Officer at the Centre for Communication Governance
