July 21, 2021 8:21:36 pm
Written by Prajjwal Kushwaha and Kanishk Garg
The Supreme Court’s refusal to quash the summons issued to Facebook MD Ajit Mohan to appear before the “Peace and Harmony Committee” of the Delhi Assembly has reignited the debate on whether the government can effectively regulate Big Tech platforms. Even though the SC compelled Mohan to appear before the committee, the court directed it not to pose any questions relating to “law and order” or the “police”. Further, any obligation on Facebook to remove hate speech on its platform flows from the Information Technology Act, 2000, which is also outside the purview of the Assembly Committee.
Emphasising Facebook’s responsibility to recognise its role in influencing democracy and political discourse, the court cited allegations of platforms like Facebook facilitating Russian interference in the 2016 US elections. Facebook’s acknowledgement of its faults in not curbing the proliferation of hate speech on its platforms, which helped fuel ethnic violence in Myanmar and Sri Lanka, also shed light on the scale of this problem.
Platforms like Facebook restrict hate speech as part of their community guidelines and take down content that qualifies as a direct attack on people based on protected characteristics like race, religion or sex. A user can flag any post violating the community guidelines. A team of language-based content moderators then goes through flagged posts to distinguish hate speech from merely offensive speech protected under free speech laws. Facebook’s pro-free speech policies focus more on ensuring it does not take down legitimate speech. As the line between offensive speech and hate speech is often murky, some removable content slips past the system.
The sheer volume of content published on such platforms also requires them to deploy automated systems that proactively detect and take down hate speech. Although Facebook boasts that its automated hate-speech detection system has lately been 97 per cent effective, it struggles in countries where people speak many languages. To extend its approach to a local language, the platform engages content moderators who create training data sets for the algorithms, which learn to pick out the kind of posts that should be targeted for removal. Zeroing in on posts is tricky when they are long, complicated or laced with colloquial terms. A lack of trained content moderators for a language means an undertrained detection algorithm and a backlog of flagged posts.
In September 2020, the Shashi Tharoor-led Parliamentary Committee on Information Technology questioned Mohan and Facebook on safeguarding citizens and preventing misuse of social/news media platforms. The committee is empowered to investigate and report on issues falling under the ministries concerned. However, the investigation into this issue is being conducted on partisan lines. BJP MPs alleged that Facebook unfairly targeted posts made by party members for takedowns, whereas the Congress alleged discriminatory policies favouring the ruling party. Individual members are using the committee meetings to score political points, and this politicisation hampers attempts to study Facebook’s hate speech moderation.
The “Peace and Harmony” Committee of the Delhi Legislative Assembly sought to study Facebook’s role in alleged “intentional omission and deliberate inaction” in applying hate speech rules, leading to disruption of peace and harmony in Delhi. The chairman of the committee, Raghav Chadha, sought to treat Facebook as a co-accused in the 2020 Delhi riots. Facebook and Mohan moved swiftly to quash the summons. The Supreme Court ruled that the contours of “peace and harmony” were far wider than those of “law and order”, “police” or “information technology”. The committee would therefore have the authority to pose any questions to Mohan and Facebook as long as it stayed clear of these three subjects. If any question relates to any of these three areas, Mohan can decline to answer.
Any enquiry into Facebook’s role in the incidents of February 2020 will involve asking about its hate speech detection algorithms and content moderation team, and whether it has done enough against hate speech to be able to claim safe harbour from intermediary liability. The Delhi Assembly Committee does not have the legislative competence to get into these technical questions. Any questions it poses will now be on broad policy related to communal harmony, not on specific aspects of Facebook’s moderation practices. Yet answers to precisely those technical questions are necessary to develop a deeper understanding of the platform’s inadequacies.
For any meaningful government attempt at regulating Facebook, a rigorous expert report which looks into social media platforms’ role in letting hate speech fester is necessary. It will be interesting to see if the central government can force Facebook and other industry leaders to urgently upgrade their hate speech moderation infrastructure to make their platforms safer and more ethical.
The writers are final-year students of law at National University of Juridical Sciences, Kolkata