Facebook instructs its content moderators to flag posts in India that degrade a religion, because such speech violates local Indian law, according to leaked documents accessed by The New York Times.
These moderation instructions contradict both the company’s stated global policy and its executives’ articulation of that policy in meetings with the press. At a meeting with a handful of reporters at Facebook’s Delhi office in October, Facebook’s Vice-President for Global Policy Solutions Richard Allan said the company does not consider speech attacking a religion or belief to be hate speech; rather, it defines hate speech as speech attacking a group of people.
“Attacking a concept is not hate speech,” Allan said at the time. “But you can’t say you hate a group of people… This is one of the areas we debate. Some people find it controversial.”
The Times report, based on more than 1,400 leaked documents outlining the company’s content moderation guidelines, includes a PowerPoint slide from Facebook’s rules for India and Pakistan. A diagram on the slide lays out four categories for moderators to keep in mind: ‘Locally illegal content’, ‘Respecting local laws when the government actively pursues their enforcement’, ‘Facebook risks getting blocked in a country, or it’s a legal risk’, and ‘Content doesn’t violate Facebook policy’.
Apart from the October interaction where Facebook defined what it considers hate speech, its executives had also described this distinction at a closed-door roundtable with members of the public: the difference, for example, between attacking Islam, which the company allows, and attacking Muslims, which it treats as hate speech and takes down.
Facebook’s Community Standards state: “We define hate speech as a direct attack on people based on what we call protected characteristics.” “Religious affiliation” is among the protected characteristics Facebook lists.
Facebook representatives in India did not respond to queries specific to these India-related findings, but pointed instead to a Facebook blog post responding to the Times report. That post contained no India-specific information.
The Times report describes another slide stating that Indian law prohibits calls for an independent Kashmir. “The slide instructs moderators to ‘look out for’ the phrase ‘Free Kashmir’ — though the slogan, common among activists, is completely legal,” the Times report reads.
Besides this moderation system, which follows Facebook’s internal policy, the company also receives requests from law enforcement agencies and governments to take down unlawful content. In the India section of its public transparency reports on legal takedown requests, the company lists “anti-religious” content or “defamation of religion” as the majority of content taken down in every quarterly update of the report since 2013.
These Facebook moderation documents arrive on the heels of recently publicised draft amendments to the Information Technology Act, one of the main laws governing online content in India, which would place more liability on companies to proactively take down unlawful content on their platforms.
In the case of unlawful content attacking religion, a Law Commission report on Hate Speech from March 2017 states: “Hate speech has not been defined in any law in India. However, legal provisions in certain legislations prohibit select forms of speech as an exception to freedom of speech.”
Included in those provisions are Indian Penal Code sections: Section 153A penalises “promotion of enmity between different groups based on religion;” Section 295A of the IPC penalises “deliberate and malicious acts intended to outrage religious feelings of any class by insulting its religious beliefs;” and Section 298 penalises “uttering words, etc, with deliberate intent to wound the religious feelings of any person”.
The Constitution also allows limits to freedom of speech and expression if these are “in the interests of… public order, decency or morality”.
Facebook’s transparency reports show it has been removing an increasing amount of content it considers hate speech globally, from 1.6 million posts in the last quarter of 2017 to 3 million in the third quarter of 2018. Moreover, a growing share of that content is flagged by the company before it is reported by a user: in the third quarter of 2018, more than 50 per cent of hate speech takedowns worldwide resulted from “proactive detection”, meaning the posts were flagged by the platform itself.
Along with automated detection and user reports, Facebook moderates content through a network of 15,000 content moderators worldwide, who flag posts for removal according to the company’s instructions.
A Facebook report states, “The amount of content we flagged increased from around 24% in Q4 2017 because we improved our detection technology and processes to find and flag more content before users reported it.”
In the closed-door meeting, Allan had said that Facebook’s standards were developed over time through global conversations and adherence to international human rights standards.
India is Facebook’s largest market with roughly 300 million users.