Communal violence and “vigilante violence against students” in Delhi, violence surrounding citizenship law protests in Lucknow, violence targeting North Indian migrants in Mumbai and “antagonisms” related to religion and Bangladeshi migrants in Kolkata were among the “salient conflicts” flagged by Facebook researchers from posts and site visits focussed on key Indian cities, as reported in an internal research memo dated July 14, 2020.
The city-wise “conflicts” grid also noted that participants surveyed during the field site visits reported “demonizing anti-Muslim content connected to CAA protests and Delhi riots”. Separately, the document also lists the protests against the Citizenship Amendment Act (CAA) and the Delhi riots as “violent crisis events in India” that create an “environment of risk for offline harms”.
Approximately one in three surveyed users of the Facebook family of apps (Facebook, Messenger, WhatsApp and Instagram) “reported seeing inflammatory content within seven days or in more than seven days” – a figure that excluded “user experiences with inflammatory content on WhatsApp” – according to the memo, titled “Communal Conflict in India”.
These internal reports are part of documents disclosed to the United States Securities and Exchange Commission (SEC) and provided to Congress in redacted form by the legal counsel of former Facebook employee and whistle-blower Frances Haugen. The redacted versions received by Congress have been reviewed by a consortium of global news organisations, including The Indian Express.
“Police violence associated with CAA protests (e.g. Delhi, Uttar Pradesh) meet our human rights definition of civil unrest… Inflammatory content spiked during the peak of these protests in late 2019,” the memo noted. It added that Facebook internally designated the Delhi riots as “a hate event, acknowledging vulnerable group risks”.
Based on surveys in the aforementioned cities, the research memo also flagged the behaviour of “impunity in single-religion spaces”, noting that some “Hindu and Muslim participants felt more comfortable sharing harmful content when they believed only other members of their religion would see it”. “For both communities, WhatsApp groups were most often cited as a more comfortable space,” the memo noted.
“Most participants from Hindu and Muslim communities felt that they saw a large amount of content that encourages conflict, hatred and violence on Facebook and WhatsApp. This content primarily targeted Muslims, and Muslim users felt particularly threatened or upset,” it said.
However, the preference for single-religion spaces did not preclude public sharing. “While participants shared the preference for sharing misinformation and inflammatory content on a homogenous surface, some explicitly continued posting openly on FB (Facebook),” the memo noted.
In February 2020, four months before the internal memo was presented to teams at Facebook, New Delhi saw clashes between supporters and opponents of the Citizenship Amendment Act. The riots, which took place in north-east Delhi, left 53 people dead and more than 200 injured.
Responding to queries, a spokesperson for Meta – Facebook rebranded to Meta last month – told The Indian Express: “Enforcement against hateful content is a continuous process and we take all the inputs that we get from our teams to ensure we are able to keep users safe. Every day our teams have to balance protecting the ability of billions of people to express themselves with the need to keep our platform a safe and positive place”.
“We continue to make significant improvements to keep harmful content off of our platforms but there is no perfect solution. Our work is a multi-year journey, and we’re proud of the immense progress we’ve made. That progress is in large part due to our team’s dedication to continually understanding challenges, identifying gaps and executing on solutions. It’s a continual process that is fundamental to how we operate,” the spokesperson added.
The role of Facebook and WhatsApp groups formed by rioters to coordinate and plan the attacks, as well as to spread inflammatory rumours, also came under the lens, following which the Delhi Legislative Assembly’s Committee on Peace and Harmony summoned top Facebook India officials. The committee had sought to know Facebook’s views on the “critical role of social media in preventing the spread of false, provocative, and malicious messages that can incite violence and disharmony”.
The research note also recommended building major categories such as “hate, inflammatory, misinformation, violence and incitement” into user reporting for WhatsApp, but the messaging app has yet to roll out such reporting categories.
“WhatsApp is an industry leader among end-to-end encrypted messaging services in preventing and combating abuse and we are deeply committed to user safety. As a messaging service, WhatsApp connects people with their family, friends and contacts, quite different from social media. We’ve taken several steps to decrease the risk of problematic content going viral, potentially inciting violence or hatred including banning mass-messaging globally and reducing the number of people one can forward a message to, to just five chats at once,” the Meta spokesperson said, when asked about the internal memo that took note of WhatsApp being linked to communal violence.
Facebook-owned messaging app WhatsApp has been at the centre of a debate with the Indian government over the service being end-to-end encrypted – the administration has sought access to encrypted messages for law enforcement purposes. In May this year, the Central government notified new social media intermediary guidelines mandating significant social media intermediaries – those with more than 50 lakh users – to trace originators of messages that break the law. WhatsApp has sued the Indian government over this particular rule, arguing that it undermines user privacy.
The research note also recommended that Facebook “understand disparate impact of the platform on vulnerable groups”, including exploring the “possibility of segmentation by religious/ethnic groups”.
“In addition to our safety features and controls, we employ teams of engineers, data scientists, analysts, researchers, and experts in law enforcement, online safety, and technology developments to oversee these efforts. We enable users to block contacts and to report problematic content and contacts to us from inside the app just as we pay close attention to user feedback and engage with specialists in stemming misinformation and promoting cybersecurity,” the spokesperson said.
“WhatsApp continues to work closely with Law Enforcement agencies and is always prepared to carefully review, validate and respond to law enforcement requests based on applicable law and policy and share actionable information, including Indian Law Enforcement agencies when called upon. Specific to these incidents, WhatsApp responded to lawful requests from law enforcement agencies to assist them in their investigations,” the spokesperson said.