
Processes and concerns: How does public policy in social media work?

How does content moderation work on social media sites? Some technology policy experts have called for a thick line between public policy and content moderation, while others have criticised the processes as “informal and opaque”.

Written by Karishma Mehrotra , Edited by Explained Desk | New Delhi | Updated: September 2, 2020 9:08:16 am
(Image: Bloomberg)

Last week, a committee of the Delhi Legislative Assembly took up Facebook’s alleged “inaction” in a matter involving the BJP, in the wake of a report in The Wall Street Journal that a top executive of the company in India had “opposed applying hate-speech rules” to users linked to the ruling party, citing business imperatives.

This isn’t the first time. Earlier too, Facebook has been accused of bias, of not doing enough to discourage hate speech, and of letting governments influence content decisions on the platform.

A decade of debate

Back in 2009, when Facebook had only 200 million users (fewer than its current user base in India), critics took the company to task over the presence of Holocaust deniers on the platform.

One of the first governments to take up the issue was Germany. In 2015, Chancellor Angela Merkel discussed rising xenophobic attacks on refugees with CEO Mark Zuckerberg. That same year, German prosecutors launched an investigation against the company’s top executive in the country; Der Spiegel reported that this was the first time a government had investigated Facebook representatives for abetment of violent speech. Germany went on to pass a law against online hate speech in 2017.

In the run-up to the 2016 presidential election in the United States, Facebook allowed videos posted by then-candidate Donald Trump that violated its guidelines, saying the content was “an important part of the conversation around who the next US President will be”. The issue returned in the ongoing election campaign after Facebook refused to take down a post by Trump about the Black Lives Matter protests in May.

In 2018, Facebook admitted to having failed to prevent human rights abuses on the platform in Myanmar. Civil society groups had been sounding alarms since 2014; these were eventually taken up by the United Nations and the US Congress. In November that year, Facebook said in a statement that they “weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence” in Myanmar.

The company had then commissioned an external human rights impact assessment of its role in Myanmar; it is now learnt to have asked at least one legal office in India to conduct a human rights audit of its presence in this country.

Also read | Before 2019 polls, BJP flagged 44 ‘rival’ pages, 14 now off Facebook

Facebook has often said that it should not be in charge of making these decisions, and has attempted to outsource some of the decision-making to third-party fact-checking entities such as BOOM Live. In the face of backlash, it has continuously promised to do more. The company plans to allow some content decisions to be appealed before a new independent Oversight Board, though that would apply only to content that is taken down, not to content that was flagged but left up.

Internet regulations in several countries (including Section 230 of the Communications Decency Act in the US and Section 79 of The Information Technology Act in India) protect platforms from immediate liability for user content, ostensibly to guard against over-censoring. Amid rising frustration, however, governments are threatening those legal protections – for example, through the executive order signed by Trump in May, and the amendments to the Intermediary Liability Guidelines pending with the IT Ministry. In Australia, tech executives can now be jailed if their platforms do not swiftly remove violent video content.

Facebook Chairman and CEO Mark Zuckerberg (Reuters Photo: Erin Scott)

How content moderation works

The vast majority of rule-violating content on Facebook is flagged by algorithms. Users can also flag content manually; those reports are routed to content moderators working in contracted offices all over the world.

In 2019, the company took down 20 million posts for hate speech, according to its own reports. Roughly four-fifths of these (16 million) were taken down algorithmically before any user reported them.

The remaining 4 million were flagged by contracted content moderators and/or escalated to content operations teams within Facebook’s own offices.
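Using only the figures reported above, a quick back-of-the-envelope calculation (sketched here in Python) shows how that split works out:

    # Figures cited above: Facebook's reported hate-speech removals for 2019
    total_removed = 20_000_000   # posts taken down for hate speech in 2019
    proactive = 16_000_000       # removed by automated detection before any user report
    user_reported = total_removed - proactive

    print(f"Proactively detected: {proactive / total_removed:.0%}")   # -> 80%
    print(f"Flagged by users/moderators: {user_reported:,} posts")    # -> 4,000,000 posts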

If escalated content reaches Facebook employees on a team called ‘Strategic Response’, they can rope in other internal teams, such as legal. Public policy is often involved if certain sensitive risk factors, such as political risk, need to be considered, sources said.

If internal groups disagree, the decision is escalated, potentially all the way up to Zuckerberg himself.
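The escalation chain described above can be pictured, very loosely, as a pipeline: flagged content goes to contracted moderators, sensitive cases are escalated to internal teams, and disagreements travel up to global leadership. The sketch below is a hypothetical Python illustration of that flow; the names (FlaggedPost, moderator_review, escalation_chain) and the decision rules are invented for this example and do not describe Facebook’s actual systems.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Decision(Enum):
        REMOVE = auto()
        KEEP = auto()
        ESCALATE = auto()

    @dataclass
    class FlaggedPost:
        post_id: str
        flagged_by: str              # "classifier" or "user_report"
        policy_area: str             # e.g. "hate_speech"
        politically_sensitive: bool = False

    def moderator_review(post: FlaggedPost) -> Decision:
        """First-line review by contracted moderators against written policy."""
        # Placeholder rule: politically sensitive cases are escalated to
        # internal teams; clear-cut hate speech is removed outright.
        if post.politically_sensitive:
            return Decision.ESCALATE
        return Decision.REMOVE if post.policy_area == "hate_speech" else Decision.KEEP

    def escalation_chain(post: FlaggedPost) -> Decision:
        """Moderators -> 'Strategic Response' / content operations -> legal and
        public policy input -> global leadership if internal teams disagree."""
        first = moderator_review(post)
        if first is not Decision.ESCALATE:
            return first

        # Internal teams weigh in; each returns its own recommendation.
        opinions = {
            "content_operations": Decision.REMOVE,
            "legal": Decision.REMOVE,
            "public_policy": Decision.KEEP,   # e.g. political-risk concerns
        }
        if len(set(opinions.values())) == 1:
            return next(iter(opinions.values()))
        # Disagreement pushes the decision up to global leadership.
        return global_leadership_call(post)

    def global_leadership_call(post: FlaggedPost) -> Decision:
        # Final, centralised call; per the article, this can go as high as the CEO.
        return Decision.REMOVE

    post = FlaggedPost("p123", "user_report", "hate_speech", politically_sensitive=True)
    print(escalation_chain(post))   # Decision.REMOVE, after escalation

The hard-coded “opinions” dictionary stands in for the internal deliberation the sources describe; in reality each recommendation would come from people applying policy and legal judgment, not a lookup table.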

When Facebook India escalated the February Delhi riots as an event under the “Dangerous Individuals and Organisations” policy, the designation ultimately required sign-off from the global team. Once the label was applied, any content promoting the riots was blocked globally.

Other companies function similarly.

Also read | Delhi Assembly panel links riots to social media messages

Former and current Google executives told The Indian Express that content moderation and public policy remain separate, but that public policy units are allowed to provide input or guide decisions, especially if a decision could lead to significant political or government backlash.

Twitter officials said their verticals remain completely separate.


The Wall Street Journal report accused a top executive of Facebook in India of having “opposed applying hate-speech rules” to users linked to the BJP.

The situation in India

The WSJ report said Facebook’s top public policy executive in India cited government-business relations while opposing the application of hate-speech rules to BJP-linked individuals and groups that had been internally flagged for violent speech. A Facebook spokesperson told The WSJ that the executive’s concerns were only one input in the deliberation. The report’s findings on a “pattern of favoritism” towards the BJP call into question the personal political leanings of company officials, and their ability to dominate content deliberations in these processes.

Some technology policy experts have called for a thick line between public policy and content moderation, while others have criticised the processes as “informal and opaque”.

“Traditionally, public policy roles have entailed government relations. This model doesn’t extend itself well to social platforms though,” said Prateek Waghre, Technology Policy Research Analyst at The Takshashila Institution. “There is an inherent conflict when government relations and policy enforcement go to the same set of people or organisation units.”

Read | On ‘bias in enforcing policy’, Facebook says: Denounce hate, bigotry

But many current company executives believe that a public policy team’s input from a government perspective may be warranted, for example in the case of the pandemic. “It does not have to result in a negative outcome,” said a public policy employee at one of the major technology companies. “But, local public policy executives do not call the shots alone. The buck stops at global headquarters.”

“The real concern is the lack of public transparency and insufficient internal accountability in decision-making when it comes to content moderation,” said Udbhav Tiwari, Policy Advisor, Mozilla. He believes that while public policy teams provide valuable local context, current processes can often be ad-hoc.

“The problem in the recent revelations regarding Facebook India is public policy teams having the apparent ability to veto or block these decisions based exclusively on business considerations. There should be a publicly documented process which allows competing equities within companies to input into these decisions,” he said.

Chinmayi Arun, fellow at Harvard’s Berkman Klein Center for Internet & Society, wrote in 2018: “Freedom of expression is eroded by global platforms’ reaction to the threats to their businesses in markets around the world. The threats are delivered through shutdowns, potential data localization law and other means. The platforms are reacting by self-regulating in a manner that imperils free expression while failing to truly address the problem of hyper-local harmful speech.”

Editorial | Social media platforms need to work with government but they must be seen to be agnostic to ideology


