Updated: October 25, 2021 12:41:53 pm
A test user account created by a Facebook researcher in Kerala two years ago, which encountered several instances of hate speech and misinformation through algorithmic recommendations, led the company to undertake a “deeper, more rigorous analysis” of its recommendation systems in India, the social media platform said.
Facebook was responding to queries from The Indian Express about The New York Times report on the effects of the social media platform in India, especially in the run-up to the 2019 general elections.
“This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them. Product changes from subsequent, more rigorous research included things like the removal of borderline content and civic and political Groups from our recommendation systems,” a Facebook spokesperson said.
“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include 4 Indian languages,” the spokesperson said.
The New York Times reported that the researcher’s report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India.
“The internal documents, obtained by a consortium of news organisations that included The New York Times, are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistleblower and recently testified before a Senate subcommittee about the company and its social media platforms,” it said.
“References to India were scattered among documents filed by Haugen to the Securities and Exchange Commission in a complaint earlier this month,” it said.
Facebook’s changes build on restrictions the company says it has already placed on recommendations, such as removing health groups, as well as groups that repeatedly share misinformation, from its recommendation surfaces.
For groups that share misinformation, specifically, the company has started ranking all their content lower in the News Feed and limiting notifications, so that fewer members see their posts.
The platform also said it witnessed “new types of abuse on Facebook” in the context of Covid, and updated its policies to reflect the changes.
For example, the company said, it now removes content claiming that people who share a protected characteristic, such as race or religion, have the virus, created the virus or are spreading the virus. As part of enforcing this policy, the company says it has blocked several hashtags that were primarily being used to target the Muslim community in the context of Covid.