33 million content pieces removed during June 16-July 31: Facebook

These monthly reports are mandated by the Ministry of Electronics and Information Technology as per the new Intermediary Guidelines and Digital Media Ethics Code.

By: ENS Economic Bureau | New Delhi |
Updated: September 1, 2021 5:58:46 am
Facebook is used to sign in to many other apps and services, leading to unexpected domino effects such as people not being able to log into shopping websites or sign into their smart TVs, thermostats and other internet-connected devices. (Image credit: Reuters)

Facebook said Tuesday it took proactive action to remove 33.3 million pieces of content that violated any one of 10 of the platform's policies, while it acted on 2.8 million pieces of content that violated any one of eight policies on Instagram.

“Over the years, we have consistently invested in technology, people and processes to further our agenda of keeping our users safe and secure online and enable them to express themselves freely on our platform. We use a combination of Artificial Intelligence, reports from our community and review by our teams to identify and review content against our policies,” a spokesperson for Facebook said.

In its second monthly report published for the June 16-July 31 period, Facebook said it took proactive action on 25.6 million pieces of spam content, while also acting to remove 3.5 million pieces of violent or graphic content. It also said it removed 2.6 million pieces of content that had adult nudity or sexual activity.

Though the platform took action on 123,400 pieces of content involving bullying or harassment, its proactive action rate on such content remained low, at 42.3 per cent.

Facebook’s Instagram, on the other hand, took action on 1.1 million pieces of violent and graphic content, while also acting on 811,000 pieces of content depicting suicide and self-injury. Instagram does not yet have a metric to measure spam.

In its first monthly report, published in July, the social media conglomerate had said it took proactive action on 1.8 million pieces of content containing adult nudity and sexual activity, 2.5 million pieces of violent and graphic content, and about 25 million pieces of spam content.

Apart from these, Facebook also received 1,504 reports between June 16 and July 31 through its grievance mechanism, while Instagram received 265 such reports through the same channel. Both platforms acted on all the complaints received.

Meanwhile, Facebook-owned instant messaging platform WhatsApp said it had removed more than three million accounts between June 16 and July 31.

Google, in its monthly transparency reports, said it received 36,934 complaints from users and removed 95,680 pieces of content based on those complaints in July.

These monthly reports are mandated by the Ministry of Electronics and Information Technology as per the new Intermediary Guidelines and Digital Media Ethics Code. As per the intermediary guidelines announced in February and put into force from May 26, all significant social media intermediaries (those with more than 50 lakh users in India) have to publish monthly reports detailing the complaints received, the action taken on them, and the number of specific communication links that the platform removed through proactive monitoring.
