
Facebook says it removed millions of pieces of content over child nudity, terrorism and more

Facebook removed millions of pieces of content in Q2 and Q3 2019 for violating its policies on suicide and self-injury, regulated goods, child nudity and child sexual exploitation, and hate speech.

By: Tech Desk | New Delhi | Published: November 14, 2019 12:49:34 pm
Facebook revealed that the same detection systems are used on Facebook as well as Instagram to detect and remove harmful content. (Image: Reuters)

Facebook removed millions of pieces of content in Q2 and Q3 2019 for violating its content policies. This includes content that violated its policies on suicide and self-injury, regulated goods, child nudity and sexual exploitation of children, hate speech, and more.

The report comes as Facebook CEO Mark Zuckerberg faces criticism, including from the company's own employees, over its political ads policy of allowing politicians to lie in advertisements.

Apart from this, Facebook shared data on how it has enforced its content policies on Instagram for the first time. In the November 2019 edition of its Community Standards Enforcement Report, the company said the Instagram data covers four main areas: child nudity and child sexual exploitation; regulated goods, which include illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda.

In Q3, Facebook removed close to 11.6 million posts for violating its child nudity and child sexual exploitation policy, while Instagram removed about 754,000 pieces of such content. Instagram had also removed some 512,000 posts under the same policy in Q2.

Across Q2 and Q3, Facebook removed 4.5 million pieces of content related to suicide and self-injury, while Instagram removed 1.7 million such posts. In the third quarter, Facebook removed close to 841,000 posts of drug sale content and 2.3 million pieces of firearm sales content; on Instagram, the figures were 1.5 million pieces of drug sale content and 58,600 posts of firearm sales content.

Facebook’s VP of Integrity Guy Rosen said in the report that the same detection systems are used on both Facebook and Instagram to find and remove harmful content, though different metrics are used across the two platforms. The social media company relies mainly on AI to tackle harmful content.
