Australian regulator says YouTube, others ‘turning a blind eye’ to child abuse material

The report looked at how Apple, Google, Meta, Microsoft, Discord, Skype, Snap, and WhatsApp are addressing child abuse content.

The Australian government recently decided to include YouTube in its social media restrictions for teenagers. (File Photo)

Australia’s online safety regulator has said that major tech companies, including YouTube and Apple, are not doing enough to stop child sexual abuse material from appearing on their platforms.

In a report published on Wednesday, the eSafety Commissioner said YouTube and Apple had been especially unresponsive to its questions, declining to share how many user reports they receive or how long they take to act on them.

“When left to their own devices, these companies aren’t prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,” said Julie Inman Grant, Australia’s eSafety Commissioner, in comments reported by Reuters.

The Australian government recently decided to include YouTube in its social media restrictions for teenagers, after the regulator advised against giving it an exemption.

The report looked at how Apple, Google, Meta, Microsoft, Discord, Skype, Snap, and WhatsApp are addressing child abuse content. According to the findings, many platforms had significant safety gaps, including poor systems for detecting live-streamed abuse, weak mechanisms for reporting harmful content, and a failure to block links to known child abuse material.

The regulator also said some companies had not taken action even after being warned in previous years. It pointed out that not all companies were using “hash-matching” technology across their services – a tool used to detect known child abuse images by comparing them to a database.

“In the case of Apple services and Google’s YouTube, they didn’t even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on staff,” Inman Grant told Reuters.

Google has previously said it uses industry-standard tools, including hash-matching and artificial intelligence, to detect and remove abuse material. Meta, which owns Facebook, Instagram and Threads, says it bans graphic content on its platforms.
