World

Australia's regulator accuses YouTube and other platforms of 'ignoring' child abuse content.

Published On Wed, 06 Aug 2025
Rohit Deshmukh

Australia’s internet safety regulator has criticized major social media companies for continuing to “turn a blind eye” to child sexual abuse material on their platforms, singling out YouTube for being unresponsive to its inquiries. In a report released on Wednesday, the eSafety Commissioner stated that YouTube and Apple failed to keep track of the number of user reports regarding child abuse content on their platforms and were unable to provide timelines for how quickly they address such reports.

Last week, the Australian government decided to include YouTube in its groundbreaking social media ban for teenagers, reversing an earlier plan to exempt Google’s video platform, based on the eSafety Commissioner’s recommendation. “When left unchecked, these companies are not prioritizing child safety and are willfully ignoring criminal activity occurring on their platforms,” said eSafety Commissioner Julie Inman Grant. “No other public-facing industry would be allowed to operate while enabling such horrific crimes against children.”

Google has previously stated that child abuse content is not tolerated on its services, highlighting its use of industry-standard methods to detect and remove such material. Meta, which owns Facebook, Instagram, and Threads, has also said it bans graphic content. The eSafety Commissioner, an agency responsible for safeguarding internet users, has required Apple, Discord, Google, Meta, Microsoft, Skype, Snap, and WhatsApp to report on their efforts to combat child exploitation and abuse material in Australia.

The report on their compliance revealed multiple “safety shortcomings” that increase the risk of child abuse content and activity on their platforms. These include failures to stop livestreams of abuse, inability to block links to known abuse material, and inadequate systems for reporting violations. Furthermore, the platforms were found not to be consistently using “hash-matching” technology across their services — a tool that scans for child abuse images by comparing them to a known database. Google has previously claimed it uses hash-matching and AI to combat abuse material.

The regulator also noted that several companies have made little to no progress in addressing these issues, despite being warned in prior years. “For Apple services and Google’s YouTube, they couldn’t even answer basic questions about the volume of user reports related to child sexual abuse, or disclose how many trust and safety staff they employ,” Inman Grant said.
