7 Big Tech Firms Criticised Over Not Doing Enough To Stop Child Exploitation: Australian Regulator

Logos of the Big Tech giants are displayed on a tablet on Oct. 1, 2019. (Denis Charlet/AFP via Getty Images)
12/15/2022
Updated: 12/15/2022

Seven of the world’s largest Big Tech firms have been told by the Australian government to do more to tackle online child sexual exploitation after a report by the eSafety Commissioner found their responses inadequate.

Apple, Meta (Facebook and Instagram), Snap, Microsoft, WhatsApp, Skype, and Omegle were reprimanded by eSafety Commissioner Julie Inman Grant, who said in a media release on Dec. 15 that the report’s findings were “very disturbing” and that the firms needed to do more to address the “scourge of online child sexual exploitation.”

“We’re talking about illegal content that depicts the sexual abuse of children—and it is unacceptable that tech giants with long-term knowledge of extensive child sexual exploitation, access to existing technical tools, and significant resources are not doing everything they can to stamp this out on their platforms,” Inman Grant said.

“We don’t need platitudes; we need to see more meaningful action.”

eSafety Commissioner Julie Inman Grant during Senate Estimates at Parliament House in Canberra, Australia, on Feb. 15, 2022. (AAP Image/Mick Tsikas)
This comes after the regulator issued legal notices to the seven firms in August under the country’s Online Safety Act 2021 and its Basic Online Safety Expectations, compelling the companies to answer questions about how they were dealing with the problem.

They were given 28 days to respond to the notice or risk fines of up to $550,000 a day.

At the time, the regulator said the country had seen a surge in reports of child sexual exploitation from the start of the pandemic, adding that “technology was weaponised to abuse children.”

“The harm experienced by survivors is perpetuated when platforms and services fail to detect and remove the content,” the regulator said.

“We know there are proven tools available to detect this horrific material and stop it being recirculated, but many tech companies publish insufficient information about where or how these tools operate and too often claim that certain safety measures are not technically feasible.”

Apple, Microsoft Highlighted By Commissioner

The regulator found that two of the world’s largest tech firms, Apple and Microsoft, do not attempt to proactively detect child abuse material stored on iCloud and OneDrive services.

This is despite the common availability of PhotoDNA detection technology, which was originally developed by Microsoft. It is now used by tech companies around the world to scan for known child sexual abuse images and videos, with a false positive rate of one in 50 billion, the commissioner said.

Apple and Microsoft also admitted that they do not use any technology to detect live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams, or FaceTime—despite Skype being a commonly used platform.

However, Microsoft received praise from the commissioner for letting users report child sexual exploitation from within its services.

“There is no in-service reporting on Apple or Omegle, with users required to hunt for an email address on their websites—with no guarantees they will be responded to,” Inman Grant said.

“Fundamental to safety by design and the Basic Online Safety Expectations are easily discoverable ways to report abuse. If it isn’t being detected and it cannot be reported, then we can never really understand the true scale of the problem.”

The regulator also unearthed large differences in how rapidly the tech companies responded to reports of child sexual exploitation and abuse on their platforms, with response times ranging from an average of four minutes for Snapchat to two days for Microsoft.

“Speed isn’t everything, but every minute counts when a child is at risk,” she said.

Grooming was also spotlighted, with Microsoft, Skype, Snap, and Apple admitting to the regulator that they do not use any tools to help detect it on their platforms, including Outlook.com, Teams, OneDrive, Skype messaging, Snapchat’s direct chat and snaps, and Apple’s iMessage.

However, Xbox Live does have tools to help detect this form of abuse.

Meta, WhatsApp Struggling to Stop Repeat Offenders

The report also noted that firms like Meta and WhatsApp struggle with repeat offenders, with Meta noting in its response that when an account is banned on Facebook, the ban does not always carry over to Instagram. Likewise, when a user is banned on WhatsApp, that information is not passed on to Facebook or Instagram.

“This is a significant problem because WhatsApp report they ban 300,000 accounts for child sexual exploitation and abuse material each month—that’s 3.6 million accounts every year,” Inman Grant said.

“What’s stopping all those offenders creating new accounts on Facebook or Instagram and continuing to abuse children?”

Victoria Kelly-Clark is an Australian-based reporter who focuses on national politics and the geopolitical environment in the Asia-Pacific region, the Middle East, and Central Asia.