Australia’s Regulator Condemns Apple, Microsoft for ‘Turning Blind Eye’ to Child Exploitation, Abuse Online

Social media apps on a smartphone in this file photo. (Chandan Khanna/AFP via Getty Images)
Katabella Roberts
Dec. 15, 2022

Australia’s eSafety Commissioner, the country’s independent regulator for online safety, has taken aim at tech giants Apple and Microsoft for allegedly “turning a blind eye” to child exploitation on iCloud and OneDrive.

In a press release on Dec. 15, eSafety Commissioner Julie Inman Grant said the regulator had issued legal notices to some of the world’s biggest technology companies in August, requiring them to report on how they are working to tackle child abuse material and grooming on their services.

The notices went to Apple, Facebook parent company Meta, Microsoft, Skype, Snap, and others.

According to the eSafety Commissioner, the responses showed that both Apple and Microsoft were failing to proactively detect child abuse material and exploitation in their cloud storage services, iCloud and OneDrive.

That is despite the wide availability of PhotoDNA detection technology, according to the commissioner.

“PhotoDNA was developed by Microsoft and is now used by tech companies around the world to scan for known child sexual abuse images and videos, with a false positive rate of 1 in 50 billion,” Inman Grant said.

eSafety Commissioner Julie Inman Grant during Senate Estimates at Parliament House in Canberra, Australia, on Feb. 15, 2022. (AAP Image/Mick Tsikas)

Apple Drops Plans to Scan iCloud Photos for Child Sexual Abuse

The independent regulator also said that Apple and Microsoft reported using no technology to detect the live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams, or FaceTime, even though Skype in particular is often used to carry out this crime.

Microsoft does, however, offer in-service reporting, unlike Apple and Omegle, the report noted.

The eSafety Commissioner’s findings come shortly after reports that Apple had dropped its controversial plan, first announced in August 2021, to scan users’ photos stored in iCloud for child sexual abuse material.

The decision reportedly came amid concerns that such scanning technology could be abused for broader surveillance.

“This report shows us that some companies are making an effort to tackle the scourge of online child sexual exploitation material, while others are doing very little,” Inman Grant said. “It is unacceptable that tech giants with long-term knowledge of extensive child sexual exploitation, access to existing technical tools, and significant resources are not doing everything they can to stamp this out on their platforms.”

Under the Online Safety Act 2021 and the Basic Online Safety Expectations, the regulator can legally require tech companies to provide information about child exploitation and child abuse material on their platforms and sites.
Firms that fail to respond within 28 days risk fines of up to $550,000 a day.

Glaring Disparities in Response Time to Reports of Child Sexual Exploitation

The report also found wide disparities in how long companies take to respond to user reports of child sexual exploitation and abuse. Messaging app Snap takes four minutes on average to respond to such reports, while Microsoft can take up to two days, or as long as 19 days if a report needs to be reviewed again.

Elsewhere, the report noted problems with accounts that had been banned for sharing child sexual exploitation and abuse material, pointing out that on certain platforms the same users can easily create new accounts.

Singling out Meta, which owns Facebook and Instagram, the independent regulator said its report found that even if an account is banned on Facebook, the same user may still be able to set up an account on Instagram. Similarly, when an account is banned on WhatsApp, the account owner’s information is not shared with Facebook or Instagram.

“This is a significant problem because WhatsApp report they ban 300,000 accounts for child sexual exploitation and abuse material each month—that’s 3.6 million accounts every year,” Inman Grant said.

A Microsoft spokesperson said the company was committed to combating the rise of child abuse material, but “as threats to children’s safety continue to evolve and bad actors become more sophisticated in their tactics, we continue to challenge ourselves to adapt our response.”

The Epoch Times has contacted Apple for comment.