Twitch Platform Used to Spread Child Sexual Content: Report

Viewers are capturing short clips from minors’ livestreams in which the kids perform sexual acts.
A Twitch logo at Tokyo Game Show on Sept. 21, 2018. (Martin Bureau/AFP via Getty Images)
Naveen Athrappully
Jan. 6, 2024 | Updated: Jan. 7, 2024

Short videos depicting sexual content involving children are being spread on the Amazon-owned streaming platform Twitch, with such videos receiving thousands of views.

Twitch has a feature called “Clips” that allows users to capture short videos from livestreams that can then be edited and shared. A recent Bloomberg analysis of 1,100 clips, conducted in conjunction with the Canadian Centre for Child Protection, revealed that at least 83 of them involved sexual material related to children. Of the 83 clips, 34 showed children, mostly boys between the ages of 5 and 12, displaying their genitalia.

The children’s exhibitionism was often triggered by encouragement from livestream viewers. Those clips had been watched 2,700 times. Some of the remaining 49 clips showed children being subjected to sexual grooming, and those videos racked up 7,300 views.

When a livestream viewer captures such material, it “becomes an almost permanent record of that sexual abuse,” according to Canadian Centre Director Stephen Sauer. “There’s a broader victimization that occurs once the initial livestream and grooming incident has happened because of the possibility of further distribution of this material.”

Mr. Sauer insisted that social media firms can’t be relied upon to regulate child abuse content and called for government intervention.

“We’ve been on the sidelines watching the industry do voluntary regulation for 25 years now. ... We know it’s just not working. We see far too many kids being exploited on these platforms. And we want to see government step in and say, ‘These are the safeguards you have to put in place.’”

In a statement to Bloomberg, Twitch CEO Dan Clancy said that “youth harm, anywhere online, is deeply disturbing.” When alerted by the outlet, the company deleted the child sexual content.

“Even one instance is too many, and we take this issue extremely seriously,” Mr. Clancy said.

Twitch has made “significant updates” for detecting and removing child sexual exploitation material, according to the platform’s “Transparency Report” for the first half of 2023. It’s also “addressing evolving forms of harm, such as AI-enabled Child Sexual Abuse Material (CSAM).”

During this period, Twitch issued 13,801 enforcements for violations of its Youth Safety Policy.

However, Twitch submitted fewer tips to the U.S. National Center for Missing and Exploited Children (NCMEC). Between the second half of 2022 and the first half of 2023, the number of tips fell to 3,300 from 7,600.

The company insisted that the decrease “reflects a change in our categorization to ensure we are accurately reporting illegal content.”

“It does not represent a change in our enforcement for content that may endanger youth,” it stated.

An October 2023 study by the American Academy of Pediatrics warned that minors using Twitch are at risk of being manipulated and groomed by sexual predators.

“Twitch represents a clandestine, threatening digital environment where minors are interacting with adult strangers without parental supervision,” the study reads.

“Young users clearly feel a false sense of safety on the platform; a significant proportion were willing to reveal personal information despite having no knowledge of who might be listening.”

Representatives of Twitch didn’t respond to a request for comment from The Epoch Times by press time.

Internet Child Exploitation Material

The issue of child sexual content proliferation isn’t limited to Twitch. Many tech firms, such as Twitter, TikTok, Google, and Facebook, face similar accusations.

In February 2023, Julie Inman Grant, Australia’s eSafety commissioner, issued legal notices to Google, Twitter, and TikTok, asking them to explain what they’re doing to tackle the issue.

“The creation, dissemination, and viewing of online child sexual abuse inflicts incalculable trauma and ruins lives. It is also illegal,” she said at the time. “It is vital that tech companies take all the steps they reasonably can to remove this material from their platforms and services.”

In November 2023, a U.S. federal judge ruled that big tech social media firms must face a lawsuit accusing them of fueling a “youth mental health crisis” and facilitating the spread of child sexual content.

The companies argued that the First Amendment protected them from liability for the content they published. However, District Judge Yvonne Gonzalez Rogers pointed out that many violations alleged in the lawsuit do “not constitute speech or expression, or publication of same.”

For instance, the plaintiffs accused the social media firms of failing to provide effective parental controls, offer options for users to limit their own time on a platform, use robust age verification, or implement reporting protocols that would allow users to flag CSAM and similar material.

“Addressing these defects would not require that defendants change how or what speech they disseminate,” the judge wrote.

Lawmakers are taking action to counter the problem. In August 2023, Rep. Ann Wagner (R-Mo.) introduced the Child Online Safety Modernization Act, which would require social media firms to collaborate with law enforcement to identify children in images classified as CSAM.

“This bill will make it clear that images and videos of children being raped is not ‘pornography,’ it is sexual abuse of a child. America cannot, and should not, accept a reality where innocent children are sexually exploited for financial gain,” she said.

Meta’s rollout of default end-to-end encryption for personal messages and calls on its Messenger and Facebook platforms has also raised concerns among child welfare activists.

In a Dec. 7, 2023, statement, the Canadian Centre for Child Protection stated that Meta’s decision means that “millions of child sexual abuse and exploitation cases will cease to be reported.”

Since 2020, Meta has forwarded 74.4 million reports of suspected child sexual abuse and exploitation to the NCMEC, as required by law, the Canadian Centre stated. These reports have triggered numerous investigations by law enforcement.

Meta’s decision means that law enforcement “will lose its ability to effectively monitor these crimes unfolding across large swaths of their platforms, including Facebook and Instagram,” according to the Canadian Centre’s statement.

“NCMEC, which processes Meta’s child exploitation reports, has estimated these actions could cause as much as 70 percent of all reportable cases on its services to go undetected,” it reads.