‘Deepfake’ Danger: Children Who Exploit Other Kids Online Using Pornographic Imagery

Sens. Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.) speak at a news briefing after a hearing of the Senate Judiciary Committee on Jan. 31, 2024. The committee heard testimony from the heads of the largest tech firms on the dangers of child sexual exploitation on social media. (Alex Wong/Getty Images)
Emma Waters
2/19/2024
Updated: 2/19/2024
Commentary
In a show of bipartisan outrage at a recent Senate Judiciary Committee hearing, Democratic Sen. Dick Durbin and Republican Sen. Lindsey Graham castigated the CEOs of Big Tech companies for their failure to protect children online.
Durbin (D-Ill.), the panel’s chairman, didn’t hold back. At the Jan. 31 hearing, he alleged that the CEOs of Meta (parent company of Facebook and Instagram), TikTok, Snapchat, Discord, and X (formerly Twitter) are in a “constant pursuit of profit over basic safety that puts our kids at risk.”

He went on to warn that “social media and messaging apps have given predators powerful new tools to sexually exploit children.”

The Department of Justice defines child sexual abuse material as “sexually explicit images or videos of a child’s abuse, rape, molestation and/or exploitation.” It’s already a scourge, but with artificial intelligence, it’s now easier than ever to mass produce.

For a long time, the creation and distribution of such material depended on perverse adults, sophisticated technology, and lax social media platform moderation. Now, however, a new culprit has emerged: other children.

After years of accessing sexually explicit material, minors have begun to use artificial intelligence (AI) tools to create pornographic “deepfakes” of their peers and classmates. This disturbing development—children sexually exploiting other children—has been enabled by social media platforms and by easy access to pornographic websites. That’s just what many state and federal laws are designed to prevent.

These AI-generated photos and videos, dubbed “deepfakes,” can be produced in a matter of minutes on a multitude of apps and websites. The technology is simple. Anyone can use “face swap” on ready-to-use apps, such as DeepSwap and FaceSwapper, to place someone else’s likeness in a sexually explicit photo or video.
According to an NBC review, websites that host deepfake sexual material are easily accessible through Google. What’s worse, deepfake websites known for hosting pornographic content run advertisements on messaging apps used by children, such as Discord. With the click of a few buttons, children can find themselves viewing, and later creating, deepfake pornography.
Over the past several years, countless women have been the victims of deepfake pornography. Victims range from ordinary students to celebrities, including Taylor Swift.
Heartbreaking stories have emerged of male students using this technology to bully female classmates, or perhaps just to entertain their friends. The results can be deadly. Mia Janin, a 14-year-old British schoolgirl, was humiliated when she found her face pasted onto the body of another woman. The unknown woman—a victim herself—was engaging in sexually explicit acts. To the casual onlooker, however, it appeared to be Mia herself.

Male classmates used AI applications to create sexual abuse material of Mia that they circulated around the school using Snapchat. Unable to bear the subsequent bullying, Mia took her own life.

In another example, 14-year-old New Jersey student Francesca Mani found herself in a similar situation. Male classmates used an AI photo-editing app—available on Apple and Google app stores—to generate pictures of nude women with Francesca’s face on them.

The photos were distributed among minors using social media platforms and unfiltered access to smartphone app stores.

According to Home Security Heroes’ report on the “2023 State of Deepfakes,” the number of deepfakes online increased by 550 percent from 2019 to 2023, and 98 percent of the deepfake videos found in the past year were pornographic. Without legal restrictions, these numbers will only continue to rise.
Children are imitators by nature, and they are particularly susceptible to the influence of online social media accounts. Kids imitate the men and women in their lives, beginning with their own father and mother, to learn what it means to be a good person.
That’s a feature, not a bug, in a child’s development, but it can be hijacked by bad actors, including social media platforms. When children stumble upon sexually explicit material, and even learn to crave it, we should not be surprised to see them imitate it through their actions or AI creations.
The responsibility lies first with parents to play an active role in monitoring whom and what their children can access online. But it doesn’t end there. The pervasive nature of these online tools demands that legislators and social media platforms also commit to ensuring that children are protected online.

As Graham (R-S.C.), the committee’s ranking member, said at the hearing, “These companies must be reined in, or the worst is yet to come.” Unfortunately, for families like Mia’s, it already has.

Congress and parents must unite in their effort to protect children from the corrosive influence of AI-generated child sexual abuse material.

Reprinted by permission from The Daily Signal, a publication of The Heritage Foundation.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Emma Waters is a research associate with the DeVos Center for Life, Religion, and Family at The Heritage Foundation.