Trump Signs Law Targeting Deepfakes and Revenge Porn—Here’s What to Know

The law makes it criminal to publish nonconsensual intimate imagery, sometimes called revenge porn, and deepfake content generated with artificial intelligence.
President Donald Trump hands a marker to first lady Melania Trump during the signing ceremony for the Take It Down Act alongside lawmakers and victims of artificial intelligence deepfakes and revenge porn in the Rose Garden of the White House on May 19, 2025. Chip Somodevilla/Getty Images
Savannah Hulsey Pointer
President Donald Trump signed the “Take It Down Act” into law on May 19, criminalizing the publication of nonconsensual intimate imagery. The legislation requires online platforms to remove such explicit content within 48 hours of a request from the individuals depicted.

“Take It Down” is an acronym for “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks.” The law is an attempt to curb the spread of nonconsensual intimate imagery and deepfake content generated with artificial intelligence (AI).

“With the rise of AI image generation, women have been harassed with deepfakes and other explicit images distributed against their will,” the president said during the signing event.
“It’s just so horribly wrong, and it’s a very abusive situation ... and today we’re making it totally illegal.”

What the Law Does

The law makes it a crime to publish nonconsensual intimate imagery, sometimes called revenge porn, or AI-generated equivalents, which have proliferated as technology capable of quickly creating such material has become widely available.

AI-generated content now circulating on the internet includes videos that, while fake, often appear deceptively authentic and use the image of a real person.

The recently signed law gives online platforms a maximum of two days to remove explicit content after the identifiable individual in the materials submits a request.

Anyone convicted of intentionally distributing explicit images without the consent of the person whose likeness is used could face prison time.

At the signing, first lady Melania Trump expressed her appreciation for the bill’s passage and warned about the dangers that children face online.

“Artificial intelligence and social media are the digital candy for the next generation: sweet, addictive, and engineered to have an impact on the cognitive development of our children,” the first lady said. “But unlike sugar, these new technologies can be weaponized, shape beliefs, sadly affect emotions, and even be deadly.”

The Law’s History

The legislation was introduced by Sens. Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) on Jan. 16 and gained further momentum from Melania Trump’s public support.

The first lady went so far as to tie it to her Be Best initiative, a campaign focused on the well-being of children.

The Senate passed the bill on Feb. 13 by unanimous consent and sent it to the House of Representatives.
The deepfake and revenge pornography legislation passed the House with near-unanimous support on April 28 in a vote of 409–2.

Rep. Maria Salazar (R-Fla.), the bill’s sponsor in the House, said it would help to prevent cyberbullying and “suicide born out of shame.”

“It is outrageously sick to use images, the face, the voice, the likeness of a young, vulnerable female, to manipulate them, to extort them, and to humiliate them publicly, just for fun, just for revenge,” Salazar said on the House floor ahead of the vote.

“And that is why we created this bill, to stop the abuse spreading like wildfire right now on social media.”

Previous Immunity

Online platforms have long enjoyed immunity from liability for user-generated content under Section 230 of the Communications Decency Act, which generally shields interactive computer services from responsibility for third-party content.

Legislative attorney Victoria Killion wrote a legal sidebar on the “Take It Down Act,” published online by the Library of Congress, that addressed the legislation’s possible impact on the application of Section 230.

According to Killion, while the act does not explicitly address the relationship between Section 230 and the act’s notice-and-removal requirements, it provides that, in certain circumstances, a “covered platform shall not be liable for any claim based on the covered platform’s good faith disabling of access to, or removal of, material claimed to be a nonconsensual intimate visual depiction.”

Killion pointed to the possibility that a service provider might argue it still enjoys Section 230 protection if it were prosecuted under the “Take It Down Act,” and there is some debate as to whether the new law is, in essence, a rollback of part of those protections.

“[This is] a question that a court may resolve as a matter of statutory interpretation ... whether the two sets of provisions can be harmonized in a way that gives effect to each,” Killion said.

Section 230 has been used to protect social media platforms in a number of high-profile cases. In one recent instance, a federal court cited the law in a Feb. 24, 2025, decision dismissing a suit brought by mothers who claimed the platforms should bear legal responsibility for allegedly dangerous “choking challenges” and other harmful content.

The plaintiffs alleged that the platforms were strictly liable for failing to remove the dangerous videos after being given notice, and their complaint recounted harrowing stories of parents who lost children who participated in the challenges.

The judge concluded that the platforms’ conduct in moderating and removing user-generated content is largely protected under Section 230.

Outsiders’ Takes

Not everyone was in favor of the legislation. Rep. Thomas Massie (R-Ky.) announced on the day of the House vote that he would not support it.
“I’m voting NO because I feel this is a slippery slope, ripe for abuse, with unintended consequences,” Massie said in a post on social media platform X.
Similarly, Jason Kelley, activism director for the Electronic Frontier Foundation, outlined his organization’s concerns when the legislation reached the House, saying the bill had serious flaws and could suppress free speech.

“The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests,” Kelley said.

“The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply depublishing the speech rather than even attempting to verify it.”

Meta, parent company of both Facebook and Instagram, came out in support of the legislation, with spokesman Andy Stone saying in March 2025 that “having an intimate image—real or AI-generated—shared without consent can be devastating and Meta developed and backs many efforts to help prevent it.”

The National Center on Sexual Exploitation lauded the bill’s passage, saying the bill becoming law is “a testament” to what people can achieve when they “overcome political differences and work together to end sexual exploitation.”

Savannah Pointer is a politics reporter for The Epoch Times. She can be reached at [email protected]