Politics
Federal Law Criminalizes Non-Consensual Deepfakes After Major Advocacy

Washington, D.C. — President Donald Trump signed the Take It Down Act on May 19, 2025, enacting a federal law that criminalizes the sharing of non-consensual, explicit deepfakes. The legislation responds to rising concern about AI-generated explicit images that harm individuals, especially women and young people.
The law makes it illegal to share non-consensual explicit images, whether authentic or AI-generated, and requires tech platforms to remove such content within 48 hours of being notified. It is meant to give victims of revenge porn and AI-generated sexual imagery clearer protections while increasing accountability for tech companies.
“AI is new to a lot of us and we’re still figuring out what is harmful,” said Ilana Beller of Public Citizen, an advocacy group that endorsed the legislation. “Non-consensual intimate deepfakes are a clear harm with no benefit.”
The bill drew broad bipartisan support, passing both chambers of Congress with only two dissenting votes. More than 100 organizations, including major tech firms such as Meta and TikTok, backed the measure. First lady Melania Trump also championed the bill, publicly urging Congress to pass it.
The Take It Down Act was inspired by several high-profile cases, including that of teenager Elliston Berry, whose image was manipulated and shared without her consent. Berry remarked, “Every day I’ve had to live with the fear of these photos getting brought up. By this bill getting passed, I will no longer have to live in fear.”
Beyond criminalizing the sharing of such images, the law closes gaps in federal legislation regarding the production and distribution of non-consensual explicit content. Previously, federal law addressed only material involving children, and protections for adults were applied inconsistently across states.
Some tech platforms had already begun addressing deepfakes before the law’s passage. Google and Snapchat, for example, offer tools that let users request the removal of non-consensual explicit images. Advocacy groups, however, stress that strong legal measures are still needed to deter misuse.
Imran Ahmed, CEO of the Center for Countering Digital Hate, stated, “This legislation finally compels social media platforms to do their jobs and protect women from invasive breaches of their rights.” As AI technology continues to evolve, advocates hope this law will set a precedent in safeguarding individuals from its misuse.