A new federal law imposes strict takedown rules for AI-generated intimate imagery, signaling a turning point in GenAI content regulation. This post covers the newly passed Take It Down Act and everything enterprises need to know to stay ahead of this evolving legal obligation.
Between 2022 and 2024, ActiveFence’s researchers recorded a 105% surge in AI-generated non-consensual intimate imagery (NCII), more than doubling in just two years. This sharp rise is not incidental. It reflects the rapid democratization of generative AI (GenAI) technologies, which has made nudifying bots, deepfake apps, and image manipulation websites widely accessible. Often free and requiring little to no technical skill, these tools are increasingly being weaponized to target women and minors, resulting in severe reputational damage, psychological trauma, depression, and, in some cases, self-harm.
But the threat doesn’t stop with fringe platforms. The abuse is being amplified by public-facing GenAI deployments, applications that, in the wrong hands, can be manipulated through prompts to produce grooming instructions, enable sextortion, or generate intimate deepfakes. These tools empower bad actors while evading traditional content moderation frameworks designed for known abuse types.
For years, law and regulation lagged behind this threat. That changed on May 19, 2025, when President Trump signed the Take It Down Act into federal law. This landmark regulation introduces enforceable standards and protections aimed at curbing the spread of NCII, including content generated by AI.
The Take It Down Act introduces federal protections against the non-consensual distribution of intimate imagery, including AI-generated and digitally altered content such as deepfakes. It creates both criminal liability for individuals and compliance requirements for online platforms, with enforcement falling to the Federal Trade Commission (FTC).
While many states have already banned the sharing of sexually explicit deepfakes, often classified as “revenge porn,” this law stands out as one of the few instances where federal regulators impose direct obligations on internet companies. It sets a new baseline for how platforms must respond to this category of abuse, elevating the issue from a patchwork of state laws to a unified federal standard.
Although the law’s scope is limited to U.S.-based users and companies, its impact may extend globally. As with other major U.S. regulations, platforms operating internationally are likely to align their policies and reporting procedures to comply more broadly.
The Act is intentionally victim-centric. It removes traditional legal barriers to reporting and mandates action without requiring litigation.
To submit a valid removal request, a user must provide:

- A physical or electronic signature of the depicted individual, or of someone authorized to act on their behalf
- Identification of, and information reasonably sufficient for the platform to locate, the intimate imagery
- A brief statement of good faith belief that the imagery was published or distributed without consent
- Information sufficient for the platform to contact the requester
No court order, legal representation, or proof of harm is required. As a result, claims can be submitted quickly and at scale, significantly increasing the operational burden on platforms facing potentially high volumes of requests.
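To make the operational picture concrete, here is a minimal sketch of how a platform might model an incoming removal request and track its statutory deadline. The `RemovalRequest` structure and its field names are our own illustrative assumptions; the Act prescribes what a notice must contain, not how platforms store or process it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# The Act requires platforms to review and act within 48 hours of receipt.
TAKEDOWN_WINDOW = timedelta(hours=48)

@dataclass
class RemovalRequest:
    """Hypothetical intake record mirroring the Act's notice requirements."""
    signature: str             # physical or electronic signature of the requester
    content_locator: str       # URL or identifier sufficient to locate the imagery
    good_faith_statement: str  # brief statement that the depiction is non-consensual
    contact_info: str          # how the platform can reach the requester
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def due_at(self) -> datetime:
        # Deadline for the platform's response, per the 48-hour rule.
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.due_at
```

Because no external validation is required before a claim is filed, queues of such records can grow quickly; tracking a `due_at` per request is what makes the 48-hour clock auditable at scale.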
Once submitted, platforms must review the request and act within 48 hours. They are also expected to apply “reasonable efforts” to identify and remove all known identical copies. While the Act does not mandate specific tools, the use of automated detection systems for duplicate content is strongly implied as the benchmark for “reasonable efforts.” Neglecting to use such tools, especially for platforms with large volumes of user-generated content, could open the door to regulatory enforcement or liability. Therefore, adopting these tools is both a best practice and a legal risk mitigation strategy.
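As one simplified illustration of what automated duplicate detection can look like, the sketch below uses perceptual hashing via the open-source Pillow and imagehash Python libraries. The match threshold and the idea of a maintained list of hashes of previously removed content are our assumptions, not requirements of the Act.

```python
from pathlib import Path

import imagehash          # pip install imagehash pillow
from PIL import Image

# Hamming-distance threshold for treating two hashes as the "same" image.
# The value 8 is an illustrative choice; production systems tune this carefully.
MATCH_THRESHOLD = 8

def phash_file(path: Path) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives resizing and re-encoding."""
    with Image.open(path) as img:
        return imagehash.phash(img)

def matches_known_content(
    upload: Path, known_hashes: list[imagehash.ImageHash]
) -> bool:
    """Return True if an upload matches any hash of previously removed content."""
    candidate = phash_file(upload)
    # Subtracting two ImageHash values yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in known_hashes)
```

Perceptual hashes tolerate resizing, re-encoding, and minor edits, which is why they are a common first line of defense against re-uploads; exact cryptographic hashes would miss even trivially altered copies.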
The Act directly affects platforms and services that host, generate, or facilitate the sharing of non-consensual intimate imagery, including AI-generated deepfakes. This may apply to a broad range of technologies and services, including:

- Social media and other platforms hosting user-generated content
- Public-facing GenAI applications capable of generating or altering images and video
- Messaging services, forums, and websites where such content can be shared
Key legal risks now include:

- Criminal penalties for individuals who knowingly publish non-consensual intimate imagery, including AI-generated deepfakes
- FTC enforcement against platforms that fail to remove reported content within the 48-hour window
- Liability for failing to make reasonable efforts to identify and remove known identical copies
Further, the Act includes a specific exemption from Section 230 of the Communications Decency Act, which traditionally shields platforms from liability for user-generated content. Platforms are now potentially liable for hosting or failing to remove non-consensual intimate imagery, including deepfakes, even if they didn’t create it.
From an operational standpoint, many enterprises will find themselves underprepared. Manual review of user-submitted reports, identity verification, and content correlation across surfaces is labor-intensive and error-prone. Worse, emerging AI-generated content is often novel enough to bypass detection by legacy trust & safety systems.
While the legislation does not apply to non-intimate deepfakes, such as those used for misinformation or satire, it sets a clear precedent for broader regulation of GenAI misuse. The legal landscape around AI-generated content is still largely undeveloped, but momentum is building. The Take It Down Act is a meaningful step, and it signals that additional regulatory efforts are likely to follow.
Enforcement standards, including definitions, appeals, and reporting protocols, are still being shaped. Enterprises should expect evolving guidance from the FTC and plan for continued adjustments to compliance workflows.
At ActiveFence, we work closely with leading AI deployers and have direct insight into how AI-generated NCII is evolving. We also see that many organizations are not yet equipped to detect, respond to, or prevent this type of abuse.
In our assessment, this law is not a one-time measure. It marks the beginning of a more assertive approach to content and AI regulation in the United States, especially in areas involving intimate and personal harm. Future laws may broaden the scope of regulated content, increase penalties, or shorten the time allowed for response.
Enterprises deploying GenAI should now view content abuse detection, reporting procedures, and mitigation mechanisms as foundational elements of their systems. These are no longer optional features to be added later.
Staying compliant, protecting users, and maintaining trust will increasingly depend on early and ongoing alignment with legal and safety expectations.
Need help preparing for the next era of AI and internet safety regulation?