The Take It Down Act: All You Need to Know

May 20, 2025


A new federal law imposes strict takedown rules for AI-generated intimate imagery, signaling a turning point in GenAI content regulation. This post covers the newly passed Take It Down Act and everything enterprises need to know to stay ahead of this evolving legal obligation.

 

The Deepfake Crisis: A Legal, Operational, and Ethical Tipping Point

Between 2022 and 2024, ActiveFence’s researchers recorded a 105% surge in AI-generated non-consensual intimate imagery (NCII), more than doubling in just two years. This sharp rise is not incidental. It reflects the rapid democratization of generative AI (GenAI) technologies, which has made nudifying bots, deepfake apps, and image manipulation websites widely accessible. Often free and requiring little to no technical skill, these tools are increasingly being weaponized to target women and minors, resulting in severe reputational damage, psychological trauma, depression, and, in some cases, self-harm.

But the threat doesn’t stop with fringe platforms. The abuse is amplified by public-facing GenAI deployments: applications that, in the wrong hands, can be prompted to produce grooming instructions, enable sextortion, or generate intimate deepfakes. These tools empower bad actors while evading traditional content moderation frameworks designed for known abuse types.

For years, law and regulation failed to keep pace. That changed on May 19, 2025, when President Trump signed the Take It Down Act into federal law. This landmark regulation introduces enforceable standards and protections aimed at curbing the spread of NCII, including content generated by AI.

 

What the Take It Down Act Establishes

The Take It Down Act introduces federal protections against the non-consensual distribution of intimate imagery, including AI-generated and digitally altered content such as deepfakes. It creates both criminal liability for individuals and compliance requirements for online platforms, with enforcement falling to the Federal Trade Commission (FTC).

While many states have already banned the sharing of sexually explicit deepfakes, often classified as “revenge porn,” this law stands out as one of the few instances where federal regulators impose direct obligations on internet companies. It sets a new baseline for how platforms must respond to this category of abuse, elevating the issue from a patchwork of state laws to a unified federal standard.

Although the law’s scope is limited to U.S.-based users and companies, its impact may extend globally. As with other major U.S. regulations, platforms operating internationally are likely to align their policies and reporting procedures to comply more broadly. 

Key Legal Provisions:
  • Criminal Liability for Distribution: Knowingly sharing or threatening to share intimate images without consent is now a federal crime. Penalties include:
    • Up to 2 years imprisonment for adult-related content
    • Up to 3 years for content involving minors
  • Platform Responsibility: Online platforms, regardless of size, are required to:
    • Remove reported NCII within 48 hours of notification
    • Prevent reuploads through “reasonable efforts,” such as duplicate detection mechanisms
  • FTC Enforcement: Non-compliance may result in civil penalties of up to ~$51,000 per violation. Importantly, the definition of a “violation” remains vague. If interpreted broadly (e.g., per user, per image, per day), platforms could face cumulative fines reaching into the millions: at ~$51,000 each, 200 unresolved image reports counted as separate violations would already exceed $10 million.
  • Effective Date: While criminal provisions are effective immediately, platforms have until May 2026 to implement compliant takedown mechanisms.

The User Takedown Process: A Low Threshold, High Operational Burden

The Act is intentionally victim-centric. It removes traditional legal barriers to reporting and mandates action without requiring litigation.

To submit a valid removal request, a user must provide:

  • A statement of non-consent (malicious intent need not be shown)
  • Proof of identity and confirmation that they are the depicted individual (e.g., government ID or selfie)
  • URLs or screenshots of the content
  • (Optionally) uploader details or relevant timestamps

No court order, legal representation, or proof of harm is required. As a result, claims can be submitted quickly and at scale, significantly increasing the operational burden on platforms facing potentially high volumes of requests.
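
For engineering teams building the intake side of this process, the required fields map naturally onto a simple record type with the 48-hour deadline attached. Below is a minimal sketch in Python of what such a removal-request object might look like; the field names, types, and helper methods are illustrative assumptions, not a schema mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class RemovalRequest:
    """Illustrative model of a takedown request; field names are assumptions."""
    statement_of_nonconsent: str            # required; malicious intent need not be shown
    identity_proof_ref: str                 # required; e.g., reference to a verified ID or selfie
    content_locations: list[str]            # required; URLs or screenshot references
    uploader_details: Optional[str] = None  # optional per the Act
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def response_deadline(self) -> datetime:
        # Platforms must act within 48 hours of notification.
        return self.received_at + timedelta(hours=48)

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.response_deadline
```

Keeping the deadline on the record itself makes it straightforward to drive queue prioritization and alerting from a single source of truth.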

Once submitted, platforms must review the request and act within 48 hours. They are also expected to apply “reasonable efforts” to identify and remove all known identical copies. While the Act does not mandate specific tools, the use of automated detection systems for duplicate content is strongly implied as the benchmark for “reasonable efforts.” Neglecting to use such tools, especially for platforms with large volumes of user-generated content, could open the door to regulatory enforcement or liability. Therefore, adopting these tools is both a best practice and a legal risk mitigation strategy.
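
To make “reasonable efforts” concrete, here is a minimal duplicate-detection sketch built on perceptual hashing, using the open-source imagehash and Pillow Python packages. This is one assumed approach for illustration, not a mechanism prescribed by the Act; production systems typically layer hashing with ML-based similarity matching to catch re-encoded or lightly edited copies.

```python
# Minimal duplicate-detection sketch (assumes: pip install imagehash Pillow).
import imagehash
from PIL import Image

# Hamming-distance threshold for treating two images as duplicates.
# The value 8 is an illustrative assumption; real systems tune it empirically.
DUPLICATE_THRESHOLD = 8

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives resizing and re-encoding."""
    return imagehash.phash(Image.open(path))

def is_known_duplicate(candidate_path: str,
                       blocked_hashes: list[imagehash.ImageHash]) -> bool:
    """Check an upload against hashes of previously removed content."""
    candidate = perceptual_hash(candidate_path)
    return any(candidate - known <= DUPLICATE_THRESHOLD for known in blocked_hashes)
```

In this pattern, the hash of every image removed under a valid request is added to a blocklist, and new uploads are screened against it before publication.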

Legal and Technical Implications for Enterprises Using GenAI

The Act directly affects platforms and services that host, generate, or facilitate the sharing of non-consensual intimate imagery, including AI-generated deepfakes. This may apply to a broad range of technologies and services, including:

  • Social platforms
  • Image generation apps
  • Video synthesis tools
  • Chatbots with multimodal capabilities
  • Third-party GenAI model deployments that output media

Key legal risks now include:

  • Failure to comply with 48-hour takedown obligations
  • Insufficient detection of reuploaded content
  • Inadequate logging or auditing of abuse reports (see the sketch after this list)
  • Liability exposure under FTC enforcement actions
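
Since inadequate logging is itself on the risk list above, every report should leave an audit trail that can be reconstructed for regulators. The sketch below shows one minimal approach, an append-only JSON Lines log; the event names and fields are assumptions for illustration, as no official FTC audit schema has been published.

```python
# Minimal append-only audit log sketch; format and fields are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_report_event(log_path: str, report_id: str, event: str, detail: str = "") -> None:
    """Append one timestamped event per line (e.g., received, reviewed, removed)."""
    entry = {
        "report_id": report_id,
        "event": event,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: recording each step lets the 48-hour timeline be reconstructed later.
# log_report_event("audit.jsonl", "rpt-001", "received")
# log_report_event("audit.jsonl", "rpt-001", "content_removed", "2 duplicates blocked")
```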

Further, the Act carves out an express exception to Section 230 of the Communications Decency Act, which has traditionally shielded platforms from liability for user-generated content. Platforms can now be held liable for hosting, or failing to remove, non-consensual intimate imagery, including deepfakes, even when they did not create it.

From an operational standpoint, many enterprises will find themselves underprepared. Manual review of user-submitted reports, identity verification, and content correlation across surfaces is labor-intensive and error-prone. Worse, emerging AI-generated content is often novel enough to bypass detection by legacy trust & safety systems.

A Word of Caution, and a Call for Readiness

While the legislation does not apply to non-intimate deepfakes, such as those used for misinformation or satire, it sets a clear precedent for broader regulation of GenAI misuse. The legal landscape around AI-generated content is still largely undeveloped, but momentum is building. The Take It Down Act is a meaningful step, and it signals that additional regulatory efforts are likely to follow.

Enforcement standards, including definitions, appeals, and reporting protocols, are still being shaped. Enterprises should expect evolving guidance from the FTC and plan for continued adjustments to compliance workflows.

At ActiveFence, we work closely with leading AI deployers and have direct insight into how AI-generated NCII is evolving. We also see that many organizations are not yet equipped to detect, respond to, or prevent this type of abuse.

In our assessment, this law is not a one-time measure. It marks the beginning of a more assertive approach to content and AI regulation in the United States, especially in areas involving intimate and personal harm. Future laws may broaden the scope of regulated content, increase penalties, or shorten the time allowed for response.

Enterprises deploying GenAI should now view content abuse detection, reporting procedures, and mitigation mechanisms as foundational elements of their systems. These are no longer optional features to be added later. 

Staying compliant, protecting users, and maintaining trust will increasingly depend on early and ongoing alignment with legal and safety expectations.


Need help preparing for the next era of AI and internet safety regulation?

Contact ActiveFence’s GenAI experts today.