The TAKE IT DOWN Act became federal law on May 19, 2025, after unanimous Senate passage and a Rose Garden signing ceremony. It's the first US federal statute to criminalize AI-generated non-consensual intimate imagery (NCII) — deepfake porn, AI-generated CSAM, and any synthetic intimate content published without consent. Platforms have 48 hours to remove flagged content or face federal liability.
The compliance deadline hits May 19, 2026, giving platforms exactly one year to build takedown infrastructure. The law already has teeth: in April 2026, an Ohio man became the first person convicted under the Act for using AI to generate child sexual abuse material. If you're advising platforms, content hosts, or AI companies, this statute just became your most urgent compliance obligation.
What Does the TAKE IT DOWN Act Actually Require?
The statute (Public Law 119-12) creates two parallel obligations: criminal liability for individuals who publish or threaten to publish AI-generated intimate imagery without consent, and platform compliance mandates for any website or app that hosts user-generated content.
For individuals, the law makes it a federal crime to knowingly publish or distribute intimate visual depictions, real or AI-generated, without consent. Penalties run up to 2 years' imprisonment for adult NCII and up to 3 years where the depiction involves a minor, with far steeper penalties still available under preexisting federal CSAM statutes. The "knowingly" standard means prosecutors must prove the defendant knew the content was non-consensual, and the statute specifies that a victim's consent to the creation of an image is not consent to its publication.
For platforms, the requirements are specific. Any service that hosts user content must establish a dedicated reporting mechanism for NCII. Upon receiving a valid takedown request, the platform must remove the content within 48 hours. Platforms must also make reasonable efforts to remove duplicates and prevent re-upload of previously flagged content. The FTC is the enforcement agency: noncompliance is treated as an unfair or deceptive practice, with civil penalties that currently exceed $50,000 per violation (the statutory cap adjusts annually for inflation).
First Conviction Under the TAKE IT DOWN Act (April 2026)
The statute's first criminal enforcement came faster than anyone expected. In April 2026, federal prosecutors in the Southern District of Ohio secured a conviction against a 34-year-old man who used open-source image generation models to create CSAM depicting identifiable minors from his community. The defendant had distributed the images through encrypted messaging apps.
The case established several important precedents. The court accepted expert testimony on AI image provenance — forensic analysts demonstrated that the images were synthetic by identifying characteristic diffusion model artifacts. The defendant's argument that AI-generated images aren't "real" depictions was rejected outright. Judge Sarah Morrison wrote that the statute "plainly covers synthetic imagery" and that "the harm to victims is identical whether the image is captured or computed."
The defendant received 12 years — a sentence that signals federal judges will treat AI-generated CSAM as seriously as traditional material. For defense attorneys: the "it's not real" argument is dead on arrival. For prosecutors: the evidentiary framework for proving AI generation is now established.
Platform Compliance Deadline (May 2026): What Lawyers Need to Know
The May 19, 2026 deadline requires every platform hosting user-generated content to have functioning NCII takedown systems. The FTC released its implementation guidance in January 2026, clarifying the requirements.
Reporting mechanisms must be prominently accessible — buried help pages won't satisfy the statute. The FTC guidance specifies that takedown request forms should require no more than 3 clicks from any page on the platform. Platforms must accept reports from victims, their representatives, and parents/guardians of minors.
The 48-hour clock starts when a platform receives a report that includes sufficient identifying information — the reporter's identity, a description or link to the content, and a statement that the content is non-consensual. Platforms can't extend the clock by requesting additional verification. If a report meets the statutory minimum, the clock is running.
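To make the intake rule concrete, here's a minimal Python sketch of statutory-minimum validation and deadline tracking. The `TakedownReport` structure and its field names are illustrative assumptions, not statutory language.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownReport:
    reporter_identity: str     # victim, authorized representative, or parent/guardian
    content_reference: str     # URL or description sufficient to locate the content
    nonconsent_statement: str  # reporter's statement that the depiction is non-consensual
    received_at: datetime

    def is_valid(self) -> bool:
        # The statutory minimum: identity, a way to locate the content, and a
        # statement of non-consent. Nothing more may be demanded before the
        # clock starts.
        return all(field.strip() for field in (
            self.reporter_identity, self.content_reference, self.nonconsent_statement))

    def removal_deadline(self) -> datetime:
        # The 48-hour window runs from receipt of a valid report, not from
        # any later verification step.
        return self.received_at + REMOVAL_WINDOW

report = TakedownReport(
    reporter_identity="Jane Doe (victim)",
    content_reference="https://example.com/post/123",
    nonconsent_statement="I did not consent to this depiction.",
    received_at=datetime.now(timezone.utc),
)
if report.is_valid():
    print("Remove by:", report.removal_deadline().isoformat())
```

The design constraint lives in `removal_deadline`: the timestamp that matters is receipt, so capture it at intake, not after triage.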
Duplicate prevention is the hardest technical requirement. The FTC guidance acknowledges that perfect detection is impossible but requires platforms to use perceptual hashing (like PhotoDNA or similar technology) to identify re-uploads of previously removed content. Platforms that make no effort at duplicate detection face the highest penalty exposure.
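PhotoDNA itself is licensed and closed, but the duplicate-detection pattern is straightforward to sketch with the open-source `imagehash` library (`pip install ImageHash Pillow`) as a stand-in. The 8-bit Hamming threshold below is an assumption that would need tuning against real re-upload data.

```python
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # max bit difference to treat two images as near-duplicates

removed_hashes: set[imagehash.ImageHash] = set()

def register_removed(path: str) -> None:
    # Hash content at removal time so later re-uploads can be matched.
    removed_hashes.add(imagehash.phash(Image.open(path)))

def is_reupload(path: str) -> bool:
    # Perceptual hashes tolerate re-encoding, resizing, and minor edits;
    # a small Hamming distance flags a likely re-upload for review.
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in removed_hashes)
```

This is also why cryptographic hashes (SHA-256) don't satisfy the requirement: any re-encode changes every bit, while a perceptual hash survives the crops, filters, and format changes typical of re-uploads.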
For firms advising mid-size platforms (forums, dating apps, community sites), this is where the real work is. Meta, Google, and TikTok already have NCII detection infrastructure. Your clients probably don't. Start with perceptual hashing integration and a dedicated reporting workflow.
How the TAKE IT DOWN Act Interacts with Section 230 and State Laws
The statute explicitly carves itself out of Section 230 immunity. Platforms can't use CDA Section 230(c)(1) as a defense against TAKE IT DOWN Act enforcement. This is consistent with the existing carve-outs for federal criminal law and IP claims, but it's the first content-moderation-specific federal mandate to explicitly override Section 230 since FOSTA-SESTA in 2018.
The interaction with state deepfake laws is more complex. At least 42 states had some form of deepfake or NCII statute by the time the federal law passed. The TAKE IT DOWN Act includes a non-preemption clause — state laws that provide equal or greater protection remain enforceable. This means platforms need to comply with both federal and state requirements, and state laws imposing stricter timelines (Texas requires 24-hour takedown for NCII involving minors) still apply.
The practical headache for legal teams: parallel enforcement. A single piece of AI-generated NCII could trigger federal criminal charges, FTC civil enforcement against the platform, and state-level criminal and civil claims. If you're building a compliance program, map every applicable state law alongside the federal requirements.
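In practice the mapping exercise can start as a literal lookup table: record each stricter state rule and always apply the shortest deadline. A minimal sketch using the Texas example above; everything beyond that entry is a placeholder to be filled in during the 50-state survey.

```python
from datetime import timedelta

FEDERAL_DEADLINE = timedelta(hours=48)  # TAKE IT DOWN Act removal window

# (state, involves_minor) -> stricter state deadline, where one exists.
# Only the Texas rule discussed above is filled in; real compliance work
# means completing this table for every applicable state law.
STATE_DEADLINES = {
    ("TX", True): timedelta(hours=24),
}

def effective_deadline(state: str, involves_minor: bool) -> timedelta:
    # Non-preemption means every applicable law must be satisfied,
    # so the binding deadline is the shortest one.
    state_rule = STATE_DEADLINES.get((state, involves_minor))
    return min(FEDERAL_DEADLINE, state_rule) if state_rule else FEDERAL_DEADLINE

print(effective_deadline("TX", involves_minor=True))   # 1 day, 0:00:00
print(effective_deadline("OH", involves_minor=False))  # 2 days, 0:00:00
```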
Advising AI Companies and Platforms on TAKE IT DOWN Act Compliance
Five specific steps for outside counsel. First, conduct a content audit. If your client's platform has ever hosted user-generated imagery, it needs an NCII compliance program. Period. The statute doesn't have a size exception.
Second, implement hash-matching infrastructure before May 2026. The industry standard is Microsoft's PhotoDNA for CSAM (widely used to support the mandatory federal CSAM reporting regime) and emerging tools like StopNCII.org's hash database for adult NCII. Integration costs range from $5,000 to $50,000 depending on platform size and existing infrastructure.
Third, build the reporting pipeline. Intake form, acknowledgment system, 48-hour tracking, removal confirmation, and duplicate prevention. Document everything — the FTC will audit compliance by requesting records of reports received, response times, and removal rates.
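Because the FTC will request exactly these records, the audit trail deserves to exist from day one. Here's a minimal sketch of an append-only log capturing receipt, removal, and response time; the schema and file name are illustrative assumptions.

```python
import csv
from datetime import datetime

AUDIT_LOG = "ncii_takedown_audit.csv"
FIELDS = ["report_id", "received_at", "removed_at", "hours_elapsed", "within_48h"]

def log_removal(report_id: str, received_at: datetime, removed_at: datetime) -> None:
    # One append-only record per completed takedown; response times and
    # removal rates fall out of this file with a one-line aggregation.
    hours = (removed_at - received_at).total_seconds() / 3600
    with open(AUDIT_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # write the header on the first record only
            writer.writeheader()
        writer.writerow({
            "report_id": report_id,
            "received_at": received_at.isoformat(),
            "removed_at": removed_at.isoformat(),
            "hours_elapsed": round(hours, 2),
            "within_48h": hours <= 48,
        })
```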
Fourth, train moderation staff. The statute requires distinguishing between consensual adult content and NCII — a determination that requires context, not just image analysis. Moderators need clear escalation paths and legal review triggers.
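Escalation criteria are easier to train against when written down as rules. The triggers below are illustrative assumptions, not statutory categories; a real program would refine them with counsel.

```python
def escalation_reasons(involves_minor: bool,
                       consent_disputed: bool,
                       provenance_unclear: bool) -> list[str]:
    # Any returned reason routes the report to legal review instead of
    # a frontline moderation decision.
    reasons = []
    if involves_minor:
        reasons.append("possible CSAM: mandatory federal reporting obligations")
    if consent_disputed:
        reasons.append("consent cannot be determined from the image alone")
    if provenance_unclear:
        reasons.append("synthetic vs. authentic origin unresolved")
    return reasons
```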
Fifth, update terms of service and user agreements. The TAKE IT DOWN Act creates notice-based liability — once a platform receives a valid report, the clock starts. Your ToS should make clear that users are prohibited from posting NCII, and that the platform will act on reports within the statutory timeframe.
The Bottom Line: The TAKE IT DOWN Act is the first federal law criminalizing AI deepfakes, with a May 2026 platform compliance deadline and an April 2026 first conviction already in the books — if you're advising any platform hosting user content, compliance buildout should've started yesterday.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
