Deepfake law went from nothing to everywhere in 18 months. The TAKE IT DOWN Act (signed in May 2025) created the first federal criminal penalties for non-consensual intimate deepfakes. Forty-three states now have deepfake-specific statutes. And a wave of civil litigation, from the unauthorized AI-generated images of Taylor Swift to AI-generated political ads, is building the case law framework in real time. For attorneys, deepfakes touch defamation, privacy, IP, election law, employment law, and criminal law simultaneously.

Here's the legal landscape as of April 2026: federal law covers intimate image deepfakes. State laws cover everything from election interference to commercial fraud. And the case law is being written right now. If your practice touches media, employment, entertainment, family law, or criminal defense, deepfake issues are already in your pipeline — or they will be within the year.


The TAKE IT DOWN Act: Federal Criminal Liability

The TAKE IT DOWN Act (formally the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) creates federal criminal penalties for publishing non-consensual intimate images, including AI-generated deepfakes. Key provisions:

Criminal penalties: up to 2 years' imprisonment for knowingly publishing non-consensual intimate deepfakes, with enhanced penalties (up to 3 years) when the victim is a minor.

Platform obligations: covered platforms (social media services and other sites hosting user-generated content) must establish a notice-and-removal process within one year of enactment and must remove reported non-consensual intimate images within 48 hours of receiving a valid removal request. Platforms that fail to comply face FTC enforcement and potential fines.
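For platform-side counsel and compliance teams, the 48-hour window is concrete enough to track programmatically. Below is a minimal sketch in Python; the function names and the example report are illustrative assumptions, not taken from the statute or from any vendor tool.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    # Statutory removal window under the TAKE IT DOWN Act: 48 hours from
    # receipt of a valid removal request (illustrative constant).
    REMOVAL_WINDOW = timedelta(hours=48)

    def removal_deadline(received_at: datetime) -> datetime:
        # Latest time the reported content can be removed and still comply.
        return received_at + REMOVAL_WINDOW

    def is_overdue(received_at: datetime, now: Optional[datetime] = None) -> bool:
        # True once the 48-hour window has lapsed without removal.
        now = now or datetime.now(timezone.utc)
        return now > removal_deadline(received_at)

    # Hypothetical example: a valid request logged at 09:00 UTC on April 1
    # must be actioned by 09:00 UTC on April 3.
    report_time = datetime(2026, 4, 1, 9, 0, tzinfo=timezone.utc)
    print(removal_deadline(report_time))   # 2026-04-03 09:00:00+00:00
    print(is_overdue(report_time))

Nothing in the statute requires this kind of tooling; the point is simply that the compliance clock starts when the platform receives a valid request, not when internal review finishes.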

Victim remedies: the Act's relief for victims runs through the platform removal process and FTC enforcement rather than a new private right of action. Victims seeking damages or injunctive relief rely on existing federal and state civil claims, and those claims can be pursued regardless of whether criminal prosecution occurs.

The law's scope is narrow by design — it targets intimate images specifically, not all deepfakes. Political deepfakes, commercial deepfakes, and non-intimate fake content fall outside the federal statute and remain governed by state law, if any applies.

State Deepfake Laws: The Patchwork

Forty-three states have enacted deepfake-specific legislation as of April 2026, creating a patchwork that practitioners need to navigate by jurisdiction.

Election deepfakes: 28 states prohibit AI-generated content depicting candidates in false scenarios within defined periods before elections. Texas, California, and Michigan have the broadest prohibitions. Penalties range from misdemeanor charges to civil fines up to $100,000. Constitutional challenges are already underway; First Amendment arguments against these restrictions are pending in multiple circuits.

Non-consensual intimate deepfakes: 38 states have laws addressing this category, often predating the federal TAKE IT DOWN Act. State laws typically provide broader civil remedies and longer statutes of limitation than the federal statute. California and Illinois have the strongest victim protection frameworks.

Commercial deepfakes: 15 states address the use of AI-generated likenesses for commercial purposes without consent. These laws extend traditional right-of-publicity protections to AI-generated content. Tennessee's ELVIS Act (Ensuring Likeness Voice and Image Security Act) is the model statute, explicitly protecting vocal and visual likeness from AI replication.

Employment deepfakes: 8 states address the use of deepfakes in employment contexts: fake reference videos, manipulated interview recordings, and fabricated credentials. Illinois's Artificial Intelligence Video Interview Act requires employers to notify applicants and obtain consent when AI is used to analyze video interviews, with deepfake provisions added in 2025.

Civil Case Law: Where the Precedent Is Building

The case law is developing across multiple causes of action:

Defamation: courts are treating AI-generated deepfakes depicting real people in defamatory scenarios as actionable defamation. The publication element is satisfied by online distribution. The "of and concerning" element is met when the deepfake is recognizable as the plaintiff. The key question — whether deepfake creators can claim the content is "obviously fake" as a defense — is being litigated in multiple jurisdictions. Early rulings suggest that realistic deepfakes don't receive the same satirical protection as obvious parody.

Right of publicity: Tennessee, California, and New York courts have applied right-of-publicity claims to AI-generated likenesses. The emerging rule: using someone's likeness to create AI-generated content for commercial purposes without consent violates right of publicity regardless of whether the result is photorealistic. This extends to voice cloning — AI-replicated voices are treated as protectable aspects of personal identity.

Emotional distress: victims of non-consensual intimate deepfakes are bringing IIED claims in addition to statutory claims. Courts have consistently found that creating and distributing intimate deepfakes constitutes extreme and outrageous conduct meeting the IIED threshold. Damage awards in early cases range from $50,000 to $500,000.

Copyright: the intersection of deepfakes and copyright is murky. Using copyrighted photographs to generate deepfakes may constitute derivative works. Using a person's appearance (not copyrighted) to generate AI content doesn't trigger copyright. The training-data question — whether AI models trained on copyrighted images infringe by generating deepfakes — connects to the broader AI copyright litigation landscape.

Criminal Defense: Deepfakes as an Evidence Challenge

Deepfake technology creates a novel evidence authentication challenge that criminal defense attorneys are already exploiting. The "deepfake defense" — arguing that video or audio evidence could be AI-generated — has been raised in at least 47 criminal cases as of early 2026.

Authentication requirements: courts are tightening evidence authentication rules in response. The DOJ issued updated guidelines in 2025 requiring prosecutors to provide chain-of-custody documentation, metadata analysis, and forensic authentication for video and audio evidence challenged as potential deepfakes.
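In practice, the first technical steps in that authentication workflow are mundane: fix the file's identity with a cryptographic hash and pull its container metadata to look for inconsistencies (unexpected encoders, missing creation timestamps, signs of re-encoding). A minimal Python sketch follows; it assumes ffprobe (part of FFmpeg) is installed, and the exhibit filename is hypothetical. Passing these checks does not prove a recording is genuine; it simply documents what was examined and in what state.

    import hashlib
    import json
    import subprocess
    from pathlib import Path

    def sha256_digest(path: Path) -> str:
        # Hash the file so later copies can be verified against the original exhibit.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def container_metadata(path: Path) -> dict:
        # Pull codec, duration, creation-time, and encoder tags with ffprobe;
        # inconsistencies here can flag re-encoding or editing worth a closer look.
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", str(path)],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    evidence = Path("exhibit_12.mp4")  # hypothetical exhibit filename
    print(sha256_digest(evidence))
    print(container_metadata(evidence)["format"].get("tags", {}))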

Expert testimony: deepfake detection experts are becoming a standard feature in cases where video evidence is contested. Forensic analysis tools (Microsoft Video Authenticator, Intel FakeCatcher, academic tools) can identify manipulation markers, but detection accuracy varies. Courts are applying Daubert analysis to deepfake detection testimony, and the standard for admissible detection methodology is still being established.

The defense attorney's dilemma: the deepfake defense raises legitimate questions about evidence reliability, but overuse risks undermining genuine evidence challenges. Courts are beginning to require attorneys raising the deepfake defense to proffer some basis for the challenge beyond theoretical possibility.

Practice Implications: What Every Attorney Needs to Know

For family law attorneys: deepfakes are appearing in custody disputes, divorce proceedings, and protective order hearings. Fabricated evidence showing a spouse in compromising situations, manufactured communications, and AI-manipulated audio recordings are all emerging issues. Counsel should proactively address deepfake authenticity in evidence motions.

For employment attorneys: AI-generated deepfakes in the workplace create hostile work environment claims (deepfakes of coworkers), discrimination claims (AI-generated content reflecting bias), and fraud claims (fabricated credentials or references). Advise employer clients to implement policies addressing AI-generated content and deepfake reporting procedures.

For IP attorneys: right-of-publicity claims for AI-generated likenesses are the fastest-growing area of deepfake litigation. Advise commercial clients that using any person's likeness in AI-generated content requires consent, even if the output isn't photorealistic. Voice cloning for commercial purposes requires the same consent framework.

For criminal defense attorneys: understand deepfake detection methodology well enough to challenge it effectively. The technology is new enough that forensic standards aren't settled — there's room for legitimate Daubert challenges to detection testimony. But raise the deepfake defense selectively, with specific technical basis, to maintain credibility with the court.

The Bottom Line: The deepfake legal landscape in 2026 spans federal criminal law (TAKE IT DOWN Act), 43 state statutes covering election interference, intimate images, commercial use, and employment, plus rapidly developing case law in defamation, right of publicity, IIED, and evidence authentication. Every practice area is affected. Attorneys who understand the current framework will capture the wave of deepfake-related matters already building across civil and criminal dockets.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.