When AI hallucinates in a court filing, the consequences escalate from monetary sanctions ($5,000 to $109,700 in documented cases) to case dismissal to bar disciplinary referrals. The attorney who signs the filing bears full responsibility — "the AI made it up" has never been accepted as a defense.

Since the landmark Mata v. Avianca decision in 2023, courts have sanctioned attorneys in at least a dozen published opinions for AI-related filing errors. The pattern is consistent: an attorney uses AI to generate or research a filing, doesn't verify the output, and submits fabricated citations or incorrect legal claims to the court. The penalties get worse every year as courts lose patience.


The Escalating Consequences

AI hallucination sanctions follow a clear severity ladder.

Level 1 — Monetary sanctions: the most common outcome. Fines range from $5,000 (for first-time, minor hallucinations caught early) to $109,700 (the highest documented AI-related sanction as of 2025, involving multiple fabricated citations and failure to correct after being notified). The typical range is $5,000-$15,000 for a first offense with fabricated citations.

Level 2 — Filing consequences: courts have struck filings, denied motions, and required refiling with verified citations. In some cases, the tainted filing prejudiced the case outcome — a motion to dismiss based on fabricated case law doesn't just get denied, it damages credibility for every future filing.

Level 3 — Professional consequences: bar referrals for investigation, mandatory CLE requirements (courts ordering attorneys to complete AI-specific training), published opinions that permanently associate the attorney's name with AI misconduct, and potential malpractice liability to the client.

Real Cases: What Actually Happened

Mata v. Avianca (2023): the case that started it all. Attorney Steven Schwartz used ChatGPT to research a personal injury case and submitted a brief containing six fabricated case citations. When opposing counsel couldn't find the cases, the court ordered verification. Schwartz asked ChatGPT to confirm the cases existed — it confirmed its own fabricated cases as real. Judge Castel imposed $5,000 in sanctions, and the case became the most-cited AI cautionary tale in legal history.

Park v. Kim (2024): an attorney submitted an AI-generated brief with three fabricated case citations in a federal court that had an AI disclosure standing order. The attorney failed both to verify the citations and to comply with the disclosure requirement. Sanctions: $10,000 plus mandatory AI CLE.

Ex parte Allen (2025): in one of the highest AI-related sanctions to date, an attorney submitted multiple filings with fabricated authorities over several months and, even after being warned, continued using unverified AI output. Sanctions exceeded $100,000 and included a bar referral.

Why "The AI Made It Up" Isn't a Defense

Courts have uniformly rejected attempts to blame AI for filing errors, and the legal reasoning is straightforward.

Rule 11 (Federal): by signing a filing, the attorney certifies that "the factual contentions have evidentiary support" and "the legal contentions are warranted by existing law." There's no AI exception.

Duty of Candor (Rule 3.3): attorneys must not make false statements of law to a tribunal. If a cited case doesn't exist, that's a false statement regardless of whether a human or machine authored it.

Competence (Rule 1.1): using a tool you don't understand — including not knowing it can fabricate citations — is itself a competence violation.

The courts' position is clear: the attorney is the gatekeeper. AI is a tool. The attorney chose to use it, chose not to verify its output, and chose to submit the result to the court. Every step involved attorney judgment (or lack of it). The tool didn't file the brief. The attorney did.

How to Prevent AI Hallucination Sanctions

Five practices eliminate the risk.

1. Verify every citation: check every case name, citation, and quoted holding against the primary source — Westlaw, LexisNexis, or Google Scholar. No exceptions.

2. Verify legal standards: confirm that the standard of review, elements of claims, and procedural requirements stated in the AI draft are accurate for your jurisdiction. AI frequently applies the right standard from the wrong jurisdiction.

3. Use citation-connected AI when possible: CoCounsel searches Westlaw directly, reducing (but not eliminating) hallucination risk. General AI (Claude, ChatGPT) should only be used for drafting, with separate citation verification.

4. Implement a verification checklist: before filing any AI-assisted document, the attorney completes a checklist confirming all citations verified, all legal standards confirmed, all factual statements checked, and compliance with any AI disclosure orders.

5. Never ask AI to verify itself: Mata v. Avianca's attorney asked ChatGPT to confirm its own fabricated cases existed. The AI confirmed them. Never use the same tool to verify its own output.
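For firms that want to make the checklist in practice 4 enforceable rather than aspirational, it can be encoded in software so a filing can't proceed until every item is affirmatively confirmed. The sketch below is a minimal, hypothetical illustration in Python — the field names and structure are assumptions, not a standard or vendor tool:

```python
from dataclasses import dataclass

@dataclass
class FilingChecklist:
    """Hypothetical pre-filing checklist mirroring the five practices above.
    Every item defaults to False; the filing is blocked until all are True."""
    citations_verified: bool = False        # practice 1: every cite checked against a primary source
    standards_confirmed: bool = False       # practice 2: standards/elements correct for the jurisdiction
    facts_checked: bool = False             # practice 4: factual statements verified
    disclosure_complied: bool = False       # practice 4: any AI disclosure standing order satisfied
    independent_verification: bool = False  # practice 5: verified with a tool OTHER than the drafting AI

    def ready_to_file(self) -> bool:
        # True only when every checklist item has been confirmed.
        return all(vars(self).values())

    def missing_items(self) -> list[str]:
        # Names of the items still unconfirmed, for the signing attorney to resolve.
        return [name for name, done in vars(self).items() if not done]


if __name__ == "__main__":
    checklist = FilingChecklist(citations_verified=True, standards_confirmed=True)
    if not checklist.ready_to_file():
        print("Do not file. Outstanding items:", checklist.missing_items())
```

The point of the design is that the default state is "not ready": nothing is presumed verified, which matches the courts' view that the signing attorney, not the tool, bears the burden of confirmation.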

The Malpractice Exposure Managing Partners Don't See

Sanctions are the visible consequence. The malpractice exposure is the one that should keep managing partners up at night. When an AI hallucination affects case outcome — a motion denied because the supporting authority was fabricated, a deadline missed because AI misstated a procedural rule, a contract provision based on non-existent case law — the client has a malpractice claim. The damages aren't limited to sanctions. They include the value of the underlying case. A $2 million personal injury case lost because the attorney relied on AI-generated research without verification isn't a $5,000 sanctions problem. It's a $2 million malpractice claim.

Malpractice insurers are watching. Several carriers have begun adding AI-specific questions to applications and renewal forms. Firms without AI policies may face higher premiums or coverage exclusions for AI-related claims. The policy, verification protocols, and training aren't just ethics compliance — they're risk management.

The Bottom Line: AI hallucinations in court filings result in sanctions ($5,000-$109,700), case consequences (struck filings, denied motions), and professional consequences (bar referrals, mandatory CLE). "The AI made it up" is never a defense. Prevention is straightforward: verify every citation, confirm every legal standard, and never ask AI to verify its own output. The real risk isn't sanctions — it's malpractice exposure on the underlying case.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.