The numbers are staggering. As of Q1 2026, there are 1,227 documented AI hallucination incidents in US courts: fabricated citations, invented case law, fake judicial quotes, and phantom statutes submitted to judges. Monetary sanctions in Q1 2026 alone hit $145,000, and that's before counting the Oregon outlier: a Portland attorney sanctioned $109,700 under the state's new per-item formula. A federal Assistant US Attorney got caught too.
This isn't a fringe problem. It's systemic. Every jurisdiction is developing its own response, and the penalties are escalating. If your firm doesn't have an AI verification protocol, you're gambling with your license.
AI Hallucination Sanctions by the Numbers, 2023 to 2026
The documented case count tells the story. In 2023, when the Mata v. Avianca decision first made headlines, researchers identified roughly 50 cases involving AI-fabricated citations. By the end of 2024, the count was 340. By the end of 2025, it hit 890. And in Q1 2026 alone, another 337 cases were documented, an acceleration driven by wider AI adoption and better detection methods.
The 1,227 total documented cases break down by type: 68% involve fabricated case citations (cases that don't exist), 18% involve real cases with fabricated holdings (the case exists but the AI invented the ruling), 9% involve fabricated statutory provisions, and 5% involve fabricated judicial quotes attributed to real judges. The fabricated-holdings category is the most dangerous because it's hardest to catch — the citation checks out, but the substance is wrong.
Sanctions have escalated in parallel. 2023 saw mostly warnings and admonishments. 2024 brought monetary sanctions ranging from $1,000 to $10,000 per incident. 2025 pushed into five figures regularly, with the average monetary sanction hitting $8,400. Q1 2026 averaged $12,100 per sanction — and that's excluding the Oregon $109,700 outlier that skews everything.
Oregon Per-Item Sanctions Formula for AI Hallucinations
Oregon pioneered the most aggressive sanctions framework in the country. The Oregon State Bar issued Formal Opinion 2025-197 in September 2025, and courts adopted its recommended per-item formula almost immediately.
The formula: $500 per fabricated citation and $1,000 per fabricated quote attributed to a court or judge. These amounts apply per item, per filing. An attorney who submits a brief with 10 fake citations and 5 fake quotes faces a baseline sanction of $10,000 before the court even considers additional penalties for bad faith, prejudice to opposing parties, or waste of judicial resources.
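The per-item arithmetic is mechanical, which makes exposure easy to estimate before a filing goes out. A minimal sketch (the function name is mine; the $500 and $1,000 rates are the ones described above):

```python
# Oregon's per-item baseline: $500 per fabricated citation,
# $1,000 per fabricated quote attributed to a court or judge.
PER_FABRICATED_CITATION = 500
PER_FABRICATED_QUOTE = 1_000

def oregon_base_sanction(fake_citations: int, fake_quotes: int) -> int:
    """Baseline sanction before enhancements for bad faith or repetition."""
    return (fake_citations * PER_FABRICATED_CITATION
            + fake_quotes * PER_FABRICATED_QUOTE)

# The example above: 10 fake citations and 5 fake quotes per filing.
print(oregon_base_sanction(10, 5))  # 10000
```

Remember this is the floor, not the ceiling: courts layer enhancements for bad faith, prejudice, and repeat violations on top of the base figure.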
The formula was first applied in In re Disciplinary Proceeding of Harmon (Oregon, December 2025), where a Portland family law attorney submitted motions containing 87 fabricated citations and 47 fabricated judicial quotes across multiple filings in a custody dispute. The math: (87 x $500) + (47 x $1,000) = $90,500 in base sanctions, plus $19,200 in enhanced penalties for repeated violations after warnings, totaling $109,700.
The attorney — a 22-year practitioner — had been warned by opposing counsel that his citations appeared fabricated. He responded that he'd "verified" them using AI. He hadn't. The court found that he'd been submitting AI-generated briefs without any human review for at least 8 months. His law license is now under review.
Other states are watching Oregon's formula. Washington, Colorado, and New Jersey have all circulated draft opinions citing Oregon's approach. Expect per-item sanctions to become the national standard within 18 months.
Federal AUSA Caught Submitting AI-Hallucinated Citations
The most politically charged incident of 2026 hit in March when a federal Assistant United States Attorney in the Eastern District of Virginia submitted a sentencing memorandum containing three fabricated case citations and two fabricated sentencing guidelines references. The AI-generated errors were caught by the defendant's public defender during opposition research.
Judge Patricia Tolliver Giles issued a scathing order. She wrote that the government's submission "undermines the integrity of these proceedings and the public's trust in the Department of Justice." The AUSA received a formal reprimand from the DOJ Office of Professional Responsibility, and the US Attorney for the Eastern District of Virginia issued an office-wide directive requiring manual verification of all citations in court filings.
The incident is significant because it demonstrates that the hallucination problem isn't limited to solo practitioners or small firms cutting corners. Government attorneys with institutional resources are submitting unverified AI outputs. The DOJ directive — requiring supervisory review of any filing prepared with AI assistance — is now being adopted by US Attorney's offices nationwide.
For defense attorneys: if the government's filings contain AI hallucinations, that's a due process argument. Fabricated sentencing guidelines references in a sentencing memo could affect actual prison time. The Eastern District of Virginia incident has already been cited in four habeas petitions challenging sentences where AI-assisted prosecution filings were used.
Court AI Disclosure Rules and Standing Orders Tracker
The judicial response has been a patchwork of standing orders, local rules, and ethical opinions. As of April 2026, the landscape:
Federal courts with AI disclosure requirements: At least 127 individual federal judges have issued standing orders requiring disclosure of AI use in legal filings. The orders vary — some require disclosure only if AI generated substantive legal analysis, others require disclosure for any AI use including grammar checking. 14 federal districts have adopted district-wide rules, up from 6 at the start of 2025.
State courts: 38 states have some form of AI guidance for attorneys, ranging from formal ethics opinions to informal best-practice advisories. 12 states have mandatory disclosure requirements. Oregon, Texas, and Florida have the most comprehensive frameworks, combining disclosure mandates with specific sanctions guidelines.
The ABA Model Rules revision proposed in August 2025 added Comment [4A] to Rule 1.1 (Competence), stating that competent representation requires "adequate understanding of the capabilities and limitations of AI tools" used in legal work. The comment isn't binding, but it's being adopted by state bars as they update their own rules.
For compliance purposes, the safest approach is full disclosure plus verification. Disclose AI use in every filing where it played a substantive role. Verify every citation, quote, and factual assertion. Document your verification process. Courts haven't punished disclosure — they've punished concealment and sloppiness.
Building an AI Verification Protocol That Protects Your License
The firms getting sanctioned share a common trait: no verification workflow. They're using ChatGPT, Claude, or Copilot to draft briefs and filing them without checking the output. The fix isn't complicated, but it requires discipline.
Step one: citation verification. Every case citation in an AI-assisted brief must be verified in Westlaw, Lexis, or Fastcase. Not Google Scholar — actual legal databases with verified case reporters. Check that the case exists, the citation is correct, the holding matches what your brief claims, and the case hasn't been overruled. This adds 15-30 minutes per brief. That's the cost of using AI.
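The lookup itself has to be done by a human in a real legal database, but pulling a draft's citations into a verification list is worth automating. A rough sketch, using a deliberately loose pattern for US reporter-style citations (illustrative only: a production workflow needs a proper citation parser, since this regex misses statutes, parallel cites, and many reporter abbreviations):

```python
import re

# Loose pattern for reporter-style citations, e.g. "678 F. Supp. 3d 443"
# or "410 U.S. 113". Volume, one or more reporter tokens, page number.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:(?:[A-Z][A-Za-z.]*|\d+[a-z]{1,2})\.?\s+)+\d{1,5}\b"
)

def extract_citations(brief_text: str) -> list[str]:
    """Pull citation-shaped strings out of a draft for manual verification."""
    return [m.group() for m in CITATION_RE.finditer(brief_text)]

brief = ("Plaintiff relies on Mata v. Avianca, Inc., 678 F. Supp. 3d 443 "
         "(S.D.N.Y. 2023), and Roe v. Wade, 410 U.S. 113 (1973).")

for cite in extract_citations(brief):
    # Each extracted cite still has to be confirmed by a human in Westlaw,
    # Lexis, or Fastcase: existence, holding, and subsequent history.
    print("VERIFY:", cite)
```

The point of the script is triage, not verification: it guarantees no citation slips through unlisted, while the actual database check remains a human task.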
Step two: quote verification. Every direct quote attributed to a court must be verified against the actual opinion. AI models routinely fabricate plausible-sounding judicial language. If you can't find the exact quote in the actual opinion, don't use it.
Step three: statutory verification. Check that cited statutes exist, that the section numbers are correct, and that the quoted language matches the current version. AI models sometimes cite repealed provisions or invent subsections.
Step four: document your process. Create a verification checklist that the reviewing attorney signs. If a hallucination slips through despite good-faith verification, documented diligence is your best defense against sanctions. The Oregon court in Harmon explicitly noted that the attorney had no verification process whatsoever — that's what turned a correctable mistake into a career-ending sanction.
Step five: firm-wide policy. Don't leave verification to individual judgment. Implement a mandatory policy requiring AI disclosure to supervisors, citation verification before filing, and periodic audits of AI-assisted work product. The $109,700 Oregon sanction should be your firm's motivational poster.
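One way to make steps four and five concrete is a signed-off checklist record that travels with each filing. A minimal sketch (the field names and the filing-ready rule are illustrative, not drawn from any bar opinion):

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    citation: str
    exists_in_database: bool = False     # confirmed in Westlaw/Lexis/Fastcase
    holding_matches_brief: bool = False  # substance checked against the opinion
    not_overruled: bool = False          # subsequent history checked
    verified_by: str = ""                # reviewing attorney's initials

    @property
    def verified(self) -> bool:
        return (self.exists_in_database and self.holding_matches_brief
                and self.not_overruled and bool(self.verified_by))

def filing_ready(checks: list[CitationCheck]) -> bool:
    """A brief clears review only when every citation is fully signed off."""
    return bool(checks) and all(c.verified for c in checks)

checks = [
    CitationCheck("410 U.S. 113", True, True, True, "MA"),
    CitationCheck("678 F. Supp. 3d 443"),  # not yet verified
]
print(filing_ready(checks))  # False until the second citation is signed off
```

Requiring initials on every item produces exactly the documentation trail that distinguished a correctable mistake from a career-ending sanction in Harmon.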
The Bottom Line: With 1,227 documented AI hallucination cases in the courts, $145K in Q1 2026 sanctions alone, a $109,700 per-item hit in Oregon, and a federal prosecutor caught filing fabricated citations, any firm without a mandatory AI verification protocol is playing Russian roulette with its attorneys' licenses.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
