An AI hallucination occurs when an artificial intelligence tool generates information that sounds authoritative but is factually wrong: invented cases, fabricated statutes, or misquoted holdings that don't exist. In legal practice, this isn't an abstract technical problem. It's a malpractice risk with documented consequences.
Courts have identified more than 1,227 cases involving AI-generated hallucinations in legal filings as of early 2026, and that count has grown rapidly since Mata v. Avianca set off the alarm in 2023. The core issue isn't that AI sometimes gets things wrong — it's that AI presents fabricated information with the same confidence as accurate information, and lawyers have been filing it without verification.
Why Legal AI Hallucinates Differently Than General AI
General AI tools like ChatGPT hallucinate by inventing entirely fictional information — fake case names, nonexistent statutes, fabricated judges. These are relatively easy to catch. You search the citation, it doesn't exist, problem solved.
Legal AI tools (Westlaw AI, Lexis+ AI, CoCounsel) hallucinate differently because they use retrieval-augmented generation. They have access to real legal databases, so they rarely invent fake cases from scratch. Instead, they produce subtler distortions: citing a real case but misstating its holding, pulling accurate statutory language but applying it to the wrong jurisdiction, or weaving together fragments from multiple opinions into an analysis that sounds right but misrepresents each source. These errors are significantly harder to detect because every individual piece looks legitimate.
The Mata v. Avianca Origin Story and What Followed
In June 2023, attorneys Steven Schwartz and Peter LoDuca submitted a brief in Mata v. Avianca containing six fabricated case citations generated by ChatGPT. When opposing counsel couldn't find the cases, Judge Kevin Castel ordered the attorneys to explain. They admitted using ChatGPT and claimed they didn't realize it could fabricate cases.
The sanction was $5,000 and a published opinion that became the most-cited AI ethics case in legal history. But the real impact was the cascade it triggered. Within 18 months, judges across the country began issuing standing orders requiring AI disclosure. Bar associations launched ethics investigations. And the documented cases of AI hallucinations in court filings multiplied from a handful to more than 1,227 — not because hallucinations increased, but because courts started looking for them.
The 1,227 Documented Cases: What the Data Shows
The growing database of AI hallucination cases reveals clear patterns. Solo practitioners and small firms account for a disproportionate share — they're more likely to use general AI tools without enterprise-grade guardrails. Brief writing and motion practice are the highest-risk activities, where citation-heavy work creates more opportunities for hallucinated authorities to slip through.
The consequences range from sanctions and fines to suspended licenses and malpractice suits. Courts have shown decreasing tolerance over time. Early cases in 2023 drew warnings and modest fines. By 2025, judges were imposing harsher penalties on the reasoning that the risks are now well-known and lawyers have no excuse for failing to verify AI output. The trajectory is clear: the window for claiming ignorance has closed.
What Causes AI Hallucinations (Technical Explanation for Lawyers)
Large language models work by predicting the most probable next word in a sequence based on patterns learned during training. They don't "know" anything — they generate statistically likely text. When the model encounters a prompt where its training data is thin or ambiguous, it fills gaps with plausible-sounding content rather than saying "I don't know."
In legal contexts, this creates specific failure modes. Outdated training data means the model doesn't know about recent decisions or statutory amendments. Jurisdictional confusion happens because legal rules vary by state and the model blends them. Citation fabrication occurs because the model has learned the pattern of legal citations (volume number, reporter, page) and can generate new ones that follow the format perfectly but don't correspond to real cases. The output looks exactly like a real citation because the model learned the format, not the underlying reality.
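To make that last point concrete, here is a minimal Python sketch showing why a fabricated citation can pass any format check: the shape of a citation is learnable, but its existence isn't. The citation pattern is deliberately simplified and the example citations are illustrative, not drawn from any particular filing.

```python
import re

# Simplified pattern for a federal reporter citation (volume, reporter, page).
# Real Bluebook formats are far more varied; this is illustrative only.
CITATION_PATTERN = re.compile(
    r"^\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\.4th|F\. Supp\. [23]d)\s+\d{1,4}$"
)

def looks_like_a_citation(cite: str) -> bool:
    """Format check only: returns True for anything shaped like a citation."""
    return bool(CITATION_PATTERN.match(cite))

# A genuine citation and an invented one are indistinguishable on format alone.
genuine = "509 U.S. 579"    # Daubert v. Merrell Dow Pharmaceuticals (1993)
invented = "998 F.3d 1247"  # made up here; whether it exists is exactly
                            # the question a format check cannot answer

for cite in (genuine, invented):
    print(cite, "-> passes format check:", looks_like_a_citation(cite))

# The only reliable test is existence and content: pull the citation in a real
# database (Westlaw, Lexis, CourtListener) and read the opinion itself.
```

Both strings pass the format check, which is all the model ever learned. Nothing short of looking the citation up and reading the opinion distinguishes the real one from the invented one.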
The Verification Obligation: Your Ethical Duty
ABA Model Rule 1.1 requires competent representation, which includes the duty to verify the accuracy of any authority cited in a filing — regardless of how it was generated. Rule 3.3 requires candor toward the tribunal, prohibiting the submission of false statements of law. These rules existed long before AI, but they map directly onto the hallucination problem.
ABA Formal Opinion 512 (2024) made this explicit: lawyers have an ethical obligation to understand the limitations of AI tools they use and to verify AI-generated output before relying on it. "The AI did it" is not a defense. It never was. The lawyer who signs the filing bears full responsibility for every citation, every holding, and every legal argument — whether a human associate wrote it, a contract researcher drafted it, or an AI generated it.
The Bottom Line: AI hallucinations in legal practice aren't a theoretical risk — they're a documented epidemic with over 1,227 cases and counting. The technology that causes them isn't going away, and neither is the ethical obligation to catch them. Every firm using AI tools needs a verification workflow that treats AI output the way you'd treat a first draft from an unsupervised summer associate: useful starting point, but never ready to file without review.
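As one hedged illustration of what the mechanical first step of such a workflow could look like, here is a short Python sketch that extracts citation-shaped strings from an AI-generated draft and turns them into a checklist a human reviewer still has to close out. The regex, checklist fields, and sample draft are assumptions for illustration, not any firm's or vendor's actual tooling.

```python
import re

# Illustrative pattern for citation-like strings; real filings use many more
# reporter abbreviations than are listed here.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\.4th|F\. Supp\. [23]d)\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Pull every citation-shaped string out of a draft, deduplicated."""
    return sorted(set(CITATION_RE.findall(draft_text)))

def verification_checklist(draft_text: str) -> list[dict]:
    """One row per citation; every row starts unverified and needs a human."""
    return [
        {"citation": cite, "exists_in_database": None, "holding_confirmed": None}
        for cite in extract_citations(draft_text)
    ]

if __name__ == "__main__":
    # Sample AI-generated passage; the second citation is invented for this example.
    draft = (
        "Under Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, expert "
        "testimony must be reliable. See also Smith v. Jones, 847 F.3d 1032."
    )
    for row in verification_checklist(draft):
        # Each row is closed out by a person: pull the case in Westlaw, Lexis,
        # or CourtListener, confirm it exists, and read the actual holding.
        print(row)
```

The point of the sketch is what it does not do: it never marks a citation as verified. That judgment stays with the lawyer who signs the filing.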
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
