Every lawyer knows about Mata v. Avianca by now. But treating it as a punchline instead of a case study is a mistake. The AI failures in legal practice reveal systemic problems that better prompts alone won't fix -- they expose gaps in verification workflows, supervision structures, and basic professional competence.

These cases aren't ancient history. They're happening now. And the sanctions are getting larger, not smaller, as judges lose patience with lawyers who, by 2026, should know better.


Mata v. Avianca: The Case That Started Everything

In June 2023, Judge Castel in the Southern District of New York sanctioned attorneys Steven Schwartz and Peter LoDuca $5,000 for submitting a brief citing six cases that didn't exist -- all hallucinated by ChatGPT. The financial sanction was modest. The reputational damage was catastrophic.

What actually went wrong: Schwartz used ChatGPT for legal research without understanding that large language models generate plausible-sounding text, not verified facts. He then asked ChatGPT to confirm its own citations -- and it did, because that's what generative AI does. The failure wasn't using AI. It was using AI without any independent verification. No Westlaw check. No Lexis search. Nothing.

The Portland Sanctions: $109,000 and Counting

In late 2024, a Portland attorney faced $109,000 in sanctions for filing AI-generated briefs with fabricated citations in a personal injury case. The amount reflected not just the fake citations but the cascading damage: opposing counsel spent dozens of hours researching non-existent cases, the court's time was wasted, and the client's case was severely prejudiced. This case proved that courts would impose sanctions proportional to the harm caused, not just the ethical violation. It also showed that 'I didn't know AI could hallucinate' is no longer a viable excuse. By late 2024, every lawyer had been warned.

Whiting v. Athens: The Supervising Attorney Problem

This case added a critical layer: liability for supervising attorneys who don't check AI-generated work product. A junior associate used AI to draft a motion. The supervising partner signed and filed it without adequate review. When fake citations surfaced, both the associate and the partner faced consequences. The court's reasoning was clear -- Rule 11 requires the signing attorney to conduct a reasonable inquiry into the legal and factual bases of a filing. Delegating to AI doesn't reduce the duty of reasonable inquiry; it increases it. Managing partners, take note: if your associates are using AI and you're signing their filings, you own the output.

The Pattern: Why Lawyers Keep Getting Caught

Every AI sanctions case shares the same three failures. First, no verification workflow. The lawyer treated AI output as authoritative rather than as a draft requiring independent confirmation. Second, no understanding of the tool. The lawyer didn't know that LLMs hallucinate or thought it wouldn't happen to them. Third, no firm policy. There was no required checklist, no mandatory verification step, no supervision protocol.

These aren't technology failures. They're process failures. The AI worked exactly as designed -- it generated plausible text. The lawyer's job was to verify that plausible text against reality, and they skipped that step.

How to Prevent AI Failures in Your Practice

Step 1: Never trust AI citations. Every single citation must be verified in Westlaw, Lexis, or Fastcase. No exceptions.

Step 2: Implement a firm-wide AI policy that requires disclosure of AI use, mandatory verification steps, and supervisor review of all AI-assisted filings.

Step 3: Train every attorney and paralegal on what AI can and cannot do. Five minutes explaining hallucination prevents five-figure sanctions.

Step 4: Use the right tool for the right task. AI is excellent for drafting, brainstorming, and analysis. It's terrible for citation generation. Use Westlaw and Lexis for research. Use AI for writing.

Step 5: Document your verification process so you can demonstrate reasonable inquiry if challenged.
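Step 5's documentation can be as lightweight as a structured log kept alongside each filing. Here is a minimal sketch in Python of what such a record might look like -- the CitationCheck class, its field names, and the helper functions are illustrative assumptions, not any real research platform's API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CitationCheck:
    """One citation from an AI-assisted draft and its verification record."""
    citation: str      # e.g. "Varghese v. China Southern Airlines" (a Mata fake)
    source: str        # where it was checked: "Westlaw", "Lexis", or "Fastcase"
    verified_by: str   # initials of the human who ran the check
    checked_on: date   # when the check was run
    exists: bool       # did the case actually appear in the database?

def unverified(checks: list[CitationCheck]) -> list[str]:
    """Citations that failed verification -- these must never reach a filing."""
    return [c.citation for c in checks if not c.exists]

def filing_is_clear(checks: list[CitationCheck]) -> bool:
    """A filing is clear only if it has citations and every one checked out."""
    return bool(checks) and not unverified(checks)
```

The point of a log like this isn't the code; it's that each citation gets a named human, a named database, and a date attached to it, which is exactly the paper trail a reasonable-inquiry defense needs.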

The Bottom Line: AI failures in legal practice are 100% preventable. Every sanctioned lawyer skipped the same step: independent verification. Build verification into your workflow as a non-negotiable requirement, and AI becomes a powerful tool instead of a professional liability.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.