Federal Rule of Civil Procedure 11 requires every attorney who signs a filing to certify that the legal contentions are "warranted by existing law" and that factual contentions have "evidentiary support." AI hallucinations violate both requirements. When you submit a brief citing cases that don't exist or quoting holdings that were never issued, you've certified something false to the court. Rule 11 doesn't care that a machine generated the false content.
The sanctions are already landing. A Portland attorney paid $109,700. Oregon adopted a formula: $500 per fabricated citation, $1,000 per fabricated quotation. These aren't hypothetical risks — they're established penalties with a growing body of case law behind them. The courts aren't sympathetic, and they shouldn't be.
What Rule 11 Actually Requires When You Use AI
Rule 11(b) creates four certification requirements every time an attorney signs a pleading, motion, or other paper. All four apply to AI-assisted work:
11(b)(1): The filing isn't presented for an improper purpose. Using AI to mass-generate frivolous motions could trigger this.
11(b)(2): Legal contentions are warranted by existing law or a nonfrivolous argument for modification. Every case citation must refer to an actual case that actually says what you claim it says. AI hallucinations fail this on both counts.
11(b)(3): Factual contentions have evidentiary support. If AI generates fabricated facts and you include them in a filing, you've violated this provision.
11(b)(4): Denials of factual contentions are warranted on the evidence. AI-generated denials that contradict the record violate this.
The standard is "reasonable inquiry." A lawyer must conduct an inquiry reasonable under the circumstances before signing. Submitting AI output without verification fails that standard by definition. The technology's known hallucination rate (17-33% for legal AI platforms) makes unverified reliance objectively unreasonable.
The Oregon Formula: Quantified Sanctions for AI Hallucinations
Oregon's approach to AI hallucination sanctions has created the clearest penalty framework in the country. The numbers are specific and escalating:
- $500 per fabricated case citation: If the case doesn't exist in any reporter, that's $500 per instance. A brief with six fake cases costs $3,000 on citations alone.
- $1,000 per fabricated quotation: If a quote doesn't appear in the opinion it's attributed to (or the cited case is itself fabricated), that's $1,000 per instance. Fabricated quotes are penalized more severely because they suggest a deeper failure to verify.
These amounts are per-violation floors, not caps. Courts retain discretion to impose higher sanctions based on the circumstances — the harm to opposing parties, the attorney's experience level, whether the conduct was a first offense, and whether the attorney took remedial action.
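To make the exposure concrete, here's a minimal sketch of the formula's arithmetic. Only the $500 and $1,000 per-violation floors come from the framework described above; the function name and the example counts are illustrative assumptions, and the sketch deliberately computes a floor, not the total a court might actually impose.

```python
# Illustrative sketch of the Oregon per-violation floors described above.
# Only the $500/$1,000 figures come from the framework; the function name
# and example counts are hypothetical.

FAKE_CITATION_FLOOR = 500     # per fabricated case citation
FAKE_QUOTATION_FLOOR = 1_000  # per fabricated quotation

def minimum_sanctions_exposure(fake_citations: int, fake_quotations: int) -> int:
    """Return the floor (not cap) sanction under the per-violation formula."""
    return (fake_citations * FAKE_CITATION_FLOOR
            + fake_quotations * FAKE_QUOTATION_FLOOR)

# A brief with six fabricated cases and two fabricated quotations:
# 6 * $500 + 2 * $1,000 = $5,000 before fee-shifting or punitive add-ons.
print(minimum_sanctions_exposure(6, 2))  # 5000
```

Note what the floor excludes: as the Portland case shows, opposing counsel's fees and punitive components can push the total far past the formula's arithmetic.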
The Oregon formula is influential beyond Oregon. Federal courts in other districts are citing it as a reasonable sanctions framework. Expect these numbers to become the national baseline as more AI hallucination cases arise.
The Portland Attorney: $109,700 and a Career Lesson
The most significant AI sanctions case to date involved a Portland attorney who used AI to draft a brief and submitted it without verifying the citations. The court found multiple fabricated cases, fabricated holdings, and fabricated quotations. The total sanctions: $109,700.
The breakdown tells the story. The sanctions included not just the per-citation penalties but also opposing counsel's fees for identifying the fabrications, the court's costs for investigating the matter, and a punitive component reflecting the attorney's failure to take responsibility promptly. The attorney initially attempted to blame the AI tool, which the court explicitly rejected as a defense.
Key takeaway from the opinion: the court stated that using AI to draft legal documents is not itself problematic. The violation was submitting AI output without conducting the reasonable inquiry Rule 11 requires. The court analogized it to submitting research from a first-year law student without checking their work — the supervisor bears responsibility for the final product, regardless of who or what produced the draft.
How Courts Are Responding: The Trend Lines
Since Mata v. Avianca in 2023, the judicial response has moved through three phases. Phase one was surprise — judges were shocked attorneys would submit unverified AI output. Phase two was education — courts issued standing orders requiring AI disclosure. Phase three is enforcement — courts are now sanctioning without hesitation.
Over 30 federal courts have adopted AI-specific standing orders or local rules. Common requirements include: disclosure of AI use in legal research or drafting, certification that all citations have been verified by a human attorney, and agreement that the attorney bears full responsibility for AI-generated content.
The trend is toward harsher sanctions, not leniency. Early cases resulted in modest penalties and strong language. Recent cases show larger financial penalties and, in some instances, referrals to state bar disciplinary authorities. The window of judicial sympathy for attorneys who didn't know about AI hallucination risks closed in 2024. By 2026, every practicing attorney is charged with knowledge of the risk.
How to Protect Yourself: The Verification Protocol
Rule 11 protection requires a documented verification process that demonstrates reasonable inquiry. Here's the minimum:
Before filing anything containing AI-assisted research or drafting:
1. Verify every citation exists. Search each case in Westlaw or Lexis. Confirm the case name, citation, court, and year. Don't use AI to verify AI — use traditional databases.
2. Read every cited case. Not the headnote, not the AI summary — the actual opinion. Confirm the holding matches what you've cited it for.
3. Verify every quotation word-for-word. Copy the quote and search for it in the actual opinion. AI frequently generates plausible-sounding quotes that appear nowhere in the cited case.
4. Shepardize or KeyCite every case. Confirm it hasn't been overruled, reversed, or distinguished in ways that undermine your argument.
5. Document your verification. Keep a log showing which citations you verified, when, and how; a minimal log sketch follows this list. This log is your Rule 11 defense if a citation is challenged.
6. Get a second review on critical filings. For dispositive motions and appellate briefs, have another attorney independently verify the AI-assisted research.
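The log in step 5 doesn't require special software. Here's a minimal sketch of one way to keep it; the field names, CSV format, and function name are assumptions for illustration, not a court-mandated form. Any durable, timestamped record showing who verified what, where, and when serves the same purpose.

```python
# Minimal verification-log sketch for step 5. The fields and CSV format
# are illustrative assumptions; the point is a durable, dated record.
import csv
from datetime import date

LOG_FIELDS = [
    "citation", "case_name",
    "verified_in",     # database used, e.g. Westlaw or Lexis
    "opinion_read",    # full opinion read, not just the headnote
    "quotes_checked",  # every quotation confirmed word-for-word
    "keycite_clean",   # no negative treatment found
    "verified_by", "date",
]

def log_verification(path: str, entry: dict) -> None:
    """Append one verified citation to the log, stamping today's date."""
    entry = {**entry, "date": date.today().isoformat()}
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(entry)

# Placeholder values only; record the real citation and case name as verified.
log_verification("rule11_verification_log.csv", {
    "citation": "[reporter citation]",
    "case_name": "[case name]",
    "verified_in": "Westlaw",
    "opinion_read": "yes",
    "quotes_checked": "yes",
    "keycite_clean": "yes",
    "verified_by": "[initials]",
})
```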
This takes time. That's the point. The time investment is the reasonable inquiry Rule 11 demands. Firms that treat AI verification as optional are accumulating sanctions risk with every filing.
The Bottom Line: Rule 11 sanctions for AI hallucinations are no longer a cautionary tale — they're an established enforcement pattern. The Oregon formula gives courts a clear framework. The Portland case gives them a precedent. Every filing you sign certifies that you've done reasonable inquiry. Unverified AI output is unreasonable by definition. Verify everything.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
