In June 2023, Mata v. Avianca made international headlines when Judge P. Kevin Castel sanctioned attorneys Steven Schwartz and Peter LoDuca for submitting a brief containing six fabricated case citations generated by ChatGPT. The $5,000 fine was modest. The reputational damage was catastrophic. But Mata was just the opening act.

Since then, courts across the country have moved from shock to policy. Morgan v. V2X established disclosure requirements for AI-assisted litigation in late 2024. Park v. Kim imposed sanctions for undisclosed AI use in briefing. By early 2026, more than 30 federal judges had adopted standing orders addressing AI-generated work product, and the judicial stance had shifted from cautionary to enforcement-oriented.

Mata v. Avianca: The Case That Started Everything

Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023), remains the foundational AI work-product case. Attorney Steven Schwartz used ChatGPT to research case law for an opposition brief, and the tool generated six citations to cases that didn't exist — including fabricated docket numbers, judicial opinions, and procedural histories. When opposing counsel couldn't locate the cases, the court ordered Schwartz to produce copies. He asked ChatGPT to verify them. It confirmed they were real. They weren't.

Judge Castel imposed a joint $5,000 sanction on Schwartz, his co-counsel Peter LoDuca, and their firm, and required them to notify their client and each judge falsely identified as the author of a fabricated opinion. The opinion made clear that the issue wasn't using AI — it was submitting AI output to the court without verification. The duty to verify legal citations is non-delegable. An attorney who relies on AI output without independent research violates Rule 11 of the Federal Rules of Civil Procedure, which requires an inquiry reasonable under the circumstances that the legal contentions presented are warranted by existing law.

Mata became the catalyst for every AI disclosure order that followed. Courts read the opinion and immediately recognized the systemic risk: AI tools generate fluent, plausible-sounding legal content that may be entirely fabricated, and attorneys under time pressure will submit it without checking.

Morgan v. V2X and the Disclosure Framework

Morgan v. V2X, Inc., No. 1:23-cv-1021 (S.D. Ind. 2024), shifted the judicial focus from sanctions to prevention. Its protective order went beyond fabricated citations, establishing a comprehensive framework for AI use in litigation.

The order requires parties to disclose any AI tools used in drafting filings, conducting research, or reviewing discovery materials. It mandates that attorneys certify they have verified all AI-assisted work product, including legal citations, factual assertions, and case analysis. And it requires that AI tools used in discovery meet specific data protection standards.

This framework goes beyond the "don't fabricate citations" lesson of Mata. It treats AI as a tool that requires governance at every stage of litigation — from research through production. Courts in the Northern District of Illinois, the District of Colorado, and the Eastern District of Pennsylvania have adopted similar frameworks in their standing orders.

The practical impact is significant. Attorneys who use AI for legitimate purposes — drafting, research assistance, document review — now face disclosure obligations in a growing number of jurisdictions. Failure to disclose AI use is itself becoming sanctionable, separate from any defect in the underlying work product.

Park v. Kim and the Expanding Sanctions Landscape

Park v. Kim, No. 22-cv-2057 (E.D.N.Y. 2024), extended the Mata framework to undisclosed AI use. The court found that an attorney had used AI to draft portions of a brief without disclosing it, even though the brief didn't contain fabricated citations. The sanctions were based on the nondisclosure itself — a violation of the court's standing order requiring AI use transparency.

This was a pivotal development. Park v. Kim established that the sanctionable conduct isn't limited to submitting fabricated content. Simply using AI without disclosure, in a jurisdiction that requires it, is independently sanctionable. The opinion explicitly rejected the argument that AI is no different from using Westlaw or word processing software, noting that generative AI tools create content rather than merely retrieving it.

Other notable cases have reinforced this trajectory. Kruse v. Karv Automotive Group resulted in sanctions for AI-generated content in a consumer protection action. Ex parte Allen raised the question of AI-generated inventorship claims. And multiple unreported orders have imposed discovery sanctions for undisclosed AI use in document review.

The pattern is clear: courts are treating AI-generated work product as a category requiring specific oversight, disclosure, and verification — and the penalties for noncompliance are escalating.

What This Means for Your Firm

The judicial consensus has crystallized around three requirements for AI-generated work product: disclosure, verification, and accountability.

Every attorney in your firm should know which jurisdictions require AI disclosure before filing. Disclosure requirements vary significantly from district to district, and noncompliance is sanctionable regardless of whether the work product contains errors. Build AI disclosure into your filing checklist the same way you check formatting and page limits.

Verification is non-negotiable. Every legal citation, factual assertion, and case analysis generated or assisted by AI must be independently verified through primary sources — Westlaw, Lexis, PACER. The Rule 11 duty of reasonable inquiry doesn't shrink because you used a tool that sounds confident. Implement a firm-wide AI governance policy that requires documented verification of all AI-assisted work product before submission.

Accountability flows uphill. Under Model Rule 5.1, supervising attorneys are responsible for work product submitted under their signature, regardless of who — or what — drafted it. If an associate uses AI to draft a brief and a partner signs it without verifying the citations, the partner bears the sanctions risk. Train every attorney on these obligations now, before the standing order in your jurisdiction makes it mandatory.

The Bottom Line: Courts aren't debating whether to regulate AI work product. They're debating how hard to sanction firms that don't self-regulate first.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.