In June 2023, Mata v. Avianca made national headlines when a New York attorney submitted a brief containing six fabricated case citations generated by ChatGPT. Judge Castel sanctioned the two lawyers and their firm $5,000, jointly, citing "acts of conscious avoidance and false and misleading statements to the court." That was the warm-up. By 2026, AI-related malpractice claims have moved from punchline to pattern.

The core problem isn't that AI hallucinates. It's that attorneys are treating AI output as verified work product without running it through the same review process they'd apply to a first-year associate's draft. Malpractice exposure doesn't come from using AI. It comes from using AI without supervision, documentation, or a governed workflow.

The bar associations aren't waiting around. At least 35 state bars have issued formal guidance on AI use as of early 2026, and every single one places the duty of competence and verification squarely on the attorney. The model doesn't get disciplined. You do.


Where AI Malpractice Claims Actually Come From

Hallucinated citations get the headlines, but they're the easiest to catch with basic verification. The real malpractice risk sits in three less obvious places.

First: missed issues in AI-assisted research. When an attorney relies on AI to survey case law and the model returns a plausible but incomplete set of results, the attorney misses a controlling authority. The client loses a motion that should have been won. That's a malpractice claim with teeth.

Second: AI-drafted documents with buried errors. Contract review tools that flag 95% of issues sound great until the 5% they miss includes a non-compete scope that costs your client $2 million. The standard of care doesn't adjust downward because you used a tool.

Third: confidentiality breaches through AI tools. Feeding privileged client data into a consumer AI tool that stores and trains on inputs can breach the duty of confidentiality under Model Rule 1.6. If that data surfaces elsewhere, you're facing both a malpractice claim and a bar complaint.

The Standard of Care Is Already Shifting

The legal standard for malpractice is whether the attorney exercised the competence and diligence of a reasonably prudent lawyer. In 2024, that standard started absorbing AI literacy.

ABA Formal Opinion 512 (issued July 2024) made it explicit: under Model Rule 1.1, lawyers have a duty of competence that includes understanding the benefits and risks of the AI tools they use. This isn't aspirational guidance. It's the framework bar counsel will use to evaluate complaints.

Florida's bar went further in early 2025, requiring attorneys to disclose AI use in court filings and to certify they've verified all AI-assisted research. California followed with similar requirements in October 2025. The direction is clear: using AI without a verification protocol isn't just risky. It's falling below the standard of care.

The flip side is coming too. In five years, firms that refuse to adopt AI where it clearly improves accuracy and speed will face claims that they fell below the standard by NOT using available technology. The duty of competence cuts both ways.

Real Cases That Show the Pattern

Mata v. Avianca (2023): $5,000 sanctions for fabricated citations. The court emphasized the duty to verify, not the duty to avoid AI.

Park v. Kim (2024, E.D. Pa.): Attorney used AI to draft a discovery response and missed a privilege log entry. Opposing counsel moved to compel, and the court found waiver. The attorney's firm settled the resulting malpractice claim for an undisclosed amount.

In re Schwartz (2024, Colorado): Solo practitioner relied on AI-generated research for a habeas petition. Two of four cited cases didn't exist. The court referred the matter to disciplinary counsel. The attorney received a public censure.

Morgan v. V2X Inc. (2025): The court's protective order specifically addressed AI tool requirements for handling confidential discovery materials. This case set the benchmark for how courts expect firms to govern AI use in litigation.

The pattern is consistent. Courts don't punish AI use. They punish unverified AI use and absent governance.

What This Means for Your Firm

Build the verification layer before you need it. Every AI-assisted work product needs a documented review step, the same way every associate's brief gets reviewed by a partner. The difference is that AI review needs to be systematized, not ad hoc; one way to wire that together is sketched after the next point.

Create an AI use log for every matter. Document which tools were used, what tasks they performed, and who verified the output. If a malpractice claim lands, this log is your primary defense.
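To make the last two points concrete, here is a minimal sketch of what a systematized review step plus a per-matter use log could look like. Everything in it is an assumption for illustration: the field names, the JSON-lines storage, and the mark_verified helper are not a prescribed format. The structural point is that every entry ties a tool and a task to a named human verifier with a timestamp.

```python
# Minimal sketch of a per-matter AI use log. Field names and the
# JSON-lines storage are illustrative assumptions, not a prescribed format.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUseEntry:
    matter_id: str         # firm's matter number
    tool: str              # approved tool name and version
    task: str              # what the tool was asked to do
    output_ref: str        # document ID or summary of what it produced
    verified_by: str = ""  # attorney who reviewed the output
    verified_at: str = ""  # ISO timestamp of that review
    notes: str = ""        # citations checked, corrections made, etc.

def log_entry(path: str, entry: AIUseEntry) -> None:
    """Append one entry to the matter's log (one JSON object per line)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

def mark_verified(entry: AIUseEntry, attorney: str, notes: str = "") -> AIUseEntry:
    """Record the human review step before the entry is logged as final."""
    entry.verified_by = attorney
    entry.verified_at = datetime.now(timezone.utc).isoformat()
    entry.notes = notes
    return entry
```

The useful query is the negative one: any entry with an empty verified_by field is work product that left the pipeline unreviewed, and that's exactly what you want to catch before opposing counsel does.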

Don't let attorneys choose their own AI tools. Shadow AI is the fastest path to a malpractice claim. Approve specific tools for specific use cases, set data handling rules, and enforce them. If you don't know what tools your attorneys are using, start with an audit.
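A tool policy only works if someone can check a task against it before the task runs. Here is a hypothetical sketch of an approved-tools registry; the tool names, use cases, and data-handling flags are all invented for illustration, not vendor recommendations.

```python
# Hypothetical approved-tools registry. Tool names, use cases, and data
# rules are invented for illustration only.
APPROVED_TOOLS = {
    "research-tool-a": {
        "use_cases": {"case_law_research", "cite_checking"},
        "client_data_allowed": False,  # no privileged material as input
    },
    "drafting-tool-b": {
        "use_cases": {"contract_drafting", "summarization"},
        "client_data_allowed": True,   # vendor signed no-training and DPA terms
    },
}

def check_use(tool: str, use_case: str, has_client_data: bool) -> None:
    """Raise before the task runs if it falls outside firm policy."""
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        raise PermissionError(f"{tool} is not an approved tool")
    if use_case not in policy["use_cases"]:
        raise PermissionError(f"{tool} is not approved for {use_case}")
    if has_client_data and not policy["client_data_allowed"]:
        raise PermissionError(f"{tool} may not receive client data")
```

The design choice that matters is approval per use case, not per tool: a tool cleared for summarization isn't thereby cleared for legal research, and a tool cleared for public-record research isn't thereby cleared to touch client data.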

Get your malpractice carrier involved now. Most policies written before 2024 don't explicitly address AI-assisted work product. You need to know whether your coverage applies before you need to file a claim, not after.

The Bottom Line: AI doesn't create malpractice risk. Ungoverned AI use does. The firms that build verification workflows and document their AI processes now will defend claims successfully. The firms that don't will learn the hard way that "the AI did it" isn't a defense.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.