The ABA said it plainly in Formal Opinion 512: using AI without understanding its limitations is 'almost certainly' a violation of the duty of competence. At least 8 attorneys have been publicly sanctioned for AI-related misconduct, malpractice insurers are adding AI-specific questionnaires to renewal applications, and the first wave of AI-related malpractice claims is already working through the system.
The malpractice risk from AI isn't about the technology being dangerous -- it's about lawyers using it carelessly. Every AI risk has a documented mitigation. The attorneys getting sanctioned aren't the ones using AI thoughtfully. They're the ones treating AI output like verified legal authority.
ABA Opinion 512: The Malpractice Standard
ABA Formal Opinion 512 (July 2024) didn't just offer guidance -- it defined the standard of care for AI use in legal practice. The key language:
'A lawyer who uses GAI in connection with a representation without understanding its capabilities and limitations almost certainly violates the duty of competence under Rule 1.1.'
This means:
- Using AI for legal research without knowing it hallucinates: likely malpractice
- Submitting AI-generated citations without verification: likely malpractice
- Using consumer AI tools for privileged client information: likely a confidentiality violation
- Failing to supervise associates' AI use: likely a supervisory violation
The opinion applies all existing Model Rules to AI use -- competence (1.1), confidentiality (1.6), supervision (5.1/5.3), communication (1.4), and candor (3.3). It didn't create new obligations; it clarified that existing obligations cover AI.
What this means for malpractice claims: A plaintiff suing for AI-related malpractice now has the ABA's own language supporting the argument that failure to verify AI output falls below the standard of care. This makes AI malpractice claims significantly easier to prosecute than they were before July 2024.
The Sanctions Record: What's Actually Happened
The documented cases of AI-related sanctions and discipline through early 2026:
Federal court sanctions:
- *Mata v. Avianca* (S.D.N.Y. 2023): The case that started it all. Attorneys Steven Schwartz and Peter LoDuca submitted a brief containing 6 fabricated case citations generated by ChatGPT. The court imposed a $5,000 sanction, jointly and severally, on the attorneys and their firm.
- *Park v. Kim* (2d Cir. 2024): Attorney cited a nonexistent AI-generated case without verification; the court referred her to its grievance panel.
- Multiple additional sanctions in the Southern District of Texas, Central District of California, and Northern District of Illinois for similar failures to verify AI citations.
State bar discipline:
- Colorado: Attorney received public censure for submitting AI-fabricated citations in a state court brief.
- New York: Disciplinary proceedings initiated against an attorney who used consumer AI for client communications without data protections.
The common thread: Every sanctioned case involves the same pattern -- attorney used AI, didn't verify the output, submitted it to the court, and got caught. Not a single sanction has been issued for using AI with proper verification. The risk isn't AI use. The risk is lazy AI use.
Emerging claims: Malpractice insurers report a growing number of pre-claims (notifications of potential malpractice) related to AI use, primarily around: (1) incorrect legal research leading to missed deadlines or waived arguments, (2) privilege waiver from consumer AI use, and (3) billing disputes over AI-assisted work.
The Insurance Gap: What Your Policy May Not Cover
Here's where managing partners need to pay attention: most malpractice policies weren't written with AI in mind, and the coverage gap is real.
What's happening in the malpractice insurance market:
AI-specific questionnaires: Major legal malpractice carriers (ALAS, CNA, Swiss Re) are adding AI-specific questions to renewal applications. They want to know: does your firm have an AI policy? Which tools are approved? How do you verify AI output? Your answers affect your premium and potentially your coverage.
Potential exclusions: Some carriers are exploring AI-specific exclusions or sub-limits. If your firm uses AI without a governance framework, you may find that AI-related claims are excluded or subject to a reduced coverage limit.
Premium impact: Firms with documented AI governance policies are getting standard or favorable rates. Firms without policies are seeing 5-15% premium increases at renewal, with insurers citing 'technology risk' as the factor.
The action item: Contact your malpractice carrier today and ask three questions: (1) Does our current policy cover AI-related malpractice claims? (2) What documentation do you need to see regarding our AI governance? (3) Are any AI-specific exclusions being considered for our next renewal? Get the answers in writing.
The Documentation Defense: Protecting Yourself
If an AI-related malpractice claim hits your desk, your best defense is documentation showing you used AI responsibly. Here's what to document:
Firm-level documentation:
- Written AI governance policy (approved tools, data handling, verification requirements)
- Training records (who completed training, when, assessment results)
- Vendor due diligence files (DPAs, SOC 2 reports, security assessments)
- Regular policy review records (quarterly/annual updates)
Matter-level documentation:
- Which AI tools were used on the matter
- What verification steps were taken (citation checking, Shepardizing, completeness review)
- Who performed the verification (a licensed attorney, not a paralegal, for final sign-off)
- Client disclosure of AI use (engagement letter language or separate notification)
Incident documentation:
- If AI produced an error that was caught before filing: document the catch and the correction
- If AI produced an error that made it into a filing: document the discovery, the correction, and any client or court notification
The standard you're building toward: 'We used AI as a tool, with full knowledge of its limitations, with enterprise-grade security, with a verification workflow, and with documented quality control. Every citation was independently verified. The error, if any, occurred despite reasonable precautions.' This is the defense that wins.
The Risk Framework: Assessing Your Firm's Exposure
Not all AI use carries the same malpractice risk. Assess your exposure by use case:
Low risk (with proper verification):
- AI for initial research followed by full verification
- AI for document summarization reviewed by an attorney
- AI for drafting assistance where the attorney substantially rewrites and edits
- AI for administrative tasks (billing narratives, scheduling)
Moderate risk:
- AI for contract review where the attorney reviews AI flags but doesn't re-read the entire contract
- AI for brief drafting where the attorney edits but doesn't independently research
- AI-drafted client communications that are only lightly reviewed
High risk:
- AI for legal research without citation verification
- AI for court filings without independent legal analysis
- AI-generated legal advice communicated to clients without attorney review
- Consumer AI tools used with client information
The mitigation for every risk level is the same: verification by a competent attorney, documentation of the verification, and a firm policy that requires both. The firms getting sanctioned aren't in the 'moderate risk' category -- they're in the 'high risk' category without mitigation. Don't be there.
The Bottom Line: ABA Opinion 512 makes clear that using AI without understanding its limitations 'almost certainly' violates the duty of competence. At least 8 attorneys have been sanctioned, and malpractice insurers are adding AI-specific questionnaires and considering exclusions. Protect yourself with a written AI policy, verification workflows, documentation at the firm and matter level, and written confirmation from your malpractice carrier that AI-related claims are covered.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
