Here's the uncomfortable truth about agentic AI in legal practice: no court has ruled on who's liable when an AI agent makes a mistake. Not a single decision. The legal framework is completely unresolved. And firms are deploying these tools on client work right now — 100,000 lawyers on Harvey alone, processing 700,000 tasks daily.
WilmerHale published an analysis in March 2026 warning that agentic AI creates "new hidden routes to privilege waiver." The ABA's Opinion 512 requires lawyers to supervise AI output. But nobody's defined what "supervision" means when an AI agent autonomously executes a 15-step workflow while you're in a client meeting. This is the gap that will generate the first wave of malpractice claims.
Who's liable when an AI agent files the wrong motion
The liability question breaks into three layers, and none of them have clear answers:
The lawyer. Under existing ethics rules, the lawyer who submits work product is responsible for its accuracy. Period. It doesn't matter if an AI agent drafted it, reviewed it, or filed it. ABA Model Rule 5.3 extends supervisory duties to "nonlawyer assistance," and every ethics opinion so far treats AI tools as falling under that umbrella. The lawyer signs it, the lawyer owns it.
The firm. Managing partners carry supervisory obligations under Model Rules 5.1 and 5.3. If a firm deploys AI agents without adequate oversight protocols, the firm's leadership is exposed. Not just the associate who hit "submit" — the partners who approved the technology without governance guardrails.
The vendor. This is where it gets interesting. Harvey, Thomson Reuters, LexisNexis, and DISCO all include limitation-of-liability clauses in their terms of service. If an AI agent produces a hallucinated case citation (it's happened), the vendor's position is: you should have checked. The product liability theories that might apply — defective product, negligent design, failure to warn — haven't been tested against AI agents in court.
The most likely scenario: the lawyer and firm absorb 100% of the liability in the near term. Vendor liability theories will take years to develop through litigation.
ABA Opinion 512 and the supervision problem
ABA Formal Opinion 512 (issued 2024) established that lawyers have a duty to supervise AI tools under the existing ethics framework. It didn't create new rules — it applied Model Rules 1.1 (competence), 1.6 (confidentiality), 5.1 (supervisory responsibility), and 5.3 (nonlawyer assistants) to AI usage.
The problem is that Opinion 512 was written for chatbot-era AI. You ask a question, you get an answer, you check the answer. That's manageable. Agentic AI breaks this model entirely.
When Harvey's Agent Builder executes a multi-step workflow — ingesting a data room, analyzing 500 contracts, cross-referencing risk factors, and producing a due diligence report — the "supervision" required isn't checking the final output. It's understanding every intermediate decision the agent made. Which documents did it prioritize? What risk thresholds did it apply? Why did it flag these provisions and not those?
Most firms can't answer those questions. They review the output, not the reasoning. That's like supervising a junior associate by only reading the final memo without ever discussing their analysis. It's technically supervision. It's practically inadequate. And when something goes wrong, the bar complaint will ask: "What steps did you take to ensure the AI's reasoning was sound?" Having no answer to that question is the malpractice risk.
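What would answering those questions even look like? Here's a minimal sketch, in Python, of recording each intermediate decision for attorney review. The class and method names are hypothetical (this is not Harvey's API); the point is that every step's inputs, decision, and stated rationale get captured, so the review covers the reasoning, not just the final memo.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StepRecord:
    """One intermediate decision in an agent workflow, preserved for attorney review."""
    step_name: str
    inputs_summary: str   # what the agent was given, e.g. document IDs or a data-room slice
    decision: str         # what it did, e.g. "flagged change-of-control clause as high risk"
    rationale: str        # the agent's stated reasoning for that decision
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class SupervisedWorkflow:
    """Wraps a multi-step agent run so supervision can cover reasoning, not just output."""

    def __init__(self, matter_id: str):
        self.matter_id = matter_id
        self.steps: list[StepRecord] = []

    def record(self, step_name: str, inputs_summary: str, decision: str, rationale: str) -> None:
        self.steps.append(StepRecord(step_name, inputs_summary, decision, rationale))

    def review_packet(self) -> str:
        """Render the full decision chain for the supervising attorney."""
        lines = [f"Matter {self.matter_id}: {len(self.steps)} recorded agent decisions"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"{i}. [{s.step_name}] {s.decision} -- rationale: {s.rationale}")
        return "\n".join(lines)
```

A firm that keeps something like this can answer the bar complaint's question with the review packet itself.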
WilmerHale's privilege waiver warning
WilmerHale's March 2026 analysis identified a risk most firms haven't considered: agentic AI creates new routes to inadvertent privilege waiver.
Here's how it works. An AI agent reviewing documents in discovery encounters privileged attorney-client communications. The agent — following its instructions to identify relevant documents — includes privileged material in a production set, a summary report, or worse, an analysis shared with opposing counsel's AI agent in an automated workflow.
In a human-reviewed process, a trained attorney recognizes the privilege markers and pulls the document. An AI agent might not — especially if the privilege indicators are subtle (in-house counsel copied on a business email, legal advice embedded in a memo labeled "business strategy").
The multi-agent propagation risk makes this worse. When one agent's output feeds into another agent's input (common in multi-step workflows), privileged information can spread through the system before any human sees it. One agent extracts key terms from documents. A second agent uses those terms to search for related documents. A third agent drafts a summary. If Step 1 extracted privileged content, Steps 2 and 3 amplified the exposure.
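One structural mitigation is a checkpoint between stages that quarantines suspect content instead of forwarding it. Here's a minimal sketch, with the caveat that the regex patterns below are illustrative placeholders; subtle privilege markers are exactly what simple patterns miss, so a real guardrail would pair counsel-maintained attorney and domain lists with a trained classifier and default to human review.

```python
import re

# Illustrative markers only; real guardrails need counsel-maintained
# attorney name/domain lists and a classifier, not regexes alone.
PRIVILEGE_PATTERNS = [
    re.compile(r"attorney[- ]client privilege", re.IGNORECASE),
    re.compile(r"\bprivileged and confidential\b", re.IGNORECASE),
    re.compile(r"\blegal advice\b", re.IGNORECASE),
]

class PrivilegeGate:
    """Checkpoint between agent stages: suspect items are quarantined for human
    review rather than passed downstream, so one agent's extraction of privileged
    content cannot propagate into the next agent's input."""

    def __init__(self) -> None:
        self.quarantine: list[str] = []

    def filter(self, items: list[str]) -> list[str]:
        cleared = []
        for text in items:
            if any(p.search(text) for p in PRIVILEGE_PATTERNS):
                self.quarantine.append(text)  # held for attorney review, never forwarded
            else:
                cleared.append(text)
        return cleared

# Between stages: only cleared content reaches the next agent.
gate = PrivilegeGate()
stage_one_output = [
    "Term sheet summary: 90-day exclusivity window",
    "Privileged and confidential: GC memo on litigation exposure",
]
stage_two_input = gate.filter(stage_one_output)  # quarantines the GC memo
```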
Under Federal Rule of Evidence 502(b), the standard courts have applied in decisions like *In re Grand Jury Subpoena*, inadvertent disclosure waives privilege unless the producing party took "reasonable steps" to prevent it and acted promptly to rectify the error. Whether deploying an AI agent without privilege-specific guardrails constitutes "reasonable steps" is an open question that will be litigated.
What to document now before the first ruling
Smart firms are building their defense files now — before a court forces the issue. Here's what to document:
AI governance policy. Written policy covering which AI tools are approved, for which tasks, with what supervision requirements. Clio's data shows 53% of firms have no AI policy at all. Don't be in that majority when the first malpractice claim hits.
Supervision protocols. Define who reviews AI agent output, what level of review is required for different risk levels, and how review is documented. A contract summary might need spot-checking. A court filing needs line-by-line review. Write it down (see the review-tier sketch after this list).
Audit trails. Every AI agent interaction should be logged: the input, the agent's reasoning steps, the output, and the human review decision (see the logging sketch after this list). If you can't reconstruct what the agent did six months from now, you can't defend your supervision.
Client disclosure. Some jurisdictions now require disclosing AI usage to clients. Even where not required, proactive disclosure builds trust and creates a consent record. "We used AI-assisted tools for initial document review, with attorney supervision of all output" is a defensible position.
Vendor agreements. Review your AI vendor contracts for liability allocation, data handling, and indemnification clauses. Most vendors disclaim liability for output accuracy. Know where you stand before something goes wrong.
Incident response plan. When (not if) an AI agent makes a material error, your firm needs a defined response: who investigates, how errors are corrected, when clients are notified, and how the agent is retrained or restricted.
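The review-tier mapping mentioned above doesn't need to be elaborate to count as documentation. A hypothetical sketch, with illustrative task categories rather than any standard taxonomy:

```python
# Hypothetical review tiers; each firm would define its own categories.
REVIEW_TIERS = {
    "internal_research_memo": "spot_check",    # sample-check citations and key claims
    "contract_summary":       "spot_check",
    "client_advice_letter":   "full_review",   # attorney reads every substantive point
    "discovery_production":   "full_review",
    "court_filing":           "line_by_line",  # verify every citation and assertion
}

def required_review(task_type: str) -> str:
    """Unknown task types default to the strictest tier rather than the loosest."""
    return REVIEW_TIERS.get(task_type, "line_by_line")
```

Defaulting unknown task types to the strictest tier is the defensible choice; nobody will fault you for over-reviewing.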
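And here's what a single audit-trail entry could look like. The schema is an assumption, not any vendor's export format; the hash simply makes each record tamper-evident, so it can be produced with confidence when someone asks what the agent did six months ago.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_agent_interaction(matter_id: str, agent_name: str, prompt: str,
                          reasoning_steps: list[str], output: str,
                          reviewer: str, review_decision: str) -> dict:
    """Build one audit record covering input, reasoning, output, and human review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "agent": agent_name,
        "prompt": prompt,
        "reasoning_steps": reasoning_steps,
        "output": output,
        "reviewed_by": reviewer,
        "review_decision": review_decision,  # e.g. "approved", "corrected", "rejected"
    }
    # Hash the canonical JSON of the record so later tampering is detectable.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return entry
```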
The malpractice insurance question
Here's what your malpractice carrier is thinking about right now: does your policy cover AI agent errors?
Most legal malpractice policies cover errors and omissions in professional services. An AI agent that produces a hallucinated case citation, misses a critical contract provision, or inadvertently waives privilege arguably falls under "errors in professional services" — if the lawyer used the AI as part of delivering those services.
But carriers are watching this space closely. Expect new exclusions, riders, or premium adjustments as AI agent adoption grows. Some carriers are already asking about AI usage in renewal applications. Others are developing AI-specific endorsements.
The coverage gap that worries firms most: systemic errors. If an AI agent applies the wrong analysis framework across 200 matters before anyone catches it, the aggregate exposure could exceed policy limits. That's not a single-matter malpractice claim — it's a firm-wide crisis.
What to do now: talk to your carrier. Disclose your AI usage proactively. Ask whether AI-related errors are covered under your current policy. Get it in writing. And budget for premium increases — carriers will price this risk, and firms with strong governance programs will pay less than firms without them.
The Bottom Line: No court has ruled on AI agent liability yet, but the first case is coming — firms that document their governance, supervision protocols, and audit trails now will be the ones that survive it.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
