In March 2026, a midsize litigation firm in Chicago deployed an AI agent to handle first-pass document review on a products liability case. The agent autonomously pulled documents from the firm's DMS, summarized them, and flagged relevance. Three weeks later, opposing counsel argued that the AI's access to privileged attorney work product, without human gatekeeping, constituted a waiver. The court agreed to hear the motion.
This isn't hypothetical anymore. AI agents don't just answer prompts. They act. They retrieve, summarize, route, and sometimes share information across systems without a human reviewing each step. That autonomy is exactly what makes them useful for legal workflows. It's also what makes them a privilege time bomb.
The core problem is simple: attorney-client privilege requires that communications stay confidential and within the scope of legal advice. When an AI agent processes, stores, or transmits privileged material through third-party infrastructure without adequate controls, the confidentiality element breaks down. And once privilege is waived, it doesn't come back.
How AI Agents Differ from AI Assistants
A standard AI tool like ChatGPT or Claude responds to a prompt and returns output. The human decides what goes in and what comes out. An AI agent operates differently. It receives a goal, breaks it into tasks, and executes them across multiple systems. It can query databases, pull files, send emails, and update records without waiting for approval at each step.
In legal settings, this means an agent handling intake can pull prior case files, cross-reference client communications, and draft a summary memo. That's efficient. But every one of those steps involves accessing material that's potentially privileged. If the agent routes that data through an external API, stores it in a vendor's cloud, or logs it in a system that opposing counsel can subpoena, you've got a problem.
The distinction matters because courts evaluate privilege waiver based on the steps taken to maintain confidentiality. A lawyer who personally reviews a document before sharing it has taken a deliberate step. An AI agent that autonomously moves privileged data through three systems hasn't. The Morgan v. V2X protective order framework, covered in depth at /legal/ai-case-law/morgan-v-v2x/, already requires that AI tools processing discovery materials operate within "confined computing environments." Agents that call external APIs don't meet that standard.
Where Privilege Breaks Down with Autonomous AI
Privilege waiver happens when confidential information is disclosed to a third party, either intentionally or through negligence. With AI agents, the disclosure risk comes from three vectors.
First, data transit. Most AI agents rely on cloud-based LLMs. When an agent sends a privileged document to an API for processing, that document leaves the firm's control. Even if the vendor promises not to train on the data, the transmission itself is a disclosure. Under Federal Rule of Evidence 502(b), an inadvertent disclosure avoids waiver only if the holder took "reasonable steps" to prevent it. Routing privileged documents through a consumer AI API isn't reasonable.
Second, persistent memory. Some agent frameworks store conversation history and retrieved documents in vector databases or logs. If an agent stores a privileged memo in a retrieval-augmented generation (RAG) index, that memo is now accessible to any future query that hits the same index. If multiple matters share the same agent infrastructure, one client's privileged information becomes searchable in another client's context.
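One structural defense against that cross-matter contamination is to scope every index operation by matter ID. The sketch below is a minimal, hypothetical illustration (the `MatterScopedIndex` class and its keyword matching stand in for a real vector store); the point is that the matter filter is applied as a hard precondition, before any retrieval scoring, so a query run in one client's context can never surface another client's documents.

```python
# Hypothetical sketch: scoping a shared RAG index by matter ID so one
# client's privileged material can never surface in another's context.
# MatterScopedIndex and its substring "search" are illustrative stand-ins
# for a real vector database and similarity scoring.

class MatterScopedIndex:
    def __init__(self):
        self._docs = []  # (matter_id, text) pairs

    def add(self, matter_id: str, text: str) -> None:
        self._docs.append((matter_id, text))

    def search(self, matter_id: str, query: str) -> list:
        # Hard filter on matter_id BEFORE any relevance scoring, so a
        # query for matter B structurally cannot retrieve matter A's docs.
        candidates = [t for m, t in self._docs if m == matter_id]
        return [t for t in candidates if query.lower() in t.lower()]

index = MatterScopedIndex()
index.add("matter-A", "Privileged strategy memo: settle before trial.")
index.add("matter-B", "Deposition transcript excerpt on product testing.")

# A query in matter B's context never sees matter A's memo,
# even though both live in the same index.
hits = index.search("matter-B", "memo")  # → []
```

The design choice that matters here is where the filter sits: as a mandatory argument to every read, not as an optional post-retrieval cleanup step the agent can skip.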
Third, multi-step delegation. An agent tasked with "prepare a case summary" might retrieve emails, deposition transcripts, and internal strategy memos. It doesn't distinguish between what's privileged and what isn't. Without explicit access controls, the agent treats all information equally. That flattening of privilege boundaries is exactly the kind of negligence courts look for when evaluating waiver claims.
What Courts and Bar Associations Are Saying
No appellate court has issued a definitive ruling on AI agents and privilege waiver as of April 2026. But the signals are consistent. The Morgan v. V2X protective order from the Eastern District of Virginia (October 2025) established that AI tools processing confidential discovery materials must operate within confined environments with full audit trails. That framework applies directly to agents.
ABA Formal Opinion 512 (issued July 2024) addressed AI in legal practice broadly and stated that lawyers have a duty of competence in understanding how their AI tools handle confidential information. The opinion specifically noted that "automated systems that process client data without attorney oversight at each substantive step" create heightened confidentiality risk.
State bars are moving faster. The New York State Bar Association issued guidance in January 2026 requiring that any AI tool with access to client files must have documented data handling protocols. The California State Bar standing committee on professional responsibility published an interim opinion in March 2026 noting that "agentic AI systems" operating without human-in-the-loop review on privileged materials create "presumptive waiver risk." Florida and Texas bar associations have similar guidance in draft form.
The direction is clear. If your agent touches privileged material and you can't show exactly where that data went and who (or what) accessed it, courts will treat it as a failure of reasonable precautions.
What This Means for Your Firm
Start with an inventory. Identify every AI tool in your firm that operates with any degree of autonomy. This includes workflow automation, document processing pipelines, and any system that retrieves or routes information without a human approving each action. The audit framework at /legal/ai-tools/ covers the tool evaluation basics.
For each agent-like system, map the data flow. Where does input come from? Where does the processed output go? Is any data stored outside the firm's infrastructure? If the answer to that last question is yes, that agent shouldn't touch privileged material until the data handling is locked down.
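That inventory and mapping exercise can be kept as structured data rather than a memo, which makes the "does anything leave the firm?" check mechanical. The sketch below is a hypothetical illustration; the record fields, system names, and the `privileged_material_allowed` rule are assumptions for the example, not a standard schema.

```python
# Hypothetical data-flow inventory: one record per agent-like system,
# plus a check that flags anything storing data outside the firm.
from dataclasses import dataclass

@dataclass
class AgentDataFlow:
    name: str
    input_sources: list      # where input comes from
    output_targets: list     # where processed output goes
    stores_outside_firm: bool  # any data persisted off firm infrastructure?

def privileged_material_allowed(flow: AgentDataFlow) -> bool:
    # The audit rule: no privileged material until external storage
    # is locked down.
    return not flow.stores_outside_firm

flows = [
    AgentDataFlow("intake-summarizer", ["DMS"], ["matter file"],
                  stores_outside_firm=False),
    AgentDataFlow("doc-review-agent", ["DMS"], ["vendor cloud"],
                  stores_outside_firm=True),
]

blocked = [f.name for f in flows if not privileged_material_allowed(f)]
# blocked == ["doc-review-agent"]
```

Keeping the inventory in this form also means the list of blocked systems regenerates itself every time a flow record changes, instead of going stale in a document.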
Implement a privilege classification layer. Before an agent accesses any document, that document needs a privilege tag. Privileged materials should only be processable within on-premises or contractually confined environments. The agent should never be able to send privileged content to an external API without explicit human approval at that specific step.
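In code, that classification layer is a gate the agent cannot route around: the privilege tag travels with the document, and an external destination plus a privileged tag without recorded human approval is a hard failure, not a warning. The sketch below is a minimal illustration under those assumptions; the tag values, the `external:` destination convention, and `route_document` itself are hypothetical names, not a real framework's API.

```python
# Hypothetical privilege gate: a privileged document can never be sent to
# an external destination unless a human approved that specific step.

PRIVILEGED = "privileged"
NOT_PRIVILEGED = "not_privileged"

class PrivilegeGateError(Exception):
    """Raised when an agent tries to route privileged material externally."""

def route_document(doc: dict, destination: str, human_approved: bool = False) -> str:
    # Convention for this sketch: external endpoints are prefixed "external:".
    external = destination.startswith("external:")
    if doc["privilege_tag"] == PRIVILEGED and external and not human_approved:
        raise PrivilegeGateError(
            f"Blocked: privileged doc {doc['id']} to {destination} without approval"
        )
    return f"routed {doc['id']} to {destination}"

memo = {"id": "memo-17", "privilege_tag": PRIVILEGED}
transcript = {"id": "tx-04", "privilege_tag": NOT_PRIVILEGED}

route_document(transcript, "external:summarizer-api")          # allowed
route_document(memo, "internal:matter-file")                   # allowed
# route_document(memo, "external:summarizer-api")              # raises PrivilegeGateError
route_document(memo, "external:summarizer-api", human_approved=True)  # allowed
```

The approval flag is per call, which matches the guidance in the text: blanket pre-authorization of an agent is exactly what the gate is meant to prevent.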
Build audit logs for everything. Every document an agent accesses, every API call it makes, every output it generates. If opposing counsel challenges privilege, your defense is the log. No log, no defense. The AI governance policy framework covers the full checklist for internal policies that address these requirements.
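A log is only a defense if opposing counsel can't argue it was edited after the fact. One common way to make it tamper-evident is hash chaining: each entry includes a hash of the previous one, so any retroactive change breaks verification. The sketch below is a minimal, hypothetical illustration (`AuditLog` and its field names are assumptions); a real deployment would persist entries append-only, outside the agent's own write path.

```python
# Hypothetical tamper-evident audit log: each entry hashes the previous
# entry, so any after-the-fact edit breaks the chain on verification.
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, target: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action,
                "target": target, "prev_hash": prev_hash}
        # Hash is computed over the entry body, then stored alongside it.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        # Recompute every hash and recheck every back-link.
        prev = "genesis"
        for e in self.entries:
            expected = dict(e)
            stored_hash = expected.pop("hash")
            if expected["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()).hexdigest()
            if recomputed != stored_hash:
                return False
            prev = stored_hash
        return True

log = AuditLog()
log.record("intake-agent", "read", "memo-17")
log.record("intake-agent", "summarize", "memo-17")
log.verify()  # → True; editing any past entry would make this False
```

Every document access, API call, and output the agent produces gets a `record` call; `verify` is what you run when the log itself is challenged.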
The Bottom Line: AI agents are the most useful and most dangerous AI tools a law firm can deploy. Use them, but don't let them anywhere near privileged material without confined infrastructure, privilege tagging, and full audit trails.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
