Every time you paste client information into an AI tool, you're making a privilege decision -- whether you realize it or not. The Heppner analysis suggests that inputting privileged communications into consumer AI tools that train on user data could constitute voluntary disclosure, waiving privilege for the entire subject matter under the broad waiver doctrine.

This isn't hypothetical risk. It's the kind of issue that shows up in discovery motions, and once privilege is waived, you can't un-waive it. Here's the playbook for protecting privilege while still getting the benefits of AI.


The Privilege Framework for AI: What's at Stake

Attorney-client privilege protects confidential communications made for the purpose of obtaining legal advice. The privilege belongs to the client, and the attorney has a duty to protect it. Three elements are required: (1) a communication, (2) made in confidence, (3) for the purpose of legal advice.

AI creates a privilege problem at element (2) -- confidentiality. When you input privileged information into an AI tool, you're sharing it with a third party (the AI vendor). Whether that disclosure destroys confidentiality depends on the nature of the tool and the relationship with the vendor.

The critical distinction:

- Enterprise AI tools with DPAs: The vendor is a service provider with contractual confidentiality obligations. This is analogous to sharing privileged information with a cloud storage provider, litigation support vendor, or outside copy service -- all of which courts have held do not waive privilege when proper safeguards exist.
- Consumer AI tools: The vendor's terms of service typically allow use of inputs for model training. This is a voluntary disclosure to a third party without confidentiality protections -- exactly the scenario that waives privilege.

The *Heppner v. Doe* analysis (applied to AI by multiple commentators) holds that when a party voluntarily discloses privileged information to a third party without a reasonable expectation of confidentiality, the privilege is waived. Consumer AI tools fit this description.

The Kovel Doctrine: Does It Protect AI Use?

The Kovel doctrine (from *United States v. Kovel*, 296 F.2d 918 (2d Cir. 1961)) extends attorney-client privilege to communications with agents who assist the attorney in providing legal services -- accountants, translators, investigators, and similar professionals.

Can an AI tool qualify as a Kovel agent? The legal community is split:

Arguments that AI qualifies:

- AI tools function as agents assisting the attorney in providing legal services, similar to a legal research assistant or document review vendor
- The attorney directs the AI's work and uses its output to serve the client
- Enterprise AI vendors with DPAs have confidentiality obligations similar to other Kovel agents

Arguments against:

- Kovel agents are typically human professionals with independent duties of confidentiality
- AI vendors are corporate entities with commercial interests that may conflict with privilege protection
- The 'agent' (the AI model) may process information in ways that compromise confidentiality even with contractual protections
- Courts haven't ruled on AI as a Kovel agent, and extending the doctrine requires judicial action

The current reality: No court has definitively ruled on whether AI tools qualify as Kovel agents. Until case law develops, don't rely on Kovel as your primary privilege protection. Use it as a backup argument, but build your privilege strategy on contractual protections (DPAs) and operational controls.

The Privilege Protection Playbook: 7 Rules

Rule 1: Enterprise tools only. Never use consumer AI tools for privileged information. No ChatGPT free/Plus, no Claude free, no Gemini free. Enterprise tiers with signed DPAs are the minimum threshold.

Rule 2: DPA before data. Sign a data processing agreement with every AI vendor before any attorney inputs client data. The DPA must include no-training commitments, minimal data retention, and confidentiality obligations. No DPA, no client data -- period.

Rule 3: Anonymize when possible. Before inputting information into AI, remove client names, case numbers, party names, and specific identifying details. Ask the legal question abstractly: 'In a breach of fiduciary duty claim in Texas, what is the statute of limitations?' instead of 'My client John Smith was defrauded by his business partner Bob Jones in Houston...'
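The pre-input scrubbing Rule 3 describes can be partially automated. The sketch below is illustrative only: the regex patterns, placeholder labels, and the per-matter name list are assumptions, and no script substitutes for attorney review of what actually goes into the prompt.

```python
import re

# Illustrative redaction patterns -- formats vary by jurisdiction and matter,
# so these specific patterns are assumptions, not a complete solution.
PATTERNS = {
    "case_number": re.compile(r"\b\d{2}-[A-Z]{2}-\d{4,6}\b"),  # e.g. 24-CV-01234
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Known party names for this engagement, maintained per-matter (assumption).
PARTY_NAMES = ["John Smith", "Bob Jones"]

def scrub(text: str) -> str:
    """Replace identifying details with generic placeholders before AI input."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    for i, name in enumerate(PARTY_NAMES, start=1):
        text = text.replace(name, f"[PARTY_{i}]")
    return text

prompt = "My client John Smith was defrauded by his partner Bob Jones in case 24-CV-01234."
print(scrub(prompt))
# -> My client [PARTY_1] was defrauded by his partner [PARTY_2] in case [CASE_NUMBER].
```

The point of the placeholder approach is that the legal substance of the question survives while the identifying details do not -- which is exactly the abstract-question framing Rule 3 recommends.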

Rule 4: Document the Kovel relationship. In your engagement letters, include language identifying AI tools as agents of the attorney used for the purpose of providing legal services. This creates a record supporting Kovel protection if challenged.

Rule 5: Engagement letter disclosure. Inform clients that your firm uses AI tools with enterprise-grade security for legal analysis. Get informed consent. This isn't just good practice -- some jurisdictions require it.

Rule 6: Segregate AI interactions. Keep AI-related work in systems that can be logged and audited. If a privilege challenge arises, you need to show exactly what was input, what tool was used, and what protections were in place.
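The logging-and-audit requirement in Rule 6 can be sketched as a simple append-only record. This Python sketch is illustrative: the file name, field names, and matter-ID format are assumptions, and a real firm would write into its document management or SIEM system rather than a local file.

```python
import json
import hashlib
from datetime import datetime, timezone

# Illustrative audit log location -- an assumption, not a standard.
AUDIT_LOG = "ai_audit_log.jsonl"

def log_ai_interaction(matter_id: str, tool: str, prompt: str, dpa_on_file: bool) -> dict:
    """Record what was input, which tool was used, and what protections applied.

    Stores a hash of the prompt rather than the prompt itself, so the log
    can be produced in a privilege dispute without re-disclosing content.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "dpa_on_file": dpa_on_file,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_ai_interaction("M-2024-001", "Enterprise AI (DPA on file)", "Limitations period question", True)
```

Hashing the prompt is a deliberate design choice: it lets you prove exactly what was submitted and when without the audit trail itself becoming a second disclosure of privileged content.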

Rule 7: Monitor and update. AI vendors change their terms of service, data handling practices, and model training approaches. Review your vendors' practices quarterly. A tool that was safe last quarter may not be safe today.

What to Do When Opposing Counsel Challenges AI Privilege

Expect this motion: 'Plaintiff's counsel waived privilege by inputting privileged communications into [AI tool], which processes data on shared servers and uses inputs for model improvement.'

Your response framework:

If you used enterprise tools with DPAs:

1. Produce the DPA showing no-training, no-retention commitments
2. Cite the vendor's SOC 2 Type II report showing security controls
3. Argue that the relationship is analogous to other third-party service providers (cloud hosting, litigation support) that courts have held don't waive privilege
4. Reference the Kovel doctrine as additional support
5. Show your firm's AI policy demonstrating systematic privilege protection

If you used consumer tools (worst case):

1. Assess the scope of disclosure -- what specific information was input?
2. Consider whether subject-matter waiver applies or only the specific communication is waived
3. Evaluate whether the information was truly privileged or could be characterized as non-privileged work product
4. Prepare for the possibility that the court finds waiver
5. Immediately transition to enterprise tools and document the change

Prevention is everything. The best response to a privilege challenge is one you never have to make. If your firm has a written AI policy, approved tool list, DPAs on file, and documented verification workflows, the motion has almost no chance of succeeding.

Work Product Doctrine: A Separate (and Stronger) Protection

Don't confuse attorney-client privilege with work product protection. They're separate doctrines, and work product may actually be easier to protect in the AI context.

The work product doctrine protects documents and tangible things prepared in anticipation of litigation. AI-assisted research memos, draft briefs, and analytical summaries prepared for litigation purposes qualify as work product -- and unlike privilege, work product protection isn't automatically destroyed by disclosure to a third party; it's generally lost only when a disclosure substantially increases the likelihood that an adversary will obtain the materials.

Opinion work product (documents reflecting attorney mental processes, conclusions, opinions, and legal theories) gets near-absolute protection. If your AI interaction involves analyzing litigation strategy, evaluating case strengths, or developing legal theories, the output is likely opinion work product regardless of the tool used.

Ordinary work product (factual investigation documents) gets qualified protection that can be overcome by showing substantial need. AI-assisted fact compilation falls here.

The practical implication: Even if a court found that privilege was waived by AI use (unlikely with enterprise tools), work product protection provides an independent basis for keeping AI-assisted litigation materials confidential. This is your backup argument -- always assert both privilege and work product protection.

The Bottom Line: Protect privilege by using only enterprise AI tools with signed DPAs, anonymizing client information before input, documenting AI tools as Kovel agents in engagement letters, and getting client consent. Consumer AI tools create serious waiver risk under the Heppner analysis. No court has ruled definitively on AI and privilege, so build your protection on contractual safeguards, not untested legal theories.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.