In early 2025, a federal magistrate in the Eastern District of Virginia ruled that a party waived work-product protection after an attorney uploaded discovery documents into a consumer AI tool without enterprise data safeguards. The court treated the upload as voluntary disclosure to an unprotected third party. That ruling wasn't an outlier — it was the logical extension of privilege law applied to new technology.

The Morgan v. V2X protective order in the Southern District of Indiana had already established the framework in late 2024: AI tools used in litigation must meet the same confidentiality standards as any other vendor handling protected materials. Courts now require parties to identify AI tools, confirm data retention policies, and certify compliance with protective orders. If your firm uses AI in discovery without these guardrails, privilege waiver isn't a theoretical risk. It's already being litigated.

How AI Creates Privilege Waiver Risk in Discovery

Attorney-client privilege and work-product doctrine both depend on maintaining confidentiality. Voluntarily sharing protected information with a third party who isn't covered by the privilege destroys the protection. AI tools operated by third-party vendors are, in every legal sense, third parties.

The analysis is straightforward. When an attorney inputs privileged case strategy, client communications, or litigation analysis into a consumer AI tool, that data flows to the vendor's servers. If the vendor's terms of service permit data retention, model training, or access by vendor employees, the confidentiality required for privilege is broken. The attorney's intent doesn't matter; the disclosure happened.

Enterprise-tier AI tools change the analysis but don't eliminate the risk. ChatGPT Enterprise, Claude via API with enterprise terms, and legal-specific platforms like Harvey AI offer contractual guarantees that inputs will not be retained or used for model training. Under the Kovel doctrine and its modern extensions, these vendors can qualify as agents of the attorney if the contractual relationship is properly structured, preserving privilege. But the burden is on the producing party to prove those protections exist.

The Morgan v. V2X Framework Courts Are Adopting

The Morgan v. V2X protective order is the most significant judicial framework for AI in discovery. Issued in the Southern District of Indiana in late 2024, it established four requirements that courts are now replicating.

Tool identification. Parties must disclose the specific AI tools used for document review, privilege screening, or production. A general reference to "technology-assisted review" is not sufficient. Courts want the product name, the service tier, and the vendor.

Data retention confirmation. Parties must certify that AI tools used in discovery do not retain client data beyond the processing session and do not use inputs for model training. This goes beyond the vendor's marketing claims — it requires contractual verification.

Confidentiality compliance. AI tools must meet the same confidentiality standards as human reviewers under the protective order. If the order prohibits disclosure to non-parties, feeding data into a consumer AI tool violates the order even if no human at the vendor reads it.

Audit trail. Courts are requiring parties to maintain logs of AI-assisted review, including which documents were processed, when, by which tool, and who conducted the supervising human review. As of early 2026, at least 15 federal courts have adopted elements of this framework in standing orders or case-specific protective orders.

Building a Privilege-Safe AI Discovery Workflow

The goal isn't to avoid AI in discovery; for large document sets, AI review is faster and often more accurate than manual review. The goal is to build the workflow so privilege survives challenge.

Use enterprise tools with signed vendor agreements. Every AI tool touching discovery materials needs a vendor contract with explicit terms: zero data retention, no model training on inputs, encryption at rest and in transit, SOC 2 Type II certification, and breach notification within 24 hours. No exceptions.

Document the chain of custody. For every batch of documents processed through AI, log the tool, date, operator, input count, and output. This creates the audit trail courts are requiring under Morgan v. V2X-style orders. Your firm's incident response plan should cover what happens if the audit trail reveals a problem.
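As a concrete illustration, a batch-level log entry might look like the sketch below. This is a hypothetical structure, not a format any court has prescribed; the field names (`AIReviewLogEntry`, `batch_id`, `reviewing_attorney`, etc.) are assumptions chosen to capture the elements Morgan v. V2X-style orders ask for.

```python
# Hypothetical chain-of-custody log entry for AI-assisted review.
# Fields are illustrative, not a court-mandated schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIReviewLogEntry:
    tool: str                # product name and service tier
    vendor: str
    batch_id: str
    processed_at: str        # ISO 8601 timestamp of the processing run
    operator: str            # who submitted the batch
    input_count: int         # documents submitted to the tool
    output_count: int        # documents returned or flagged
    reviewing_attorney: str  # who conducted the supervising human review

entry = AIReviewLogEntry(
    tool="ExampleReviewAI (Enterprise tier)",
    vendor="Example Vendor, Inc.",
    batch_id="BATCH-0042",
    processed_at=datetime.now(timezone.utc).isoformat(),
    operator="paralegal.jdoe",
    input_count=1200,
    output_count=1200,
    reviewing_attorney="attorney.msmith",
)

# One JSON line per batch keeps the log append-only and easy to produce
# if opposing counsel demands the audit trail.
print(json.dumps(asdict(entry)))
```

One line per batch, written at processing time rather than reconstructed later, is the design choice that matters: a contemporaneous log is far harder to challenge than one assembled after a motion to compel.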

Layer human review on privilege determinations. AI handles first-pass review to flag potentially privileged documents. A licensed attorney makes every final privilege call. Document that review. Courts have rejected privilege logs where AI made the determination without attorney verification.
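The review gate described above can be enforced in software rather than by policy alone. The sketch below is a minimal illustration, assuming a two-field record (`ai_flagged` vs. `attorney_decision`) of my own invention; the point is that no privilege designation can be read out of the system without a recorded attorney determination.

```python
# Hypothetical privilege-review gate: AI flags, a licensed attorney decides.
# Record structure and function names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivilegeCall:
    doc_id: str
    ai_flagged: bool                          # first-pass AI suggestion only
    attorney_decision: Optional[bool] = None  # final call, attorney-only
    attorney_id: Optional[str] = None         # who made the final call

def finalize(call: PrivilegeCall, decision: bool, attorney_id: str) -> PrivilegeCall:
    """Record the attorney's determination; the AI flag never stands alone."""
    call.attorney_decision = decision
    call.attorney_id = attorney_id
    return call

def is_privileged(call: PrivilegeCall) -> bool:
    # Raise rather than fall back to the AI flag: an unverified call
    # must never appear on a privilege log.
    if call.attorney_decision is None:
        raise ValueError(f"{call.doc_id}: no attorney determination recorded")
    return call.attorney_decision

call = finalize(
    PrivilegeCall("DOC-0917", ai_flagged=True),
    decision=True,
    attorney_id="attorney.msmith",
)
```

Making the unverified path raise an error, instead of silently deferring to the AI flag, is what produces the documentation courts have asked for: every entry on the privilege log carries an attorney's name.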

Request a 502(d) order in every case. A Federal Rule of Evidence 502(d) order protects against inadvertent privilege waiver. Draft it to explicitly cover AI-related inadvertent disclosures — including scenarios where AI incorrectly designates a privileged document as non-privileged. This is your safety net, and it costs nothing to request.

What This Means for Your Firm

AI-assisted discovery reduces cost and improves accuracy on large document sets. But it introduces a privilege waiver vector that traditional review workflows don't have, and opposing counsel are learning to exploit it.

The playbook for challenging AI-assisted privilege is already emerging: subpoena the vendor agreement, demand the audit trail, challenge whether the AI tool's data handling met the protective order's confidentiality requirements. If your firm can't produce those records, the privilege argument collapses.

Review your current discovery workflow against the Morgan v. V2X framework. Confirm every AI tool has a signed vendor agreement with data protection terms. Ensure your privilege review process includes documented attorney oversight. And request a 502(d) order in every case where AI touches discovery materials. The firms doing this are saving 40-60% on document review costs without taking on waiver exposure. The firms that aren't are one motion to compel away from a waiver finding.

The Bottom Line: AI in discovery isn't the risk. Ungoverned AI in discovery is. Build the workflow to survive a privilege challenge, or assume privilege is waived the moment you hit enter.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.