Morgan v. V2X is the first major ruling on AI and discovery confidentiality. A Colorado federal court held that uploading confidential discovery materials to consumer AI tools violates protective orders unless specific safeguards are in place. Every attorney handling confidential documents needs to understand this ruling, because it effectively bars standard consumer AI tools from processing protected material.
Background
Morgan v. V2X started as a routine employment discrimination case in the District of Colorado, filed under Case No. 25-1991. The plaintiff, proceeding pro se (without an attorney), received confidential discovery materials from the defendant, V2X, Inc., under a standard protective order. These materials contained the kind of sensitive information that protective orders exist to safeguard.
The plaintiff uploaded confidential discovery materials to consumer AI tools for help analyzing the documents. This is increasingly common: parties use ChatGPT, Claude, or other AI assistants to help understand complex legal documents, draft responses, or identify relevant information. For a pro se litigant without legal training, the appeal of AI assistance is obvious.
V2X discovered the plaintiff's AI use and moved to restrict the practice, arguing it violated the protective order. The defendant's concern was specific: consumer AI tools process user inputs on remote servers, and some use that data to train their models. Uploading confidential discovery materials to these platforms could expose protected information to third parties or permanently embed it in AI training data.
What Happened
Magistrate Judge Maritza Dominguez Braswell faced a question no court had squarely addressed: does uploading confidential discovery materials to AI tools violate a protective order, and if so, what safeguards are needed? The court took the question seriously, issuing a detailed ruling on March 30, 2026, that addressed both the work product implications and the practical risks.
The court first held that a party's use of AI in litigation may reflect mental impressions and legal theories protected as work product under Rule 26(b)(3). What you ask an AI, how you frame your questions, and what documents you select for analysis can reveal your litigation strategy. That's protected.
But the court also found that the identity of the AI tool used is not protected. Opposing parties and the court have a right to know which AI platforms are processing confidential materials. This distinction matters: your strategy is protected, but your tool choices aren't.
The Ruling
The court amended the protective order to impose three specific requirements for any use of AI tools with confidential discovery materials. First, the AI provider must be contractually barred from using input data for training purposes. Second, the provider cannot disclose inputs to third parties. Third, the provider must allow deletion of all confidential data upon request.
These three requirements effectively ban standard consumer AI tools from processing confidential discovery materials. Free-tier ChatGPT, for example, uses conversations for model training by default. Even paid versions of consumer AI tools don't necessarily provide contractual guarantees about data handling. Only enterprise-grade AI platforms with specific data processing agreements qualify.
The court also required the plaintiff to disclose which AI tools had been used with confidential materials. This disclosure requirement creates accountability: parties can't quietly upload protected documents to AI tools and hope nobody notices. The ruling creates a framework that balances AI's practical benefits with the confidentiality obligations that discovery depends on.
Outcome: The court held that the substance of a litigant's AI use may be protected by the work product doctrine under Rule 26(b)(3), but that the identity of the tools used is not, and ordered the plaintiff to disclose which AI tools had been used with confidential materials. The court amended the protective order to impose three requirements: AI providers must be contractually barred from training on inputs, cannot disclose inputs to third parties, and must allow deletion of all confidential data.
Why This Case Matters
This ruling changes how every litigator handles confidential documents. Before Morgan v. V2X, there was no clear rule about using AI tools with discovery materials. Some attorneys used consumer AI freely. Others avoided it entirely. Now there's a framework: you can use AI, but only with platforms that meet the court's three requirements.
The practical impact is immediate. Any firm handling cases with protective orders needs to audit which AI tools their attorneys and staff are using. Consumer-grade ChatGPT, free Claude, Google Gemini without enterprise agreements: all are off-limits for confidential materials under this framework. Firms need enterprise AI platforms with proper data processing agreements or risk violating protective orders.
Other courts are already citing Morgan v. V2X when drafting AI-aware protective orders. The three requirements (no training, no third-party disclosure, deletion rights) are becoming the standard template. This isn't a one-court anomaly. It's the beginning of a national framework for AI and confidentiality in litigation.
Lessons for Attorneys
Every managing partner needs to issue clear guidance on AI tool use with confidential materials. The Morgan v. V2X framework gives you the standard: enterprise platforms with contractual data protections only. No consumer AI tools for anything covered by a protective order. Train your attorneys and staff on the distinction, because a single upload to the wrong platform can violate a court order.
For litigators specifically: review every active case with a protective order and assess whether any AI tools have been used with protected materials. If they have, determine whether the tools meet the court's three requirements. If they don't, you may need to disclose the issue to the court and opposing counsel. Self-reporting is painful, but it's better than having it discovered later.
For solo practitioners and small firms without enterprise AI budgets: this ruling creates a cost barrier. Enterprise AI platforms with proper data agreements are more expensive than consumer tools. But the alternative is risking sanctions for violating protective orders. Factor AI platform costs into your litigation budget the same way you factor in Westlaw or document review tools.
The Bottom Line
Morgan v. V2X bars consumer AI tools from processing confidential discovery materials and establishes a three-part test for AI platforms that can: no training on inputs, no third-party disclosure, and deletion rights. Every firm handling protected materials needs enterprise-grade AI platforms or needs to stop using AI for those documents.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.