When Morgan v. V2X landed in October 2025, the Eastern District of Virginia didn't just enter a protective order for one case. It wrote the blueprint for how every law firm should evaluate AI vendor relationships. The order required confined computing environments, zero training on case data, full audit trails, and data deletion on demand. Most AI vendor contracts in circulation today don't meet a single one of those requirements.
That's the gap. Law firms are signing AI vendor agreements drafted by the vendors themselves, with terms designed to protect the vendor's business model. Data retention clauses are vague. Training opt-outs are buried in settings pages instead of contract language. Liability caps sit at the subscription fee. And firms are feeding confidential client data into these systems daily.
The fix isn't complicated, but it requires treating AI procurement like what it is: a data handling relationship with direct malpractice exposure. Here's what your contracts need to say.
The Morgan v. V2X Baseline
The protective order in Morgan v. V2X (E.D. Va., Case No. 1:24-cv-01492, October 2025) established five core requirements for AI tools handling confidential litigation materials. These aren't suggestions. They're conditions imposed by the court, and they set the floor for what other courts will expect. Full case analysis is at /legal/ai-case-law/morgan-v-v2x/.
First, confined computing environments. The AI must process data within infrastructure that's isolated from other clients and the vendor's general systems. No shared multi-tenant processing of confidential legal data. Second, no model training on case data. The vendor can't use any input or output from the firm's usage to train, fine-tune, or improve its models. Third, complete audit trails. Every query, every document processed, every output generated must be logged and available to the firm.
Fourth, data deletion on demand. When the engagement ends or when the firm requests it, all data must be purged from the vendor's systems, including backups, logs, and any derived data. Fifth, disclosure compliance. The vendor must support the firm's ability to comply with court-ordered AI disclosure requirements, including providing documentation of what AI was used and how.
These five points should be the starting position for any AI vendor negotiation. If a vendor won't agree to them, they're telling you their infrastructure can't support law firm use. Believe them.
Contract Clauses Most Firms Are Missing
Beyond the Morgan v. V2X baseline, there are specific contract provisions that most law firms overlook when signing AI vendor agreements.
Sub-processor restrictions. Many AI vendors use third-party infrastructure providers (AWS, Azure, GCP) and sometimes additional sub-processors for specific features. Your contract should list every sub-processor, require notice before any change, and give you the right to terminate if a new sub-processor doesn't meet your security requirements. Without this clause, your client data can end up processed by companies you've never evaluated.
Data residency requirements. If your firm handles matters with cross-border implications, the contract must specify where data is processed and stored. GDPR restricts transfers of EU personal data to jurisdictions without adequate protections, and the EU AI Act (whose core obligations take effect in August 2026) adds its own governance and documentation requirements. Even for US-only firms, some state privacy laws impose their own restrictions on where and how data is processed. See the full EU AI Act analysis at /legal/eu-ai-act-law-firms-2026/.
Breach notification timelines. Standard vendor contracts often allow 72 hours or more for breach notification. For law firms handling privileged data, that's too long. Push for 24-hour notification for any unauthorized access to your firm's data. Include specific requirements for what the notification must contain: scope of data affected, timeline of the breach, and remediation steps.
Indemnification for AI-specific failures. Generic indemnification clauses don't cover AI-specific risks like hallucinated citations in legal research, incorrect case summaries that lead to missed deadlines, or training data contamination. Your contract should include specific indemnification for losses arising from AI output errors when the firm relied on the tool within its documented use cases.
Red Flags in Existing AI Vendor Agreements
Review your current AI vendor contracts for these specific red flags. If you find them, renegotiate before your next renewal.
"Aggregated and de-identified data" exceptions. Many vendors carve out an exception allowing them to use "aggregated and de-identified" data from your usage for product improvement. In legal contexts, this is dangerous. Legal documents contain unique fact patterns, party names, and strategic language that's difficult to truly de-identify. If the vendor is keeping any derivative of your data, you need to know exactly what and why.
Unilateral terms changes. SaaS agreements commonly include a clause allowing the vendor to modify terms with 30 days' notice. For a law firm relying on specific data handling commitments, this means the protections you negotiated can disappear with an email. Lock your key provisions (data handling, training opt-out, audit rights) into a master agreement that can't be modified unilaterally.
Liability caps at subscription fees. If your firm pays $50,000/year for an AI tool and a data breach exposes client information worth millions in malpractice exposure, a liability cap at the subscription amount is meaningless. Negotiate liability provisions that reflect the actual risk: the value of the data being processed, not the price of the software.
No right to audit. If you can't verify what the vendor is doing with your data, the contract's promises are unenforceable. Insist on annual audit rights, including the ability to commission third-party security assessments of the vendor's infrastructure. SOC 2 Type II reports are a minimum, not a substitute for audit rights.
What This Means for Your Firm
Pull every AI vendor contract your firm has signed in the last 24 months. That includes the enterprise tools your IT department procured, the research platforms your associates use, and the consumer AI subscriptions partners expense on their cards. The AI audit guide covers the full discovery process.
For each contract, check the five Morgan v. V2X baseline requirements. Mark which ones are met and which aren't. Any tool that processes client data without meeting all five should be flagged for renegotiation or replacement.
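For firms that want to track this review in software rather than a spreadsheet, here is a minimal sketch of the check, assuming IT or the general counsel's office keeps a simple contract inventory. The vendor names and findings below are hypothetical placeholders; substitute the results of your own contract review.

```python
# Minimal sketch of the Morgan v. V2X baseline check against a contract
# inventory. Vendor names and findings are hypothetical placeholders.

BASELINE = [
    "confined_computing_environment",   # isolated, non-multi-tenant processing
    "no_training_on_case_data",         # no training or fine-tuning on firm inputs/outputs
    "complete_audit_trails",            # every query, document, and output logged
    "data_deletion_on_demand",          # purge of data, backups, logs, derived data
    "disclosure_compliance_support",    # documentation for court-ordered AI disclosure
]

# One entry per contract: which baseline requirements the signed terms actually meet.
contracts = {
    "ExampleResearchPlatform": {"no_training_on_case_data", "complete_audit_trails"},
    "ExampleDraftingTool": set(BASELINE),  # meets all five
}


def flag_for_renegotiation(inventory):
    """Return contracts missing any of the five baseline requirements."""
    flagged = {}
    for name, met in inventory.items():
        missing = [req for req in BASELINE if req not in met]
        if missing:
            flagged[name] = missing
    return flagged


if __name__ == "__main__":
    for name, missing in flag_for_renegotiation(contracts).items():
        print(f"{name}: renegotiate or replace (missing: {', '.join(missing)})")
```

The point isn't the tooling. It's that the five requirements are binary: a contract either commits to each one in writing or it doesn't, and anything short of all five gets flagged.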
Create a standard AI vendor addendum that includes the additional provisions covered above: sub-processor restrictions, data residency, breach notification timelines, AI-specific indemnification, and audit rights. Attach this addendum to every new AI vendor agreement going forward. Most vendors will push back on some points. That pushback tells you which risks they're not willing to own.
Set a review cadence. AI vendor contracts shouldn't be signed and forgotten. Schedule quarterly reviews of data handling compliance, annual security audits, and an immediate review whenever the vendor changes its terms, sub-processors, or infrastructure. The firms that treat AI vendors like any other data custodian will avoid the malpractice exposure that's coming for firms that don't.
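If your vendor inventory lives in software, the recurring portion of that cadence can be encoded as simple interval-based checks. This is an illustrative sketch only; the task names and dates are made up, and the trigger-based reviews (terms, sub-processor, or infrastructure changes) are event-driven rather than scheduled.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative review schedule mirroring the cadence above.
# Dates and task names are hypothetical placeholders.

@dataclass
class RecurringReview:
    name: str
    interval_days: int
    last_completed: date

    def next_due(self) -> date:
        return self.last_completed + timedelta(days=self.interval_days)


schedule = [
    RecurringReview("Data handling compliance review", 90, date(2025, 10, 1)),
    RecurringReview("Vendor security audit", 365, date(2025, 6, 15)),
]

for review in schedule:
    status = "OVERDUE" if review.next_due() < date.today() else "scheduled"
    print(f"{review.name}: next due {review.next_due()} ({status})")
```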
The Bottom Line: Your AI vendor contract is your first line of defense against malpractice exposure. If it doesn't meet the Morgan v. V2X baseline, you're operating on borrowed time.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
