Every AI tool your firm uses is a data pipeline. The question isn't whether it works well. It's where client data goes after you hit enter. In January 2025, the Morgan v. V2X protective order gave federal courts their first framework for answering that question. It listed specific criteria any AI tool must meet before touching confidential information. Most firms still haven't applied those criteria to the tools their attorneys use daily.
The gap between "this tool is useful" and "this tool is safe for client data" is where malpractice exposure lives. An associate who uses an AI tool to summarize a deposition transcript just sent that transcript somewhere. If the firm can't say exactly where, how long it's retained, and who else can access it, the firm has a confidentiality problem that no disclaimer can fix.
The Six-Factor Risk Framework
Before any AI tool processes client data, evaluate it against six factors. This framework is derived from the Morgan v. V2X protective order requirements and maps to ABA Model Rule 1.6 confidentiality obligations.
1. Data Retention. Does the vendor retain your inputs? For how long? Is retention configurable? Consumer-grade tools typically retain inputs indefinitely. Enterprise tools should offer zero-retention or configurable retention windows. Get this in writing.
2. Training Exclusion. Does the vendor use your inputs to train or fine-tune their models? If yes, your client data becomes part of the model and can surface in other users' outputs. This is non-negotiable: any tool that trains on inputs is disqualified from handling confidential data.
3. Access Controls. Who at the vendor can access your data? Is it encrypted at rest and in transit? Are there role-based access controls? Can you restrict access by matter or practice group? Look for SOC 2 Type II certification as a baseline.
4. Audit Logging. Can you track who used the tool, when, on which matter, and what data was processed? Without audit logging, you can't demonstrate compliance, investigate incidents, or respond to bar inquiries. This is table stakes for any enterprise deployment.
5. Subprocessor Transparency. Does the vendor use third-party subprocessors? Where are they located? What data do they access? A tool that routes data through five subprocessors in three countries creates jurisdictional complexity that most firms aren't equipped to evaluate.
6. Contractual Commitments. Are all of the above guaranteed in a legally binding agreement, or just described in a terms-of-service page that the vendor can change unilaterally? If it's not in the contract, it's not a commitment.
How to Apply This Framework in Practice
Start with your existing tool inventory. List every AI tool in use at the firm, including the ones IT didn't approve. Every firm that's done a thorough audit has found tools it didn't know about: consumer AI subscriptions on personal devices, browser extensions with AI features, and AI-powered transcription services used in depositions.
Score each tool against the six factors. Use a simple pass/fail for each criterion. A tool that fails on any of the first three factors (retention, training, access controls) should be removed from client work immediately. Failures on factors four through six (audit logging, subprocessors, contracts) are serious but can be remediated through vendor negotiation.
Create three tiers. Approved for all client data: tools that pass all six factors with contractual commitments. Approved for non-confidential use: tools that pass factors 1-3 but lack full audit logging or contractual depth. Prohibited: everything else. Every attorney should know which tier each tool falls into.
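The scoring-and-tiering step above is mechanical enough to encode. The sketch below is illustrative only, not firm policy: the factor names, tier labels, and threshold logic follow the framework described here, but the specific data structures are assumptions chosen for demonstration.

```python
# The six factors, in the order defined above. Per the framework,
# failing any of the first three is disqualifying; failures on the
# last three are serious but remediable through vendor negotiation.
FACTORS = [
    "data_retention",
    "training_exclusion",
    "access_controls",
    "audit_logging",
    "subprocessor_transparency",
    "contractual_commitments",
]
CRITICAL = set(FACTORS[:3])  # fail any of these -> prohibited

def assign_tier(scores: dict[str, bool]) -> str:
    """Map pass/fail scores on the six factors to one of three tiers."""
    missing = set(FACTORS) - set(scores)
    if missing:
        raise ValueError(f"unscored factors: {sorted(missing)}")
    if not all(scores[f] for f in CRITICAL):
        return "prohibited"                    # remove from client work
    if all(scores.values()):
        return "approved_all_client_data"      # passes all six
    return "approved_non_confidential"         # passes 1-3, gaps in 4-6

# Example: zero retention, no training, solid access controls,
# but incomplete audit logging and no signed contract yet.
example = {
    "data_retention": True,
    "training_exclusion": True,
    "access_controls": True,
    "audit_logging": False,
    "subprocessor_transparency": True,
    "contractual_commitments": False,
}
print(assign_tier(example))  # -> approved_non_confidential
```

Even a spreadsheet version of this logic works; the point is that the tier assignment is deterministic and documented, so two reviewers scoring the same vendor reach the same conclusion.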
Document the evaluation. When a bar association, court, or client asks how your firm evaluated its AI tools, you need to show the analysis. A one-page evaluation sheet per tool, updated annually, creates a defensible record.
Common Failures and Red Flags
"We don't train on your data" with an asterisk. Some vendors exclude your data from model training but still use it for "product improvement," "safety research," or "aggregate analytics." Read the full data processing terms. If the vendor touches your data for any purpose beyond providing the service, that's a flag.
Terms of service instead of contracts. A consumer tool's TOS is a unilateral document the vendor can change at any time with notice buried in an email. Enterprise contracts with negotiated data processing addendums are the standard for confidential data. If the vendor won't sign a DPA, the tool isn't ready for law firm use.
Vague subprocessor disclosures. "We use industry-standard cloud providers" tells you nothing. You need named subprocessors, their roles, their locations, and the data they access. The EU AI Act and GDPR make this a compliance requirement for firms with European exposure.
No incident response commitment. Ask the vendor: if your data is breached, when will you be notified? Some vendors commit to 24-hour notification. Others have no commitment at all. For a law firm holding privileged data, a vendor without a breach notification timeline is an unacceptable risk.
What This Means for Your Firm
Build the evaluation framework once and apply it to every new tool. Make it part of your procurement process, not an afterthought. The time to evaluate risk is before the tool is deployed, not after a bar complaint or a breach.
Assign ownership. Someone at the firm needs to own the AI tool evaluation process. This isn't a one-time project. New tools emerge monthly. Existing vendors change their terms. The person who owns this needs authority to pull tools that fall out of compliance and enough technical literacy to evaluate vendor claims.
Share results with your clients. Sophisticated clients, especially in financial services, healthcare, and government contracting, are asking outside counsel about their AI data handling. A firm that can produce a documented evaluation framework wins trust that competitors can't match. This isn't just risk management. It's competitive positioning.
The Bottom Line: If you can't explain exactly where client data goes when it enters an AI tool, you don't have an AI strategy. You have an unmanaged liability.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
