In January 2025, Judge Michael Baylson issued a protective order in Morgan v. V2X that split AI tools into two categories: enterprise-grade and consumer-grade. The distinction wasn't academic. It determined which AI tools attorneys could use to review classified and confidential discovery materials. For the first time, a federal court drew a bright line between a $20/month ChatGPT subscription and a governed enterprise deployment.

That ruling matters because most law firms are still operating in the gray zone. Partners approve enterprise contracts with legal AI vendors while associates paste client facts into free-tier chatbots during lunch. The court in Morgan v. V2X didn't care about branding or price. It cared about data handling, access controls, and whether the AI vendor could be held accountable. That's the standard now.


What Morgan v. V2X Actually Established

The Morgan v. V2X protective order required that any AI tool used on discovery materials meet specific criteria: no training on input data, audit logging, access controls tied to individual users, and contractual data protection commitments from the vendor. Consumer AI tools failed every one of those tests.
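
To make that bright line concrete, here's a minimal sketch of those four criteria expressed as an internal vetting check. The field names and the example profile are illustrative assumptions, not language from the order:

```python
from dataclasses import dataclass

@dataclass
class AIToolProfile:
    """Illustrative vendor profile; field names are hypothetical."""
    name: str
    trains_on_inputs: bool             # vendor may use inputs for model training
    audit_logging: bool                # per-request logs available to the firm
    per_user_access_controls: bool     # access tied to individual, named users
    contractual_data_protection: bool  # signed data protection commitments

def meets_protective_order_criteria(tool: AIToolProfile) -> bool:
    """Apply the four criteria above: fail any one, fail them all."""
    return (
        not tool.trains_on_inputs
        and tool.audit_logging
        and tool.per_user_access_controls
        and tool.contractual_data_protection
    )

# A free-tier consumer chatbot fails on every criterion.
consumer = AIToolProfile("free-tier chatbot", True, False, False, False)
assert not meets_protective_order_criteria(consumer)
```

The fail-any-fail-all structure is the point: the criteria are conjunctive, so a tool that logs everything but still trains on your inputs doesn't pass.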

The court didn't ban AI. It banned ungoverned AI. Consumer tools like free-tier ChatGPT, Gemini, and Claude send data to shared infrastructure with no guarantees about retention, training, or access. Enterprise deployments from the same providers offer data isolation, SOC 2 compliance, and contractual commitments. Same model, completely different risk profile.

This is the distinction most firms miss. The model isn't the risk. The system around the model is.

Where Consumer AI Creates Real Exposure

A 2024 survey by the American Bar Association found that 35% of attorneys had used generative AI in their practice, but only 12% of firms had formal AI policies. That gap is where exposure lives. Without a policy, every attorney makes their own judgment call about what counts as "safe enough."

Consumer AI tools create three categories of risk for law firms. First, data leakage: inputs to consumer models are often used for training or stored without clear retention limits. Second, privilege waiver: if confidential client communications pass through a third-party consumer service without proper safeguards, opposing counsel has an argument that privilege was waived. Third, bar compliance: multiple state bars including Florida, California, and New York now have guidance requiring attorneys to understand the technology they use. "I didn't know it was consumer-grade" isn't a defense.

The practical problem is that consumer AI is easy. It's fast, it's free, and it works well enough for a first draft. That's exactly why it's dangerous. The path of least resistance leads directly to ungoverned use.

What Makes Enterprise AI Actually Enterprise

Enterprise isn't a marketing label. It's a set of verifiable commitments. Here's what to look for: data isolation (your inputs don't train the model or get shared), SOC 2 Type II compliance, audit logging (who used what, when, on which matter), role-based access controls, contractual data processing agreements, and incident response commitments.
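
It's worth pinning down what "audit logging" means in practice. Here's a sketch of a log entry that captures who used what, when, on which matter; the schema and field names are assumptions, not any vendor's actual log format:

```python
import json
from datetime import datetime, timezone

def make_audit_entry(user: str, tool: str, matter_id: str, action: str) -> str:
    """Build one timestamped audit record (hypothetical schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,            # who: an individual, named user
        "tool": tool,            # what: which approved AI tool
        "matter_id": matter_id,  # which matter the request touched
        "action": action,        # e.g. "summarize", "draft", "review"
    }
    return json.dumps(entry, sort_keys=True)

print(make_audit_entry("j.doe@firm.com", "enterprise-llm", "2025-00142", "summarize"))
```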

Vendors like Harvey AI, CoCounsel by Thomson Reuters, and the enterprise tiers of OpenAI, Anthropic, and Google all offer some version of these controls. But the details vary significantly. Harvey provides matter-level data isolation. CoCounsel runs on Azure's government cloud for certain deployments. Anthropic's Claude for Enterprise offers zero-data-retention options for qualifying customers. Each has trade-offs in cost, capability, and lock-in.

The real question isn't "which vendor is best." It's whether the tool fits the firm's actual workflow and whether the contractual protections match the data sensitivity. A personal injury firm processing intake forms has different requirements than an AmLaw 100 firm handling M&A due diligence.

What This Means for Your Firm

Start with an audit. Find out what your attorneys are actually using. Firms that run this audit consistently find consumer AI tools running on matters they shouldn't touch. That's not a training problem. It's a governance problem.
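
A practical starting point is the firm's web proxy or DNS logs. The sketch below flags traffic to well-known consumer AI domains; the log format and the domain list are assumptions you'd adapt to your own environment:

```python
# Sketch: flag proxy-log lines that hit consumer AI domains.
# Assumes a simple "timestamp user domain" log format; adjust to yours.
CONSUMER_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_consumer_ai_use(log_lines):
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2].lower() in CONSUMER_AI_DOMAINS:
            hits.append({"timestamp": parts[0], "user": parts[1], "domain": parts[2]})
    return hits

sample = [
    "2025-03-01T12:04:11Z a.smith chatgpt.com",
    "2025-03-01T12:05:02Z b.jones westlaw.com",
]
for hit in flag_consumer_ai_use(sample):
    print(hit)  # flags a.smith's visit to chatgpt.com
```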

Then draw the line Morgan v. V2X drew. Create an approved tools list with enterprise-grade options for specific use cases: research, drafting, document review, summarization. Block or restrict consumer-grade tools on firm devices. And put the policy in writing, because the next court to address this will look at what your firm knew and what it did about it.
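
An approved-tools list works better as data than as a memo, because data can be enforced. A minimal sketch, assuming hypothetical tool names and the four use cases above:

```python
# Hypothetical approved-tools registry: use case -> approved enterprise tools.
APPROVED_TOOLS = {
    "research": {"enterprise-research-ai"},
    "drafting": {"enterprise-drafting-ai"},
    "document_review": {"enterprise-review-ai"},
    "summarization": {"enterprise-research-ai", "enterprise-drafting-ai"},
}

def is_permitted(tool: str, use_case: str) -> bool:
    """Default-deny: anything not listed for a use case is blocked."""
    return tool in APPROVED_TOOLS.get(use_case, set())

assert is_permitted("enterprise-research-ai", "research")
assert not is_permitted("free-tier-chatbot", "drafting")  # consumer tool: denied
```

Default-deny is the design choice that matters: anything not explicitly approved for a use case is blocked, which mirrors the line the court drew.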

The firms that treat this as a compliance checkbox will miss the point. The real advantage goes to firms that use the enterprise/consumer distinction to build governed workflows that actually make attorneys faster, without creating liability.

The Bottom Line: Consumer AI in a law firm isn't just a bad idea. After Morgan v. V2X, it's a documented, citable liability that opposing counsel will use against you.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.