The privilege question isn't specific to Claude. It applies to Harvey, CoCounsel, Lexis+ AI, and every other tool that processes client communications on third-party infrastructure. The question is how to configure any tool to maintain defensible confidentiality, and what the analysis actually looks like — not which vendor's logo offers the most comfort.
Attorney-client privilege protects communications made in confidence for the purpose of legal advice. The question when using any AI tool is whether transmitting client information to a third-party server breaks that confidence. Courts haven't reached consensus. The most defensible position right now: treat AI platforms the same as cloud storage — reasonable precautions, zero-training options enabled, and no input of genuinely privileged communications without client consent.
The Anthropic API offers a zero-training option: inputs aren't used to train future models. Claude.ai Pro also allows disabling conversation history. Both configurations are privilege-defensible under the current standard of reasonable precautions. They're available to any Claude subscriber.
Enterprise legal AI tools typically offer explicit data processing agreements and retention controls as core parts of their offering. That's real value — particularly for firms with compliance requirements or external audit exposure. The question is whether those controls are substantively different from what Anthropic's DPA already provides.
The Confidentiality Problem With Any AI Tool, Enterprise or Consumer
Every AI tool that processes client information operates on the same basic confidentiality question: does transmitting that information to a third-party server break the confidentiality that privilege requires? The answer under current law is: not if you take reasonable precautions.
Courts treat AI platforms analogously to cloud storage and email encryption. The reasonable precautions standard doesn't require perfect security — it requires documented, deliberate choices that show the attorney understood the risk and mitigated it. Using zero-training mode on the API, disabling conversation history on Claude.ai, and reviewing Anthropic's privacy policy and DPA are the documentation steps that satisfy this standard.
The enterprise premium on this axis is primarily about documentation convenience, not substantive difference. Harvey's SOC 2 Type II certification and explicit data processing agreements with defined retention terms are easier to show a bar disciplinary committee than a printout of Anthropic's DPA. For firms with external audit exposure — publicly traded company clients, regulatory matters, government work — that documentation convenience has real value. For a solo doing estate planning, it's a premium without a corresponding benefit.
Claude Pro is $20/month ($17/month billed annually). Claude Team is $25 per seat per month billed annually ($30 month-to-month). Those tiers give you access to the same Constitutional AI-governed model as enterprise deployments, with configuration options that satisfy the confidentiality requirement when used correctly.
Does Using Claude Waive Attorney-Client Privilege? What the Analysis Says
The short answer: no automatic waiver. The long answer: the analysis depends on how you've configured Claude and what you've transmitted.
Privilege waiver requires either an intentional disclosure or conduct so inconsistent with maintaining confidentiality that waiver is implied. Using Claude with zero-training mode enabled and conversation history disabled is inconsistent with implied waiver — you've taken affirmative steps to maintain confidentiality. That's the reasonable precautions standard.
What creates waiver risk is careless use: inputting privileged client communications into a consumer AI account with conversation history enabled and default data practices, without checking whether Anthropic's standard terms allow training on inputs. That's not a Claude-specific risk; it's a configuration risk that applies to every AI tool.
The privilege analysis also distinguishes between types of material. Work product — the attorney's mental impressions, legal theories, and litigation strategy — gets stronger protection than fact-gathering communications. Inputting an attorney's mental impressions into an AI tool with unclear data practices creates stronger waiver risk than inputting publicly available documents for analysis. Segregate the categories in your workflow.
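That segregation can be partly enforced in software rather than left to memory. A hypothetical sketch follows — the category names, the `AI_CLEARED` set, and the gate function are illustrative assumptions for a firm's internal tooling, not any vendor's API:

```python
from enum import Enum

class Material(Enum):
    PUBLIC_DOCUMENT = "public_document"   # filings, public records
    FACT_GATHERING = "fact_gathering"     # client-provided factual material
    WORK_PRODUCT = "work_product"         # mental impressions, strategy

# Categories a firm might clear for AI input; work product stays out.
AI_CLEARED = {Material.PUBLIC_DOCUMENT, Material.FACT_GATHERING}

def gate_for_ai(category: Material, text: str) -> str:
    """Refuse, before any AI submission, material tagged as work product."""
    if category not in AI_CLEARED:
        raise PermissionError(f"{category.value} is blocked from AI tools")
    return text
```

Tagging material at intake and gating it at the point of submission turns the privilege distinction into a checkpoint rather than a judgment call made under deadline pressure.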
Claude's Data Retention Options vs. What Enterprise Legal AI Provides
The configuration options for privilege-safe Claude use are more granular than most attorneys realize. Claude.ai Pro users can disable conversation history entirely, which means Claude doesn't retain prior conversations between sessions. API users can invoke the zero-training clause in Anthropic's DPA, which contractually prohibits Anthropic from using inputs to train future models. Both options are available without enterprise contracts.
Enterprise legal AI tools typically add: explicit data retention schedules (e.g., inputs deleted after 30/60/90 days), firm-level admin controls that enforce these policies across all attorney accounts, audit logs that record who used AI on what matter, and a vendor relationship that creates contractual accountability if something goes wrong.
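The audit-log and retention-schedule controls above are straightforward to sketch. A minimal illustration, assuming a 90-day deletion window and field names that are my own, not any enterprise product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # assumed 90-day deletion schedule

@dataclass
class AIUsageRecord:
    attorney: str   # who used the AI tool
    matter_id: str  # internal matter ID, never the client name
    tool: str       # which tool or endpoint was used
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def expired(record: AIUsageRecord, now: datetime) -> bool:
    """True once a record has passed the firm's retention window."""
    return now - record.timestamp > RETENTION
```

The point of the sketch is that the enterprise premium buys enforcement and reporting around records like these, not a different kind of record.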
Those additions are legitimate and useful for firms with compliance obligations. The DPA protections from Anthropic are substantively similar to what enterprise vendors offer; the difference is that enterprise tools make those protections easier to document and enforce firm-wide. For a 10-attorney firm, configuring Claude correctly and training attorneys on the protocol takes a day. For a 500-attorney firm with 50 practice groups, the managed deployment structure is worth paying for.
How to Configure Claude for Zero-Retention, Privilege-Safe Legal Work
Two configurations, depending on your access level:
Claude.ai Pro ($20/month): Go to Settings → Privacy → disable "Improve Claude for everyone." This prevents Anthropic from using your conversations to train models. Also disable conversation history for client matters. This is the consumer-tier configuration that satisfies reasonable precautions in most bar jurisdictions.
Anthropic API: Sign Anthropic's DPA, which contractually prohibits training on API inputs by default. The zero-training guarantee is contractual rather than a privacy setting. For firms that need to produce documentation of their confidentiality protocols, the API DPA is the cleaner paper trail. Note that API usage is billed per token, separately from the Claude Pro and Max subscription tiers.
Beyond configuration: never input full client names, case numbers, or identifying information that would allow someone reading the transcript to identify the client. Use placeholders: "Client A" instead of the client name, "the underlying transaction" instead of the specific deal. This is the same hygiene you'd apply to cloud storage of sensitive files — compartmentalize before you upload.
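The placeholder discipline can be partly automated with a pre-submission scrubber. A hypothetical sketch — the alias mapping and example strings are invented for illustration, and real matters still need attorney review of the scrubbed output:

```python
import re

def scrub(text: str, aliases: dict[str, str]) -> str:
    """Replace identifying strings with neutral placeholders before upload."""
    for real, placeholder in aliases.items():
        text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
    return text

aliases = {
    "Acme Holdings": "Client A",           # hypothetical client name
    "Case No. 24-cv-01234": "the matter",  # hypothetical case number
}

print(scrub("Acme Holdings filed Case No. 24-cv-01234.", aliases))
# -> Client A filed the matter.
```

A scrubber like this catches the names you remembered to list; it does nothing for identifying details you didn't anticipate, which is why it supplements rather than replaces the compartmentalization habit.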
What Mythos-Level Capability Means for Inadvertent Disclosure Risk
Anthropic's Claude Mythos Preview demonstrated that a Claude-based system can process and reason about thousands of complex technical inputs autonomously. For the privilege analysis, that capability cuts both ways.
The upside: a more capable model produces more useful work product from the same inputs. A Mythos-level architecture handling a complex corporate transaction can surface issues a less capable model would miss, reducing the number of errors attorney review has to catch.
The risk: if privileged material is inadvertently input into a retained session — because the attorney didn't disable conversation history, or because the firm didn't enforce the protocol consistently — a more capable model has processed it more completely. The information is more thoroughly embedded in the session context, potentially more accessible if there's a later disclosure dispute.
This doesn't argue for avoiding capable models. It argues for stricter input hygiene as capability goes up. The same principle applies to any powerful tool: the higher the capability, the more important the protocol. Claude Mythos Preview is restricted to Project Glasswing partners and isn't what legal practitioners access. But the principle applies to the production model as well. Configure it correctly and enforce the protocol consistently.
My take: Privilege risk with Claude is configuration risk, not product risk. The API zero-training option and Claude.ai privacy settings handle the confidentiality requirement. Enterprise legal AI tools add managed compliance workflows and audit trails that are genuinely useful for firms with external accountability requirements — but they're not the only path to privilege-defensible AI use.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes, email me directly.
