The privacy comparison between Claude and Copilot isn't about which company you trust more -- it's about which data architecture matches your ethical obligations. Claude Enterprise retains zero data. Copilot processes data within your M365 tenant. Both claim they don't train on your data. But the mechanisms are different, the guarantees are different, and after Heppner, the legal implications of choosing wrong are real.

Every managing partner needs a privacy decision tree, not a vendor preference. What type of data are you inputting? Is it privileged? Is it PHI? Does your client have outside counsel guidelines restricting AI use? The answers to those questions determine which tool -- and which tier -- is appropriate. Brand loyalty is irrelevant. Architecture is everything.


Claude Enterprise Data Handling: Zero Retention Architecture

Claude Enterprise processes your inputs and generates outputs without retaining either after the session ends. Your data passes through Anthropic's systems for processing but is not stored, logged, or accessible after the response is delivered. Anthropic cannot retrieve your conversations, and your data cannot be subpoenaed from Anthropic because it doesn't exist on their systems after processing.

The zero-retention guarantee is contractual, not just technical. Anthropic's Enterprise terms explicitly state that customer data is not retained post-session. This contractual commitment gives firms a legal basis for asserting that disclosure to Claude Enterprise is not a "voluntary disclosure" to a third party for privilege waiver purposes.

Claude Enterprise also offers BAAs for HIPAA-covered data. If your firm handles PHI in any context -- healthcare litigation, insurance defense, employee benefits work -- the BAA is a regulatory requirement. Claude's BAA covers the processing of PHI through its systems under HIPAA's business associate rules.

The limitation: zero retention means no conversation history. You can't return to a previous Claude session and continue where you left off. Each session starts fresh. For attorneys accustomed to persistent chat histories, this requires a workflow adjustment -- save important outputs locally.

Copilot Data Handling: M365 Tenant Architecture

Microsoft 365 Copilot processes data within your firm's M365 tenant. Your inputs and outputs are subject to your existing M365 data retention and security policies. Microsoft states that Copilot data is not used to train foundation models and is protected by the same enterprise commitments that cover all M365 data.

The architecture is different from Claude's. Your data doesn't leave Microsoft's infrastructure in the traditional sense -- it's processed within the same environment that hosts your email, documents, and files. Microsoft's privacy commitment, in short, is that your data stays in your tenant, subject to your policies.

For firms that already trust Microsoft with their entire document management, email, and communication infrastructure, Copilot's data handling doesn't introduce new risk. Your contracts, emails, and privileged communications are already in M365. Copilot processes them within the same security perimeter.

Microsoft's Enterprise Data Protection covers Copilot interactions. Conversations are subject to your existing M365 retention policies, eDiscovery holds, and compliance controls. This means Copilot conversations can be searched, held, and produced in litigation -- which is both a feature (compliance) and a risk (discoverable AI interactions).

Heppner Implications for Both Platforms

The Heppner ruling established that using consumer AI tools without adequate data protections can waive attorney-client privilege. Both Claude and Copilot need to be evaluated through the Heppner framework, but the analysis differs.

For Claude: Consumer Claude ($0 or $20/month Pro) retains data and may use it for training. This potentially triggers Heppner's privilege waiver analysis. Claude Team ($25/month) doesn't train on your data but retains it for 30 days -- an improvement but not zero risk. Claude Enterprise ($30/month) with zero retention likely satisfies Heppner because there's no "disclosure" if no data is retained.

For Copilot: Copilot is only available as an enterprise product ($30/seat/month, requires M365 Enterprise). Microsoft's enterprise data protections -- no training on customer data, data processed within the customer's tenant -- likely satisfy Heppner's requirements. There's no consumer Copilot tier that would trigger the Heppner concern.

The practical takeaway: both Claude Enterprise and Copilot address the Heppner concern adequately. The risk sits in lower-tier Claude products (Consumer and Team) and in any unauthorized AI tool use by attorneys. Your firm's AI policy needs to mandate appropriate tiers, not just approved vendors.

The Privacy Decision Tree for Law Firms

Question 1: Does the data include privileged communications or client-identifying information?
- No: Claude Team, Copilot, or any enterprise AI tool is acceptable.
- Yes: Proceed to Question 2.

Question 2: Does the data include PHI or other regulated information?
- Yes: Claude Enterprise (with BAA) or Copilot GCC (with appropriate compliance certifications). Consumer and Team tiers are disqualified.
- No: Proceed to Question 3.

Question 3: Do your client's outside counsel guidelines restrict AI tool use?
- Yes: Follow the guidelines. Many corporate clients now specify approved tools or prohibit AI entirely for their matters. Violating OCGs risks the relationship and potentially the engagement.
- No: Proceed to Question 4.

Question 4: Does your firm's malpractice insurance cover AI-related claims?
- Check your policy. Some carriers have added AI exclusions or requirements. Your tool choice may be constrained by your coverage terms.

Final decision: For most firms, Claude Enterprise for substantive legal work (zero retention, BAA available) and Copilot for operational tasks (M365 integration, enterprise data protection) is the defensible approach.
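For firms that want to encode this decision tree in an intake form or internal tool, the four questions reduce to a short routing function. This is a minimal sketch only: the `MatterData` fields, tool names, and return values are illustrative placeholders, not an official policy engine or vendor-endorsed mapping.

```python
from dataclasses import dataclass

@dataclass
class MatterData:
    privileged: bool            # Q1: privileged or client-identifying info
    phi: bool                   # Q2: PHI or other regulated data
    ocg_restricts_ai: bool      # Q3: outside counsel guidelines restrict AI
    insurance_covers_ai: bool   # Q4: malpractice policy covers AI claims

def approved_tools(d: MatterData) -> list[str]:
    """Walk the four-question privacy decision tree and return
    the tools (illustrative names) that remain defensible."""
    # Q1: no privileged or client-identifying data -> any approved tier
    if not d.privileged:
        return ["Claude Team", "Claude Enterprise", "Copilot"]
    # Q2: PHI disqualifies anything without a BAA or equivalent compliance
    if d.phi:
        return ["Claude Enterprise (BAA)", "Copilot GCC"]
    # Q3: client OCGs override firm defaults entirely
    if d.ocg_restricts_ai:
        return []  # follow the client's guidelines, not firm policy
    # Q4: unresolved coverage questions halt the analysis
    if not d.insurance_covers_ai:
        return []  # resolve coverage with your carrier first
    return ["Claude Enterprise", "Copilot"]
```

Expressing the tree as code forces the ordering to be explicit: client-specific OCG restrictions are checked after data classification but before firm-level defaults, which mirrors how the questions cascade above.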

Building a Defensible AI Privacy Policy

Your firm's AI privacy policy needs to address five areas to be defensible in a malpractice or privilege dispute.

First, approved tools and tiers. List exactly which AI tools are approved and at which subscription level. "Claude Enterprise" is approved. "Claude" is not specific enough.

Second, data classification rules. Define what data categories can be input into each tool. Privileged communications: Enterprise tier only. General research: any approved tier. PHI: BAA-covered tools only.

Third, client-specific restrictions. Maintain a list of clients whose outside counsel guidelines restrict or prohibit AI use. Make this list accessible to all attorneys and update it with each new engagement.

Fourth, incident response. If an attorney inputs privileged data into an unapproved tool, what happens? Document the remediation steps: notification to the client, assessment of privilege waiver risk, potential notification to the court if litigation is pending.

Fifth, annual review. AI tools, pricing tiers, and privacy architectures change rapidly. Commit to reviewing and updating your policy annually. A 2025 policy may not address 2026 risks.

Make the policy a condition of AI access. Require attorneys to acknowledge the policy before their accounts are provisioned. Build accountability into the system.

The Bottom Line: Claude Enterprise's zero retention and Copilot's M365 tenant processing both address Heppner concerns -- the dangerous choice is using consumer-tier tools or having no AI privacy policy at all.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.