United States v. Heppner (SDNY, February 17, 2026) ruled that written exchanges between criminal defendant Bradley Heppner and consumer Claude were not protected by attorney-client privilege or work-product doctrine. Per Harvard Law Review's coverage, Judge Jed Rakoff called it a "question of first impression nationwide." The fact pattern was specific: consumer Claude, used by the defendant, generating defense strategy materials, shared with attorneys after creation. The ruling left a forward-looking question open. What does the privilege defense stack look like for firms deploying enterprise Claude — claude.ai Team or Enterprise, AWS Bedrock, Vertex AI, or Microsoft Foundry — on matters where privilege is the entire game? This is the architectural answer, not a relitigation of Heppner. Per Anthropic's pricing, enterprise tiers carry different data-handling commitments than the consumer product Heppner used. The defense stack design follows from those commitments.


What Heppner actually decided — and what it didn't

Per the Debevoise Data Blog analysis and Inside Privacy's takeaways, Heppner's holding rested on two grounds:

- No attorney-client privilege. Claude isn't an attorney. The privilege protects communications between client and attorney for the purpose of obtaining legal advice. A consumer chatbot doesn't satisfy the attorney prong.
- No work-product doctrine. Work product requires materials prepared in anticipation of litigation at the direction of, or by, an attorney. Heppner generated the materials independently of counsel direction.

What Heppner did NOT decide:

- Whether enterprise Claude deployments carry different privilege facts.
- Whether scratchpad files generated by attorneys at counsel direction would be work product.
- Whether other circuits will adopt SDNY's reasoning.
- Whether output generated through firm-controlled API access changes the analysis.

The second-order read: the ruling is narrow on its facts. Reading Heppner as "AI use defeats privilege" overstates the holding. Reading it as "consumer AI used by clients without counsel direction defeats privilege" stays inside the actual ruling.

The third-order read: per Verdict's analysis, Heppner reached the right outcome on these facts but the reasoning has analytical weaknesses other circuits may revisit. The split possibility is real but probably 12-24 months out. The Heppner explainer page covers the case-specific facts in depth.

The privilege defense stack: four architectural layers

Firms deploying enterprise Claude on privileged matters should think of the privilege defense as a four-layer stack, each layer addressing a different exposure surface:

- Layer 1: Surface control. Use enterprise tier (Claude Team, Enterprise, API, Bedrock, Vertex, or Foundry) — never consumer Pro for matter work. Per Anthropic's terms, Team/Enterprise/API inputs are not used for model training. The deployment surface decision is the foundational privilege fact: it's what distinguishes the firm's use from Heppner's consumer use.
- Layer 2: Direction-of-use control. Counsel directs the AI use. Materials generated through Claude on a matter are produced at attorney direction, for attorney work product, with attorney supervision. The work-product analysis depends on this. Materials generated by clients independently and shared with counsel afterward inherit Heppner's work-product gap.
- Layer 3: Documentation control. The matter file shows attorney direction at the time of AI use, not retroactive justification. Engagement letter clauses, matter-management entries, and file notes establish the direction-of-use record.
- Layer 4: Surface-specific data handling. Cross-border matters use deployment surfaces with appropriate data residency. Per the Microsoft Foundry deployment spoke, Foundry inherits Azure's compliance posture. Bedrock inherits AWS's. Vertex inherits Google Cloud's. The choice depends on the firm's primary cloud relationship and the matter's data residency requirements.

The second-order read: each layer addresses a different appellate hook. A privilege challenge that defeats Layer 1 (e.g., proves attorney used consumer Pro by accident) collapses the entire stack. A challenge to Layer 2 alone (work-product attack on attorney direction) leaves Layers 1, 3, 4 standing. The architectural redundancy matters because privilege challenges in 2026-2027 will get more sophisticated.
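The stack's failure logic can be sketched in code. This is a hypothetical illustration of the article's analysis — the class name, fields, and the "Layer 1 failure collapses everything" rule are modeled on the text above, not on any real firm or Anthropic API:

```python
from dataclasses import dataclass

@dataclass
class PrivilegeStack:
    surface_control: bool    # Layer 1: enterprise tier, never consumer Pro
    direction_of_use: bool   # Layer 2: counsel directed, reviewed, revised
    documentation: bool      # Layer 3: contemporaneous matter-file record
    data_handling: bool      # Layer 4: jurisdiction-appropriate deployment

    def standing_layers(self) -> list[str]:
        """Layers still defensible after a challenge. A Layer 1 failure
        (consumer-surface use) collapses the entire stack; any other
        single-layer failure leaves the remaining layers standing."""
        if not self.surface_control:
            return []
        layers = ["surface_control"]
        if self.direction_of_use:
            layers.append("direction_of_use")
        if self.documentation:
            layers.append("documentation")
        if self.data_handling:
            layers.append("data_handling")
        return layers

# A work-product attack on attorney direction (Layer 2) alone:
stack = PrivilegeStack(surface_control=True, direction_of_use=False,
                       documentation=True, data_handling=True)
print(stack.standing_layers())  # Layers 1, 3, and 4 remain
```

The asymmetry is the point: Layer 1 is a precondition, the other three are independent redundancy.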

Layer 1 deep-dive: why surface choice changes the privilege analysis

Heppner's holding rested in part on the consumer Claude product's terms. Per Anthropic's published terms at the time of the matter, consumer Claude inputs could be used for model training and review. Enterprise tiers carry different commitments.

The practical privilege difference:

- Claude Team Standard at $20-25/seat/month per Anthropic's pricing — admin controls plus explicit no-training commitment on team inputs. Sufficient surface for routine matter work.
- Claude Enterprise at $20/seat/month + usage at API rates — custom terms; advanced security and compliance; data residency negotiable. Right surface for high-confidentiality matters and large legal departments.
- Microsoft Foundry, AWS Bedrock, Vertex AI — Claude runs through the firm's existing cloud provider with that provider's data handling and audit posture. Right surface for firms with deep cloud relationships and matters requiring cloud-provider-specific compliance.
- Direct API — full control over inputs, logging, and deployment. Right surface for firms building internal tooling with explicit privilege protocols.
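A firm policy can encode the tier-to-matter mapping above as a simple lookup. A minimal sketch, assuming hypothetical matter-profile labels and selection rules — the tier names are real products, but the rules here are illustrative, not Anthropic guidance:

```python
# Hypothetical firm-policy lookup: which deployment surfaces a matter
# profile permits. Profiles and rules are assumptions for this sketch.
ALLOWED_SURFACES = {
    "routine": ["Claude Team", "Claude Enterprise", "API",
                "Bedrock", "Vertex AI", "Foundry"],
    "high_confidentiality": ["Claude Enterprise", "Bedrock",
                             "Vertex AI", "Foundry"],
    "internal_tooling": ["API"],
}

# Consumer surfaces are never permitted for matter work (the Layer 1 rule).
CONSUMER_SURFACES = {"Claude Pro", "Claude Free"}

def surface_permitted(matter_profile: str, surface: str) -> bool:
    if surface in CONSUMER_SURFACES:
        return False  # Heppner-equivalent facts, regardless of profile
    return surface in ALLOWED_SURFACES.get(matter_profile, [])

print(surface_permitted("routine", "Claude Pro"))                     # False
print(surface_permitted("high_confidentiality", "Claude Enterprise")) # True
```

The hard-coded consumer blocklist mirrors the article's rule: no matter profile, however routine, routes to consumer tier.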

The second-order read: the privilege analysis depends on facts the firm controls. Counsel choosing consumer Pro for a privileged matter creates Heppner-equivalent facts. Counsel choosing Team or above creates materially different facts that Heppner's reasoning doesn't reach.

The third-order read: malpractice carriers will start asking firms to disclose deployment surface in their applications by 2027. Firms that can show enterprise-tier-only deployment for matter work get better terms than firms whose policy permits consumer tier use.

Layer 2 deep-dive: work-product doctrine and attorney direction

Per Heppner, work-product protection failed because Heppner generated materials independently of counsel direction. The forward-looking design challenge: what does "attorney direction" look like operationally when AI is doing some of the drafting?

The operational standard that makes Layer 2 robust:

- Counsel initiates the AI use for the specific matter task. Not associates running ad-hoc queries; not clients generating materials they share later; counsel of record directing the work.
- Counsel reviews and revises the AI output before the output becomes part of the matter file. Without this step, the AI output is not yet attorney work product — it's raw model generation.
- Documentation captures the direction. Matter-management entry, file note, or scratchpad annotation showing counsel directed the specific AI use for the specific task.
- Counsel exercises judgment in the final output. Per the integrity rule on AI hallucination sanctions — citation verification, fact verification, legal-reasoning review — the lawyer's judgment is what makes the work product attorney work product.
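The operational standard above reduces to a record with a gate. This is an illustrative sketch — the field names are hypothetical, not a real matter-management schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DirectionRecord:
    """A contemporaneous direction-of-use entry for the matter file.
    Hypothetical schema illustrating the Layer 2 standard."""
    matter_id: str
    counsel_of_record: str
    task: str
    ai_surface: str
    reviewed_and_revised: bool = False  # flipped only after counsel review
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def is_work_product_ready(self) -> bool:
        # Layer 2 test: counsel directed the use (the record exists at all)
        # AND reviewed/revised the output before it entered the matter file.
        return self.reviewed_and_revised

rec = DirectionRecord("M-2026-0142", "J. Partner",
                      "draft deposition outline", "Claude Enterprise")
print(rec.is_work_product_ready())  # False: raw model output, not yet work product
rec.reviewed_and_revised = True
print(rec.is_work_product_ready())  # True: counsel review converts it
```

The timestamp defaulting to record-creation time captures the "at the time, not retroactive" documentation requirement.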

The second-order read: per the AI sanctions tracker context, 1,227 documented sanctions cases globally show what happens when associates skip Layer 2 — counsel didn't review, didn't verify, didn't direct. The same pattern that produces sanctions also defeats work-product protection.

The third-order read: counsel-direction documentation isn't just privilege defense — it's the same operational discipline that prevents the malpractice claims firms are now seeing in 2026. The two protections converge architecturally.

Layer 3 deep-dive: scratchpad files and the new evidence category

Per Anthropic's Opus 4.7 release docs, the multi-session memory feature lets Claude hold context across sessions via a scratchpad/notes file. For long-running matters (M&A diligence, multi-day depositions, white-collar matters), this saves substantial context-loss tax. The files contain matter-specific reasoning, party identities, and analysis pathways.

The privilege architecture for scratchpad files:

- Storage location. Firm document management system (NetDocuments, iManage, or equivalent), not personal devices, not personal cloud accounts. Per the firm AI policy template, this is non-negotiable for privilege defense.
- Access controls. Same as the underlying matter — partner, associates on the matter, conflict-walled per matter assignment. Cross-matter access defeats the matter-specific privilege analysis.
- Retention. Matches the firm's standard matter retention period.
- Discovery preservation. Scratchpad files fall under standard preservation hold protocol when triggered. The IT and litigation support functions need explicit scratchpad-file inclusion in their hold procedures.
- Review for privilege before discovery production. Scratchpad files contain attorney work product mixed with raw model output. Privilege review before production is operational hygiene.
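The protocol above can be expressed as a compliance check. An illustrative sketch under stated assumptions — the approved-DMS list, flags, and violation strings are made up for the example, not a NetDocuments or iManage API:

```python
# Hypothetical policy check for a scratchpad file against the protocol above.
APPROVED_DMS = {"netdocuments", "imanage"}

def scratchpad_violations(storage: str, matter_team: set[str],
                          accessors: set[str], under_hold: bool,
                          hold_includes_scratchpads: bool) -> list[str]:
    """Return protocol violations; empty list means the file is compliant."""
    violations = []
    if storage.lower() not in APPROVED_DMS:
        violations.append("stored outside firm DMS")
    if not accessors <= matter_team:
        # Cross-matter access defeats the matter-specific privilege analysis.
        violations.append("cross-matter access")
    if under_hold and not hold_includes_scratchpads:
        violations.append("hold procedure omits scratchpad files")
    return violations

print(scratchpad_violations("iManage",
                            matter_team={"partner", "assoc1"},
                            accessors={"partner", "assoc1", "assoc_other_matter"},
                            under_hold=True,
                            hold_includes_scratchpads=True))
# One violation: an associate from another matter has access
```

Checks like this belong in the DMS provisioning workflow, not in after-the-fact audits, for the same "at the time" reason as Layer 3.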

The second-order read: scratchpad files are a new evidence category that didn't exist before April 2026 (when Opus 4.7's multi-session memory shipped per the multi-session memory M&A spoke). Treating them as ordinary work product without explicit protocol invites both privilege exposure and discovery surprises.

The third-order read: opposing counsel with sophisticated AI litigation practice will request scratchpad files in discovery within 12 months. The firms with the protocol in place get to argue from privilege; the firms without get to argue from "we didn't think about it."

Layer 4 deep-dive: cross-border matters and data residency

Multinational matters add a privilege-adjacent complication: data residency rules in the matter's jurisdictions can independently undermine the privilege analysis even when the four-layer stack is intact at home.

The per-jurisdiction architecture:

- EU matters. GDPR data residency requirements affect which deployment surfaces work for European clients or European-located data. Microsoft Foundry's EU regions, Vertex AI's EU regions, or AWS Bedrock's EU regions are typical solutions. Claude Team Standard's default deployment may not satisfy strict GDPR posture.
- California, Colorado, Texas matters. State-level privacy laws are evolving fast. Firms with significant practice in these states should track state-level AI requirements quarterly per the firm AI policy template.
- Cross-border M&A. Diligence work spanning multiple jurisdictions needs deployment surface choice per matter, not firm-default. The multi-session memory M&A spoke covers diligence-specific patterns.
- Sealed proceedings and high-confidentiality matters. Even within domestic US jurisdictions, certain matter types require dedicated deployment configurations — typically Enterprise tier with custom data residency and audit logging.
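Per-matter surface choice can be expressed as a residency filter. A sketch under loud assumptions: the region labels and filtering rules below are hypothetical simplifications of the jurisdictional points above — verify each provider's actual region availability and compliance posture before relying on anything like this:

```python
# Hypothetical residency filter. Surface labels are illustrative only.
EU_CAPABLE = {"Foundry (EU region)", "Vertex AI (EU region)",
              "Bedrock (EU region)"}
ALL_SURFACES = EU_CAPABLE | {"Claude Team (default)",
                             "Claude Enterprise (custom residency)"}

def surfaces_for_matter(jurisdictions: set[str],
                        sealed: bool = False) -> set[str]:
    candidates = set(ALL_SURFACES)
    if "EU" in jurisdictions:
        # Team's default deployment may not satisfy strict GDPR posture.
        candidates -= {"Claude Team (default)"}
    if sealed:
        # Sealed matters: Enterprise with custom residency and audit logging.
        candidates &= {"Claude Enterprise (custom residency)"}
    return candidates

print(sorted(surfaces_for_matter({"EU", "US"})))
print(sorted(surfaces_for_matter({"US"}, sealed=True)))
```

The function signature makes the article's discipline concrete: jurisdictions are an input per matter, not a firm-wide constant.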

The second-order read: cross-border privilege defense is the area where firms most often default to consumer surfaces because the enterprise procurement was scoped for domestic work only. The exposure is real because cross-border matters are exactly where privilege challenges land hardest.

The third-order read: deployment surface choice per matter is a discipline that compounds over years. Firms that build the muscle now get cleaner privilege records over time. Firms that default to firm-wide single-surface deployment inherit edge-case exposure they may not realize until a challenge surfaces.

The Bottom Line: US v. Heppner (SDNY, February 17, 2026) didn't end attorney-client privilege for AI use — it ended privilege for consumer AI use without counsel direction. The forward-looking firm response is a four-layer privilege defense stack: surface control (enterprise tier, never consumer Pro), direction-of-use control (counsel directs, reviews, revises), documentation control (matter file shows direction at the time, not retroactive), and surface-specific data handling (cross-border matters get jurisdiction-appropriate deployment). Each layer addresses a different appellate hook. The architectural redundancy is the protection. Firms that operationalize all four layers can argue privilege from a different fact pattern than Heppner; firms that don't operationalize them inherit Heppner's reasoning whether they intended to or not.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.