Microsoft 365 Copilot now sits inside Word, Outlook, and Teams at 90%+ of US law firms, and most of those firms have not written a privilege-aware policy for it. *United States v. Heppner* (SDNY, February 17, 2026) made one thing concrete: privilege does not attach automatically when an attorney pastes facts into a generative AI surface. The Heppner facts involved consumer Claude, but the privilege analysis that mattered — work-product doctrine, agency, third-party disclosure — applies to any model the firm hasn't formally locked down. Copilot's $30/user/month enterprise add-on (per Microsoft 365 enterprise pricing) ships with strong default protections, but the privilege story lives in how the firm configures, deploys, and audits it.


What the Heppner ruling means for any in-firm AI surface

Judge Jed Rakoff's February 17, 2026 ruling in *United States v. Heppner* (SDNY) held that written exchanges between criminal defendant Bradley Heppner and consumer Claude were protected by neither attorney-client privilege nor work-product doctrine (read the Heppner explainer for the full background). The court's reasoning: Claude isn't an attorney, so the privilege test fails at step one. The materials weren't generated at counsel's direction, so work product fails too.

Heppner involved a consumer chatbot. The privilege analysis still translates to enterprise Copilot in three steps:

1. The model is not the lawyer. Anything sent to a generative AI is not protected by attorney-client privilege at the moment of transmission. Privilege attaches to attorney-client communications, not to attorney-tool communications.

2. Work product needs counsel direction. Materials Copilot produces qualify as work product only if they're generated at counsel's direction in anticipation of litigation. An associate using Copilot to summarize a deposition transcript on their own initiative may not clear the bar.

3. Disclosure to third parties waives privilege. If Copilot's data path includes any third party — a contractor, an audit log reviewer, a support engineer — that creates waiver risk unless the engagement is structured to preserve confidentiality.

Microsoft's enterprise data protection commitments (covered below) address point 3 for most deployments. Points 1 and 2 are firm-policy work, not vendor work.

Microsoft's enterprise data protections — what Copilot actually does with firm content

Per Microsoft's Copilot privacy and data protection documentation, Copilot for Microsoft 365 inherits the same data-handling commitments as the underlying M365 tenant:

- Prompts, responses, and grounding data stay inside the tenant boundary. Microsoft does not use enterprise tenant data to train its foundation models. This is a contractual commitment, not a best effort.
- Data is processed under the existing Microsoft 365 Data Processing Addendum (DPA), which most firms already have in place via their E3 or E5 license. No new DPA is required for Copilot specifically.
- Encryption at rest and in transit uses the same FIPS 140-2 compliant infrastructure as Exchange, SharePoint, and OneDrive.
- Audit logging flows to the M365 unified audit log, which means any prompt issued by an attorney is theoretically discoverable in the firm's own internal compliance review.
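The audit-log point translates directly into tooling. A minimal sketch, assuming the firm has exported unified-audit-log records to JSON and already parsed each record's `AuditData` field into a dict (in real Purview exports it arrives as a JSON string), and assuming the `CopilotInteraction` record type Microsoft documents for Copilot events:

```python
from datetime import datetime, timezone

def copilot_events(audit_records, since):
    """Filter exported M365 unified-audit-log records down to Copilot
    interactions issued on or after `since` (a tz-aware datetime)."""
    events = []
    for rec in audit_records:
        # 'CopilotInteraction' is the record type used for Copilot prompts;
        # everything else in the log is out of scope here.
        if rec.get("RecordType") != "CopilotInteraction":
            continue
        ts = datetime.fromisoformat(rec["CreationTime"]).replace(tzinfo=timezone.utc)
        if ts < since:
            continue
        events.append({
            "user": rec.get("UserId"),
            "time": rec["CreationTime"],
            # Whether the prompt text itself is captured depends on tenant
            # configuration -- this flag is what a privilege-defense review
            # would check first.
            "has_prompt_text": bool(rec.get("AuditData", {}).get("PromptText")),
        })
    return events
```

The same filter doubles as a quarterly audit pass: any event where `has_prompt_text` is false is a prompt the firm cannot reconstruct later.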

The operational caveat: these defaults apply when Copilot is grounded in tenant data. The moment an attorney pastes outside content into the prompt — text from a personal email, a client-supplied document not yet uploaded to SharePoint, a screenshot from a witness — the data path is the user's input, not the tenant's storage. The privilege boundary is the prompt itself, not just the source files.

Five privilege risks every firm should map before rolling out

Every firm rolling out Copilot needs an enterprise data residency and governance framework covering five concrete risks:

1. The cross-matter prompt risk. An associate working on Matter A asks Copilot "summarize the contract review work we did last year for our largest tech client." If Copilot's grounding pulls SharePoint content from Matter B (different client), that's potential conflict and confidentiality exposure. Conflict-check isolation is policy work the vendor doesn't do for you.
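The cross-matter check is mechanical once matter metadata exists. A minimal sketch, assuming a firm-maintained mapping from SharePoint paths to matter IDs (hypothetical firm-side metadata; Copilot exposes no such API itself):

```python
def flag_cross_matter(prompt_matter, grounding_sources, matter_of):
    """Return grounding sources that belong to a different matter than the
    one the prompt was issued under. Conservative by design: a source with
    no matter tag at all is also flagged, since it can't be cleared."""
    return [src for src in grounding_sources
            if matter_of.get(src) != prompt_matter]
```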

2. The retention-overlap risk. Copilot grounding sees content based on the user's existing Microsoft 365 permissions. If a paralegal has access to a SharePoint folder they technically shouldn't (a stale permission grant from a closed matter), Copilot will surface it. The audit lift is finding stale permissions before Copilot does.
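The stale-permission sweep has the same shape. A sketch over a hypothetical permission-report export of (user, folder, matter_id) rows, grouped per user so revocation can be batched:

```python
def stale_grants(grants, closed_matters):
    """grants: iterable of (user, folder, matter_id) rows from a SharePoint
    permission report (hypothetical export shape). Returns the rows that
    still point at closed matters -- the grants Copilot grounding would
    honor until someone revokes them."""
    by_user = {}
    for user, folder, matter in grants:
        if matter in closed_matters:
            by_user.setdefault(user, []).append((folder, matter))
    return by_user
```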

3. The third-party content risk. Discovery productions, client-supplied PDFs, witness interviews recorded in Teams — none of this content was authored by the firm, but all of it lives inside the tenant once uploaded. Copilot will index it. Whether it should be in the grounding pool is a per-matter call.

4. The export risk. Copilot output pasted into an external email, an unencrypted PDF, or a Slack message creates a privilege boundary the AI surface didn't impose. Train associates on what leaves the tenant.

5. The audit-log readability risk. M365 audit logs capture that a prompt happened, not always the prompt content itself. For privilege defense, firms need to confirm what their tenant configuration actually logs — and whether that's enough for an *in camera* review three years later.

The privilege-aware policy framework — what to write before deployment

A defensible Copilot policy covers five clauses. None of these are vendor-supplied; they're firm-authored, reviewed by the GC and risk-management committee, and embedded in attorney onboarding:

- Authorized use clause. Name the practice areas, matter types, and content categories where Copilot use is preauthorized. Name what's excluded — typically: privileged communications with criminal defendants, opinion-of-counsel work product, regulator-facing submissions, settlement strategy documents.
- Data classification clause. Tier firm content into categories (public, internal, client-confidential, privileged, restricted). Define which categories are acceptable as Copilot grounding sources. Restrict the rest at the SharePoint label level so Copilot can't index them.
- Disclosure clause. Define when Copilot use must be disclosed — to clients in engagement letters, to opposing counsel in discovery responses where applicable, to courts under any standing orders that name AI tools (see the federal court AI disclosure rules guide for jurisdiction-specific requirements).
- Review clause. No Copilot output goes into a client deliverable without attorney review. This isn't optional. It's the malpractice firewall.
- Audit clause. Specify how the firm reviews Copilot logs, on what cadence, and who has access. This is the discoverable trail that protects the firm in a privilege-defense scenario.
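The data-classification clause reduces to a small decision table. A sketch using the five illustrative tiers named above; in production the enforcement happens at the SharePoint sensitivity-label level, not in application code, so this only captures the policy logic:

```python
# Firm policy decision, illustrative values only.
ALLOWED_GROUNDING = {"public", "internal"}

def grounding_permitted(doc_label):
    """Map a sensitivity label to a go/no-go grounding decision.
    True  -> acceptable as a Copilot grounding source
    False -> excluded outright (privileged / restricted tiers)
    None  -> per-matter call, escalate to the matter lead"""
    if doc_label in {"privileged", "restricted"}:
        return False
    if doc_label == "client-confidential":
        return None
    return doc_label in ALLOWED_GROUNDING
```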

The data residency policy template covers the structural clauses. This privilege framework is the doctrinal layer that sits on top.

First-party data: what aivortex.io's Bing AI Performance shows about firm visibility inside Copilot

Most firms haven't enabled the Bing AI Performance dashboard, so they can't see what Copilot is recommending about them — or about competitors. Bing AI Performance has been free to verified domain owners since 2025.

aivortex.io's first-party data over the last 30 days: 2,100+ Copilot citations, top grounding query "Harvey AI legal," with Spellbook and Everlaw following. That visibility comes from publishing FAQ-first vertical content with clean schema. The relevance to privilege policy is direct: Copilot will surface external content about your firm, your competitors, and your practice areas to your own attorneys when they prompt inside Word or Outlook. That's a vendor due-diligence channel that didn't exist 24 months ago.

The second-order read: associate research about a vendor ("how does Harvey AI handle privilege?") increasingly happens inside Copilot, not Google. The third-order read: a firm whose privilege-policy doctrine is poorly indexed in Bing AI is invisible at the moment its own attorneys are evaluating tools. Read the Bing AI Performance dashboard guide for the visibility audit framework.

Recommendations by firm size and practice area

Solo and small firms (2-10 attorneys). Default to Microsoft 365 Business Premium + Copilot at $32/user/month annual (per Microsoft pricing). Treat the privilege policy as a 2-page document, not a 20-page document. Authorized-use clause, review clause, no-export-to-personal-email clause. Train once, audit quarterly.

Mid-size firms (10-50 attorneys). The privilege policy carries more weight because conflict checks across matters get harder. Pair the Copilot rollout with a SharePoint sensitivity-label cleanup — every matter folder gets tagged before Copilot is enabled in production. Run a 90-day pilot in one practice group before firm-wide deployment. Budget legal-ops time for monthly audit-log review.

BigLaw and AmLaw 100. The procurement question is which Copilot deployment surface fits your existing IT posture. M365 E5 + Copilot at $30/user/month enterprise add-on is the most common path. The risk-and-ethics committee should sign off on the privilege framework before deployment, not after. Consider a Microsoft Premier engagement to scope custom data-loss-prevention rules. Compare against Claude Cowork's privilege posture and ChatGPT Enterprise before locking in.

By practice area. Litigation and white-collar practices carry the heaviest privilege load — restrict Copilot to research and drafting tasks, exclude it from strategy and witness-prep workflows. Transactional practices with M&A diligence flows benefit most from Copilot's document comparison capability — but watch the cross-matter risk if multiple deals run concurrently. In-house counsel embedded in client organizations should treat host-tenant Copilot the same as any third-party tool: confirm the data-handling chain before use.

The Bottom Line: Copilot's enterprise data protections are strong on the technical layer. The privilege risk that actually matters is firm-side policy work — authorized-use scope, data classification, conflict isolation, audit logging. Heppner didn't change the privilege analysis; it surfaced what was always true. Write the policy before the deployment, not after.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.