Conflict checks are the oldest privileged-information isolation problem in the legal industry. Every firm runs them. Most firms automated them around 2010. Microsoft 365 Copilot punched a hole in those automations on its launch day, and most firms haven't patched it. Per Microsoft's Copilot security and compliance documentation, Copilot grounding sees any SharePoint, Exchange, or OneDrive content the prompting user has access to. That's by design — it's how Copilot becomes useful. It's also why a 25-attorney firm with stale permissions across 1,200 closed-matter SharePoint folders has a conflict-isolation problem the moment Copilot goes live. The Microsoft 365 Copilot add-on costs $30 per user per month, billed annually, on top of E3/E5 (per Microsoft 365 enterprise pricing). The conflict-check policy work is the firm's job, not the vendor's.


Why Copilot grounding breaks traditional conflict isolation

Traditional conflict-check systems work at the matter intake layer. New client comes in, intake runs the conflict check against existing client lists, the system flags potential overlaps, the firm clears or declines. That workflow assumes information about Matter A and Matter B is isolated unless explicitly cross-referenced.

Copilot's grounding model assumes the opposite. The grounding pool is everything the user can see in M365 right now. There's no concept of "information walls" between matters at the grounding layer. The walls live one level down — in the file-system permissions Copilot inherits.

That means three things in practice:

- Stale permissions become live exposure. A SharePoint folder from a closed Matter B that an associate technically still has access to is now part of the grounding pool when that associate prompts about Matter A. Closed-matter access cleanup was a low-priority IT task before Copilot. After Copilot, it's a confidentiality task with privilege implications.
- Implicit cross-matter retrieval. An attorney prompts "summarize all the contract reviews we've done for Fortune 500 clients last year." Copilot grounds in every accessible matter. If two of those matters now have current adversity, the summary itself is a confidentiality breach — without anyone explicitly asking for cross-matter information.
- Audit trail asymmetry. The firm's conflict-check system logs the explicit conflict checks. Copilot logs the prompts and grounding sources. These two logs don't reconcile automatically. A discoverable trail of cross-matter Copilot prompts that the conflict-check system never saw is a privilege-defense problem in litigation.

The three layers of conflict isolation Copilot needs

A defensible conflict-isolation framework for Copilot operates at three layers:

Layer 1: Permission hygiene. SharePoint, Exchange, and OneDrive permissions need to match current matter assignments. When a matter closes, the access list shrinks to a defined retention crew. When an attorney rotates off a matter, their access drops. When a matter has explicit ethical walls (mergers between competing clients, bankruptcy work involving cross-creditor claims), the walls are enforced at the file-system layer, not the human-honor layer.
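The Layer 1 check is mechanical enough to script. A minimal sketch, assuming hypothetical data structures: current access lists (exported from the M365 admin tooling) and matter assignments plus retention lists (from the firm's matter management system). The function and field names are illustrative, not a real API.

```python
# Hypothetical sketch: flag users whose M365 access outlives their matter
# assignment. All data structures and names are illustrative assumptions.

def stale_access(current_access: dict[str, set[str]],
                 assignments: dict[str, set[str]],
                 retention: dict[str, set[str]]) -> dict[str, set[str]]:
    """For each matter, return users who have access but are neither
    currently assigned nor on the post-closure retention list."""
    flagged = {}
    for matter, users in current_access.items():
        allowed = assignments.get(matter, set()) | retention.get(matter, set())
        extra = users - allowed
        if extra:
            flagged[matter] = extra
    return flagged

access = {"MATTER-A": {"jdoe", "asmith", "rlee"}}
assigned = {"MATTER-A": {"jdoe"}}          # currently staffed
retained = {"MATTER-A": {"asmith"}}        # engagement partner
print(stale_access(access, assigned, retained))  # {'MATTER-A': {'rlee'}}
```

The point of the sketch is the shape of the comparison: the allowed set is the union of current assignment and retention, and everything outside it is a remediation candidate.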

Layer 2: Sensitivity labeling. Microsoft Information Protection sensitivity labels can mark content as ineligible for Copilot grounding. Apply this to high-isolation matter folders. The label travels with the file, so even if a permission misconfiguration slips through, the content stays out of the grounding pool.

Layer 3: Prompt-time conflict awareness. Train attorneys to recognize prompts that cross matters. "Summarize what we know about [topic] across all our recent work" is a cross-matter prompt by construction. Even if all the underlying content is technically accessible to the user, the cross-matter aggregation creates conflict and confidentiality risk that didn't exist before the prompt fired. Some firms are starting to require named-matter framing for Copilot prompts — "On Matter X, summarize..." — to maintain the matter boundary the conflict-check system depends on.
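The named-matter framing convention can be checked mechanically before a prompt is sent, or sampled from prompt logs afterward. A minimal sketch, assuming a hypothetical "On Matter X," prefix convention and a small set of cross-matter cue phrases; neither is a Copilot feature, and a firm would tune both to its own matter-ID format.

```python
import re

# Hypothetical sketch: flag prompts that aggregate across matters without
# naming one. The prefix convention and cue list are assumptions.

MATTER_PREFIX = re.compile(r"^\s*On Matter\s+\S+", re.IGNORECASE)
CROSS_MATTER_CUES = ("across all", "all our", "every matter", "firm-wide")

def needs_review(prompt: str) -> bool:
    """True if the prompt looks cross-matter and lacks named-matter framing."""
    if MATTER_PREFIX.match(prompt):
        return False  # matter-scoped by construction
    return any(cue in prompt.lower() for cue in CROSS_MATTER_CUES)

print(needs_review("Summarize what we know across all our recent work"))   # True
print(needs_review("On Matter 2024-117, summarize the contract reviews"))  # False
```

A string check like this will miss paraphrases, so it belongs in training and log review, not as a hard gate.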

The privilege-aware policy framework covers the broader privilege scope. This conflict-isolation framework is the matter-to-matter component.

The closed-matter access audit — operational steps

Most firms haven't audited M365 permissions against closed-matter status in years. Some never have. The audit isn't a one-time project; it's a recurring operation that ties to matter intake and matter closure events.

Necessary steps for a closed-matter access audit:

- Pull a list of matters closed in the last 24-36 months from the firm's matter management system
- For each closed matter, identify the SharePoint sites, Teams channels, and shared mailboxes associated with it
- Pull the current access list for each location from the M365 Admin Center
- Compare against the original assigned attorneys and any post-closure retention list (typically the engagement partner plus records management)
- Identify users who have access but are not on the retention list
- Apply a remediation plan: revoke direct access, enforce a sensitivity label that blocks Copilot grounding, or relocate the content to an archived store with controlled access only
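The steps above compose into a single audit loop. A minimal sketch under stated assumptions: closed-matter dates come from the matter management system, access and retention lists from M365; the 36-month look-back and the "revoke-or-label" action tag are illustrative.

```python
from datetime import date, timedelta

# Hypothetical sketch of the closed-matter audit loop. Field names, the
# look-back window, and the action tag are assumptions, not a real system.

LOOK_BACK = timedelta(days=36 * 30)  # roughly the 36-month audit window

def audit(closed_matters: dict[str, date],
          access_lists: dict[str, set[str]],
          retention_lists: dict[str, set[str]],
          today: date):
    """Yield (matter, user, action) rows for off-retention access."""
    for matter, closed_on in closed_matters.items():
        if today - closed_on > LOOK_BACK:
            continue  # outside the look-back window
        allowed = retention_lists.get(matter, set())
        for user in access_lists.get(matter, set()) - allowed:
            yield matter, user, "revoke-or-label"

rows = list(audit(
    {"M-0042": date(2024, 6, 1)},
    {"M-0042": {"partner1", "paralegal9"}},
    {"M-0042": {"partner1"}},
    today=date(2025, 6, 1),
))
print(rows)  # [('M-0042', 'paralegal9', 'revoke-or-label')]
```

Emitting rows rather than revoking in place matters: the remediation decision (revoke, label, or archive) is a per-matter judgment call, so the script's job is to produce the review queue.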

For a 50-attorney firm with 5-year retention, this is roughly 800-1,500 matter access lists to review. The first pass is a heavy lift — 80-160 hours of operations time. Subsequent passes (quarterly or per matter closure) are lighter. The first-pass cost is a one-time investment that lowers the privilege-defense exposure permanently.
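The hours estimate is back-of-envelope arithmetic. A sketch with an assumed per-list review time of six minutes, which lands inside the cited 80-160 hour range:

```python
# Back-of-envelope sketch of the first-pass effort estimate. The six-minute
# per-list review time is an assumption; adjust to your firm's reality.

def first_pass_hours(matter_lists: int, minutes_per_list: float) -> float:
    return matter_lists * minutes_per_list / 60

print(first_pass_hours(800, 6))   # 80.0 hours at the low end
print(first_pass_hours(1500, 6))  # 150.0 hours at the high end
```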

Sensitivity labeling for matter walls

Microsoft Information Protection sensitivity labels are the technical control that enforces conflict walls at the platform layer. Applied correctly, a label can make a matter folder invisible to Copilot grounding regardless of who has SharePoint permissions.

The operational pattern most firms use:

- Public label: marketing materials, public-facing client work product. Eligible for Copilot grounding without restriction.
- Internal label: firm operations, internal training materials, non-client work. Eligible for Copilot grounding within the firm.
- Client-Confidential label: standard matter work product. Eligible for grounding but tracked.
- Privileged label: opinion-of-counsel work, criminal-defense strategy, settlement positions, regulator submissions. Excluded from Copilot grounding by default.
- Ethical-Wall label: matters with explicit walls. Excluded from grounding for any user not on the matter team. The label, not the permissions, enforces the wall.
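The five-label taxonomy reduces to a small eligibility table. A minimal sketch of that policy model, with illustrative names; this is a way to reason about and test the policy, not an actual MIP or Copilot API.

```python
from enum import Enum

# Hypothetical sketch: the five-label taxonomy as grounding-eligibility
# logic. Label names mirror the list above; nothing here calls a real API.

class Label(Enum):
    PUBLIC = "Public"
    INTERNAL = "Internal"
    CLIENT_CONFIDENTIAL = "Client-Confidential"
    PRIVILEGED = "Privileged"
    ETHICAL_WALL = "Ethical-Wall"

def groundable(label: Label, user: str, matter_team: set[str]) -> bool:
    """Is content with this label eligible for Copilot grounding for user?"""
    if label is Label.PRIVILEGED:
        return False                # excluded by default
    if label is Label.ETHICAL_WALL:
        return user in matter_team  # matter-team members only
    return True                     # Public / Internal / Client-Confidential

print(groundable(Label.ETHICAL_WALL, "jdoe", {"asmith"}))      # False
print(groundable(Label.CLIENT_CONFIDENTIAL, "jdoe", set()))    # True
```

Writing the policy down this way makes the key design choice visible: Ethical-Wall is the only label whose answer depends on who is asking, which is exactly why the label, not the permissions, enforces the wall.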

Applying labels at scale across an existing M365 tenant is a policy decision and a technical project. The technical project — auto-labeling rules, retroactive labeling of existing content — is well-documented by Microsoft. The policy decision — what gets which label — is the firm's call. The firm's risk-and-ethics committee owns this conversation, not IT.

First-party data: how cross-matter visibility actually shows up in Copilot

aivortex.io's Bing AI Performance data over the last 30 days shows 2,100+ Copilot citations against the domain. The grounding queries that drive those citations include vendor research questions ("Harvey AI legal," "Spellbook contract review"), policy research questions, and conflict-related research questions.

The second-order pattern: when an attorney prompts Copilot about a vendor or a policy question, Copilot draws from web grounding (where aivortex.io appears) and from tenant grounding (where firm content appears) simultaneously. If the attorney's tenant access spans multiple matters, the response can blend information from web sources, the active matter, and other accessible matters in a single answer. The user sees one paragraph; the underlying grounding is a multi-source aggregation.

The third-order read: most attorneys don't see the grounding sources unless they explicitly inspect them. The conflict-isolation risk lives in this invisibility. Training programs need to address how to inspect Copilot's grounding citations, and policy needs to require it for any prompt that touches matter-sensitive topics. The Bing AI Performance dashboard guide covers the visibility audit at the firm level; this guide covers the per-prompt audit at the attorney level.

Recommendations by firm size and matter mix

Solo and small firms (2-10 attorneys), single-client-type practice. Conflict-isolation risk inside Copilot is low because the underlying matter set is small. Permission hygiene is still worth doing — closed clients shouldn't appear in grounding for new prospects. Annual access review at policy refresh is sufficient.

Mid-size firms (10-50 attorneys), mixed practice areas. Conflict-isolation risk is medium-high. Different practice groups serving different client types create exactly the cross-matter aggregation Copilot is best at. Run the closed-matter access audit before production deployment. Apply sensitivity labels to the three most sensitive matter categories (typically litigation strategy, M&A side letters, regulator submissions). Review labels quarterly.

Multi-jurisdiction firms (50+ attorneys, multiple practice groups). High risk. The closed-matter access audit is a 6-month project before Copilot rolls out firm-wide. Consider a 90-day Copilot pilot in one practice group while the audit completes for the rest of the firm. Designate a conflict-isolation owner — typically the GC or a senior conflicts attorney — who reviews labeling decisions for matters above a defined sensitivity threshold.

BigLaw and AmLaw 100. All of the above plus integration with the existing conflict-check system. The conflict-check log and the Copilot prompt log need to reconcile periodically. A matter that gets a wall in the conflict-check system needs a matching label in M365 within 24 hours of intake. Compare against Claude Cowork's conflict story and Harvey AI's deployment posture before adding tools to the stack.
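The 24-hour wall-to-label SLA is checkable from the two logs the paragraph describes. A minimal sketch, assuming hypothetical exports: wall timestamps from the conflict-check system and the set of matters carrying an Ethical-Wall label in M365.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: reconcile conflict-check walls against M365 labels,
# flagging matters walled at intake but unlabeled past the 24-hour SLA.
# Data shapes and names are assumptions, not real system exports.

SLA = timedelta(hours=24)

def wall_label_gaps(walls: dict[str, datetime],
                    labeled: set[str],
                    now: datetime) -> list[str]:
    """Matters walled in the conflict system but missing an
    Ethical-Wall label in M365 past the SLA."""
    return sorted(m for m, walled_at in walls.items()
                  if m not in labeled and now - walled_at > SLA)

now = datetime(2025, 1, 2, 12, 0)
walls = {"M-100": datetime(2025, 1, 1, 9, 0),    # 27h ago -> SLA breach
         "M-101": datetime(2025, 1, 2, 10, 0)}   # 2h ago  -> within SLA
print(wall_label_gaps(walls, labeled={"M-099"}, now=now))  # ['M-100']
```

Run on a schedule, this turns "reconcile periodically" into a concrete breach list the conflict-isolation owner can act on.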

The Bottom Line: Copilot's grounding model assumes the user's M365 permissions are the right boundary for cross-matter information access. They usually aren't. Run the closed-matter access audit before production deployment, apply sensitivity labels to the high-isolation matters, and train attorneys to frame prompts around named matters. The conflict-check system you already pay for becomes incomplete the day Copilot goes live.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.