Thomson Reuters rebuilt CoCounsel on Anthropic Claude in late 2025-early 2026. Freshfields is one of the first publicly named adopters of the rebuilt platform, per Law.com's coverage of the April 23, 2026 Anthropic deal. The rebuild is the structural reason a single Magic Circle firm can simultaneously run direct Claude API access, Cowork agentic workflows, and Westlaw-grounded CoCounsel research, all on the same underlying model layer. For TR's existing customers and competitors alike, the rebuild reshapes the legal AI vendor stack in three ways most current CoCounsel users haven't fully internalized. Here's what changed underneath the product, what it means for current customers, and what it signals for the rest of the legal AI vendor ecosystem.
What Thomson Reuters actually rebuilt
CoCounsel originally launched in early 2023 as a Casetext product built on OpenAI GPT-4; Thomson Reuters acquired Casetext in August 2023. The rebuild moves CoCounsel's underlying model layer to Anthropic Claude while preserving the Westlaw + Practical Law content integration and the workflow surface lawyers already use.
The practical changes:
- Underlying model: Anthropic Claude (likely Sonnet 4.6 and Opus 4.7 depending on task complexity) instead of OpenAI GPT-4-class models
- Calibration profile: Claude's calibration improvements (less likely to proceed confidently with a bad plan, per Anthropic's release notes) flow through to CoCounsel responses
- Long-document handling: improved long-context retention and multi-session memory features in Opus 4.7 reach CoCounsel users
- Content layer unchanged: Westlaw caselaw, KeyCite citator, Practical Law guidance, and headnote indexing all preserved at parity
What didn't change:
- The CoCounsel UX and workflow scaffolding
- Pricing structure (industry observers report tier prices of $75 (On Demand) / $220 (Basic Research) / $225 (Core) / $428 (Westlaw Precision + CoCounsel) / $500 (All Access) per Costbench March 2026 and Above the Law August 2025; not vendor-confirmed)
- Integration with the Westlaw research workflow and Practical Law guidance
- Per-seat licensing structure and contract patterns
The second-order point: from a CoCounsel user's perspective, the rebuild is largely transparent: same product, better calibration, new memory capabilities. From a vendor-strategy perspective, it's a major architectural shift that aligns TR with Anthropic for the foreseeable future.
What changes for current CoCounsel users
Three concrete operational changes current customers will notice:
- Fewer hallucinated citations. Claude's calibration improvements reduce the rate at which the model generates confident-but-wrong outputs. For research workflows that already had Westlaw citator validation as a guardrail, the improvement is incremental. For drafting workflows where hallucinated authorities slip through to first drafts, the improvement is structural.
- Long-document analysis improvements. Multi-document research, M&A diligence document review, and complex regulatory filing analysis benefit from Opus 4.7's long-context retention. Workflows that previously failed on documents over a certain length now complete successfully.
- Better calibration on edge cases. On niche legal questions where the prior model would generate plausible-sounding but unsupported analysis, the model now more often declines or hedges appropriately. That's a malpractice-relevant improvement, not just a quality improvement.
What doesn't change for users:
- Pricing structure (per the secondary sources noted above; verify directly with TR sales)
- Westlaw and Practical Law content moat
- Workflow scaffolding and UX
- Integration with KeyCite citator validation
The second-order operational note. Firms running CoCounsel through prior workflows shouldn't need to retrain associates on basic usage. The model behavior shifts are subtle for most tasks; the calibration improvements show up most in edge cases. Bills don't change at the seat level; usage-based components within the contract may see modest shifts depending on how TR meters them.
Cross-link: how this connects to Microsoft Copilot ROI math
Firms running both CoCounsel and Microsoft Copilot for Microsoft 365 face a procurement question the rebuild reframes. Copilot's $30/user/month enterprise add-on (per Microsoft's pricing) gives lawyers AI inside Word, Outlook, Teams, and Excel: workflows that overlap with some CoCounsel use cases (document analysis, drafting assistance, summary generation).
The overlap was tolerable because the two products differentiated on content grounding and workflow surface rather than on model: Copilot exposes OpenAI GPT models, and pre-rebuild CoCounsel ran on OpenAI as well, distinguished by its Westlaw + Practical Law grounding. After the rebuild, CoCounsel runs on Anthropic Claude, a different model family than what Copilot exposes. The vendor stacks no longer share an underlying model.
For procurement teams modeling total AI spend across Copilot + CoCounsel + direct foundation model access, the rebuild matters because it makes vendor selection a model-family choice as well as a content-grounding choice. Read the Copilot vs CoCounsel vs Claude Cowork ROI breakdown for the per-seat economics across vendor stacks (Cluster 4, spoke 12).
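The per-seat arithmetic behind that modeling exercise can be sketched in a few lines. The CoCounsel tier figures below are the secondary-source prices cited above (not vendor-confirmed), Copilot is Microsoft's published $30/user/month add-on, and the firm size and seat mix are hypothetical:

```python
# Sketch of a combined Copilot + CoCounsel spend model. Tier prices are
# the secondary-source figures cited in this piece (not vendor-confirmed);
# headcounts below are hypothetical.
COCOUNSEL_TIERS = {            # $/seat/month, per Costbench / Above the Law
    "On Demand": 75,
    "Basic Research": 220,
    "Core": 225,
    "Westlaw Precision + CoCounsel": 428,
    "All Access": 500,
}
COPILOT_M365 = 30              # $/user/month, Microsoft published add-on price

def annual_stack_cost(seats_by_tier: dict[str, int], copilot_seats: int) -> int:
    """Annual AI spend across the Copilot + CoCounsel stack."""
    cocounsel = sum(COCOUNSEL_TIERS[t] * n for t, n in seats_by_tier.items())
    return (cocounsel + COPILOT_M365 * copilot_seats) * 12

# Hypothetical 120-lawyer firm: 100 Core seats, 20 premium bundle seats,
# Copilot for all 120 lawyers.
total = annual_stack_cost({"Core": 100, "Westlaw Precision + CoCounsel": 20}, 120)
print(f"${total:,}/year")   # → $415,920/year
```

The point of the sketch is the structure, not the figures: once the two stacks run on different model families, the seat mix becomes a model-family allocation decision, not just a budget line.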
The second-order pattern. As foundation models continue to differentiate (Anthropic on calibration, OpenAI on speed and context window, Google on multimodal), vendor stacks built on top of different model families will diverge in capability. Firms running multiple vendor stacks inherit multiple model families' strengths, but also pay the procurement complexity cost. Cluster 10's Anthropic Legal Ecosystem 90-day map covers the full pattern of which vendors are converging on Anthropic vs which are differentiating against it.
What it signals for the rest of the legal AI vendor stack
The CoCounsel rebuild is part of a broader pattern: vertical legal AI vendors that rebuild on top of foundation models retain their distribution and content moats; vendors that don't end up competing against the underlying model's enterprise offer.
Three categories of legal AI vendor in this pattern:
- Vendors converging on Anthropic. Thomson Reuters CoCounsel rebuilt on Claude. Spellbook publicly uses Anthropic models for many tasks (per their public technical disclosures). LexisNexis Protege's architecture appears to use a multi-provider stack.
- Vendors converging on OpenAI. Some legacy legal AI tools and earlier-generation contract review systems remain primarily OpenAI-grounded.
- Vendors maintaining multi-provider stacks. Harvey AI publicly uses both Anthropic and OpenAI models, selecting per task. This preserves flexibility but makes per-vendor model improvements less directly attributable.
The second-order pattern. Foundation model providers are competing for vertical vendor allegiance because vertical vendors aggregate user feedback at scale and carry distribution moats foundation model providers can't easily replicate. Anthropic's expanding legal vendor footprint (Spellbook, CoCounsel rebuild, direct enterprise deals like Freshfields) gives Anthropic feedback loops on legal work product that compound model quality on legal tasks.
The third-order pattern. Vendors that don't rebuild on a leading foundation model face structural capability lag. Their products improve only as their internal engineering teams integrate new techniques; competitors with foundation model relationships inherit improvements at the model layer. Over 3-5 years, this gap widens. The legal AI vendors that don't have foundation model partnerships now will face procurement-level capability shortfalls in 2027-2028.
Procurement implications for current Westlaw + CoCounsel customers
Most BigLaw and mid-market firms with Westlaw spend already evaluate CoCounsel as the AI add-on to that relationship. The rebuild changes the evaluation calculus in three ways:
- Calibration improvement justifies seat-level upgrade. Firms on CoCounsel On Demand or Basic Research tiers may see enough quality improvement to justify upgrading to Core or Westlaw Precision + CoCounsel bundles. The marginal cost difference vs the marginal calibration benefit shifts the math for disputes-heavy practices specifically.
- Multi-session memory unlocks new use cases. Long M&A diligence, multi-day deposition prep, and matter-spanning regulatory work benefit from the long-context retention that pre-rebuild CoCounsel handled poorly. Firms whose CoCounsel usage stayed narrow because of context limits should re-evaluate use case scope.
- Procurement renewal terms warrant fresh review. TR's public pricing structure didn't change with the rebuild, but enterprise contracts negotiated pre-rebuild may have terms that no longer match the underlying capability. Firms with multi-year CoCounsel contracts up for renewal in 2026-2027 should renegotiate against current capability, not pre-rebuild capability.
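The upgrade math in the first point above is simple enough to sketch. Tier prices are the secondary-source figures cited earlier (not vendor-confirmed); the 40-seat disputes group is illustrative:

```python
# Marginal annual cost of moving seats between CoCounsel tiers.
# Tier prices are secondary-source figures (not vendor-confirmed);
# the seat count is a hypothetical disputes group.
TIERS = {                      # $/seat/month
    "On Demand": 75,
    "Basic Research": 220,
    "Core": 225,
    "Westlaw Precision + CoCounsel": 428,
}

def marginal_cost(current: str, target: str, seats: int) -> int:
    """Extra annual spend to move `seats` from `current` to `target` tier."""
    return (TIERS[target] - TIERS[current]) * seats * 12

# Hypothetical 40-seat disputes group:
print(marginal_cost("Basic Research", "Core", 40))                 # 2400
print(marginal_cost("Core", "Westlaw Precision + CoCounsel", 40))  # 97440
```

Note the asymmetry: at these secondary-source prices, the Basic Research to Core step is nearly free per seat, while the jump to the Westlaw Precision bundle is where the marginal-benefit question actually bites.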
For firms not currently on CoCounsel: the rebuild changes the buy-vs-build calculus. Pre-rebuild, building on direct foundation model access plus a separate citation verification step (Westlaw or Lexis API) could match CoCounsel's capability with more flexibility. Post-rebuild, CoCounsel runs on the same Claude models firms can access directly, plus content grounding that internal builds struggle to replicate. The buy case strengthened; the build case weakened modestly. Verify pricing directly with TR sales before locking in a procurement decision (the secondary-source tier prices cited above are not vendor-confirmed).
The Bottom Line: The CoCounsel rebuild on Anthropic is structurally larger than the press treatment suggests. For current customers, the calibration improvements reduce malpractice-grade hallucination risk and unlock long-document use cases that pre-rebuild CoCounsel handled poorly. For procurement teams, the rebuild aligns TR with Anthropic for the foreseeable future, which makes model-family choice across the vendor stack a real consideration. For competitors, the rebuild is one signal in a broader pattern: vertical vendors that rebuild on foundation models retain distribution moats; vendors that don't face structural capability lag in 2027-2028. Verify pricing directly with TR sales before quoting any of the secondary-source tier prices.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
