Anthropic's `/review-contract` skill — shipped February 2026 inside the open-source Cowork legal plugin — does clause-by-clause contract review against a configured negotiation playbook, returning GREEN, YELLOW, or RED flags plus suggested redlines. The market reaction in February was severe: Thomson Reuters dropped 16%, RELX 14%, and Wolters Kluwer 13%, with roughly $285 billion in combined market cap wiped out in a single trading session per Canadian Lawyer's coverage. This is the operator's read on what `/review-contract` actually does, how it compares to Spellbook and Harvey on the same task, and where the open-source plugin fits in a real firm's contract review stack. Most coverage stayed on the market reaction; here's the workflow.


What /review-contract actually does, clause by clause

Per the Anthropic plugin documentation and the GitHub repo, the `/review-contract` workflow runs in three stages:

Stage 1 — Configure the playbook. The firm or in-house team writes a YAML playbook defining acceptable indemnity caps, preferred forum selection, IP assignment defaults, payment terms, limitation of liability ceilings, termination triggers, and any deal-specific red lines. The playbook is the firm's negotiation doctrine in machine-readable form. A typical playbook for a SaaS reseller might be 15-30 lines; for an M&A indemnity package, 100-200.

Stage 2 — Feed Claude the contract. Drop the document into Claude (via Cowork, claude.ai, or Claude For Word inside Microsoft Word). Invoke `/review-contract`. Claude reads the contract, walks each substantive clause, and compares it against the playbook.

Stage 3 — Get the flagged output. Each clause comes back with one of three flags:

- GREEN — clause matches the playbook; no negotiation needed
- YELLOW — clause deviates from the playbook in a negotiable way; Claude proposes redlines
- RED — clause violates a hard playbook rule; Claude flags it for counsel review and proposes a counter-clause

The output ships with the original clause text, the playbook rule that triggered the flag, the suggested redline, and a short rationale. For a typical SaaS MSA with 40-60 clauses, the review runs in 60-180 seconds depending on Claude tier and effort level.
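The per-clause output described above can be modeled as a small record type. A minimal Python sketch, assuming a hypothetical shape for the output — the field names here are illustrative, not the plugin's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Flag(Enum):
    GREEN = "green"    # matches playbook; no negotiation needed
    YELLOW = "yellow"  # negotiable deviation; redline proposed
    RED = "red"        # hard-rule violation; counsel review required

@dataclass
class ClauseReview:
    clause_heading: str
    original_text: str
    flag: Flag
    playbook_rule: str      # the playbook rule that triggered the flag
    suggested_redline: str  # empty for GREEN clauses
    rationale: str

def needs_counsel(review: ClauseReview) -> bool:
    """Only RED flags escalate to mandatory counsel review."""
    return review.flag is Flag.RED
```

Having the output in a typed record like this is what makes the downstream steps (triage queues, pilot metrics) scriptable rather than copy-paste work.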

The second-order effect: a contract reviewer's first pass — typically 45-90 minutes for a senior associate on a mid-complexity contract — gets compressed to a 5-10 minute review of Claude's flagged output. That's the time-saving math that drove the $285B market reaction in February.

How the playbook configuration actually works

The killer feature is configurability, not the workflow. Most clause-by-clause review tools ship with vendor-defined defaults that don't match firm-specific risk tolerance. `/review-contract` ships with no defaults — the firm writes the playbook.

Playbook structure is straightforward YAML:

```yaml
indemnity:
  max_cap: 12_months_fees
  carve_outs: [gross_negligence, willful_misconduct, ip_infringement]
  flag_red_if: cap_exceeds_24_months
forum:
  preferred: New York or Delaware
  flag_red_if: jurisdiction_other_than_us
ip_assignment:
  default: work_for_hire
  flag_yellow_if: license_only
  flag_red_if: contractor_retains_ownership
```

The firm checks the YAML into git, version-controls negotiation doctrine across the team, and updates it when a partner revises a position. New associates inherit the senior partner's playbook automatically — the document is the institutional knowledge.
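To make the mechanics concrete, here is a minimal Python sketch of how a rule like the indemnity cap could be evaluated. The playbook is mirrored as a plain dict rather than parsed from YAML, and the thresholds are the illustrative ones from the sample playbook — this is not the plugin's actual evaluation code:

```python
# Indemnity rules from the sample playbook, mirrored as a dict:
# a 12-month cap is doctrine; anything over 24 months is a hard violation.
PLAYBOOK = {
    "indemnity": {
        "max_cap_months": 12,
        "red_threshold_months": 24,
    },
}

def flag_indemnity_cap(cap_months: int) -> str:
    """Map an indemnity cap, in months of fees, to a review flag."""
    rules = PLAYBOOK["indemnity"]
    if cap_months <= rules["max_cap_months"]:
        return "GREEN"   # within doctrine; no negotiation needed
    if cap_months <= rules["red_threshold_months"]:
        return "YELLOW"  # negotiable deviation; propose a redline
    return "RED"         # hard-rule violation; escalate to counsel
```

The point of the YAML-in-git model is that a threshold change (say, a partner tightening the RED line from 24 to 18 months) is a one-line diff the whole team inherits.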

For in-house legal ops teams that have spent 18 months trying to build internal contract review tools through engineering tickets, this is an end-run around the build cycle. The playbook lives in YAML, not in a vendor's proprietary database. Migration cost between Claude tiers or even between LLM providers is the cost of re-pointing a YAML file.

The third-order effect: firms that use `/review-contract` build a tangible asset (the playbook) over time. After 12-18 months of refinement, the YAML captures the firm's negotiation IP — the kind of institutional knowledge senior partners traditionally carried in their heads. That's a knowledge management win, not just a workflow win.

How /review-contract compares to Spellbook and Harvey clause review

All three tools do contract clause review. The structural differences:

Spellbook. Native Word add-in, ships with a precedent library and the Spellbook Library precedent learning feature. Pricing is quote-only per Spellbook's site — industry estimates per Artificial Lawyer reporting suggest $180-$300/seat/month with a $199/seat/month enterprise minimum at 10 seats, not vendor-confirmed. Strong on transactional contracts (NDAs, MSAs, M&A docs). Configurability is good but works through Spellbook's UI, not raw YAML.

Harvey. Quote-only enterprise pricing — industry estimates per Artificial Lawyer's June 2025 reporting suggest $1,200-$2,000+/seat/month at AmLaw scale, not vendor-confirmed. Sized for AmLaw 100 with deep workflow customization, multi-tool integration (DMS, deal rooms, contract repositories), and dedicated customer success. The fit is BigLaw-grade matter management more than solo clause review.

Anthropic /review-contract via Cowork. Open source, hosted on GitHub, free. Cost is the underlying Claude tier ($17-$20/user/month for Pro at annual billing or $25/seat/month for Team at monthly billing per Anthropic pricing). Configurability is YAML-based. Works inside Claude For Word, claude.ai, or Cowork directly.

Where /review-contract wins on fit: solo and small-firm transactional work, in-house legal ops teams under 50 contracts/month, and any team that values configurable doctrine over vendor-defined defaults. Where it doesn't: AmLaw 100 deal teams running multi-billion-dollar M&A closings with dedicated deal-room integration and 24/7 customer success — that's still Harvey or CoCounsel territory.

The price takeaway: at sub-$25/seat/month with playbook configurability the user owns, `/review-contract` is the cheapest competent contract review tool on the market for solo through mid-market firms in 2026. Spellbook and Harvey ship more pre-built workflows, but at 8-80x the cost depending on tier.

Privilege and the 'assistance not advice' disclaimer

The Cowork plugin's outputs ship with explicit "assistance not advice" disclaimer language. That's not boilerplate — it's the integrity guardrail Anthropic built in deliberately. The plugin is a workflow tool, not counsel.

The context that matters: *United States v. Heppner* (SDNY, February 17, 2026) ruled that written exchanges between criminal defendant Bradley Heppner and the consumer Claude product were not protected by attorney-client privilege or the work-product doctrine. The court reasoned that Claude isn't an attorney, so privilege doesn't attach, and that Heppner generated the materials independently of counsel's direction, so work-product protection doesn't either.

Applying Heppner to `/review-contract`: the contract being reviewed is a client deliverable, and the flagged output is Claude's analysis. If the analysis is shared between attorney and client at the attorney's direction, work-product privilege likely attaches under standard doctrine. If the client runs `/review-contract` independently and shares the output with the attorney, the analytical content is closer to client-prepared work product — protection is fact-specific.

The operational fix: firms running `/review-contract` should run it on the Claude Team or Enterprise tier (Anthropic does not train on Team/Enterprise/API inputs per Anthropic's data handling page) and document the deployment surface in the engagement letter. For privileged matters, the Team tier ($20-$25/seat/month) is the minimum. The free and Pro consumer tiers carry the Heppner-style risk, where the default is no privilege protection.

The second-order effect: most firms haven't updated their AI use policies to name model versions and deployment tiers since 2024. Cowork plugin deployment forces the policy update — the policy now needs to specify Claude Team or higher, the playbook governance, and the documentation expectation.

Real-world deployment: a 5-step rollout for a 25-attorney firm

Translation of the Cowork plugin into a deployable workflow for a typical mid-market firm:

Step 1 — Procure Claude Team. 25 attorneys at $20/seat/month annual = $500/month or $6,000/year (per Anthropic pricing). The Team tier includes admin controls and explicit data-protection guarantees Anthropic doesn't extend to Pro or consumer accounts. Setup time: 2 hours including SSO configuration.

Step 2 — Write the firm playbook. Senior partner or contracts-focused partner spends 4-6 hours documenting the firm's negotiation doctrine in YAML. Cover the top 8-10 clause categories the firm encounters most: indemnity, limitation of liability, IP, payment terms, termination, forum selection, change of control, confidentiality. The first version is rough; iterate over 6-8 weeks of real reviews.

Step 3 — Pilot on inbound NDAs and MSAs. Pick the 2-3 attorneys handling the most contract review work. Run `/review-contract` against the next 20-30 inbound contracts in parallel with traditional review. Track time savings and flag-accuracy errors. Iterate the playbook.
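Tracking the pilot deserves more than a spreadsheet column. A small Python sketch for aggregating the parallel-review data from Step 3 — the field names are assumptions, not a prescribed format:

```python
def pilot_summary(reviews: list[dict]) -> dict:
    """Aggregate time savings and flag accuracy across pilot contracts.

    Each entry records the traditional review time, the time spent
    checking Claude's flagged output, and how many of Claude's flags
    the reviewing attorney agreed with.
    """
    manual = sum(r["manual_minutes"] for r in reviews)
    assisted = sum(r["assisted_minutes"] for r in reviews)
    agreed = sum(r["flags_agreed"] for r in reviews)
    total = sum(r["flags_total"] for r in reviews)
    return {
        "minutes_saved": manual - assisted,
        "flag_agreement_rate": agreed / total if total else None,
    }
```

Two contracts reviewed both ways already produce a defensible number for the partner meeting; 20-30 produce a trend worth iterating the playbook against.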

Step 4 — Document privilege and policy. Update the firm's AI use policy to specify Claude Team as the deployment surface, document the playbook governance process, and add a notation in matter-opening checklists about Claude usage on the matter. Engagement letters reference AI use generally per the firm's standard language.

Step 5 — Roll out firm-wide. After 6-8 weeks of pilot, expand to the full 25-attorney roster. Rollout time: 1 week including training. Total deployment cost: ~$6,000 in Claude licensing, ~12 hours of partner time on playbook configuration, plus a 6-8 week pilot window.

Compare to Spellbook ($180-$300/seat/month industry estimate per Artificial Lawyer = $54,000-$90,000/year for 25 attorneys, not vendor-confirmed) or Harvey ($1,200-$2,000+/seat/month industry estimate = $360,000-$600,000/year, not vendor-confirmed). The Cowork-on-Claude-Team stack runs at roughly 1-15% of the cost depending on which incumbent you compare against. The tradeoff is the partner-time investment in playbook configuration. For most mid-market firms that's a multi-year ROI.
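The cost arithmetic in this comparison is easy to check. A quick Python sketch using the per-seat figures cited above (the incumbent figures are industry estimates, not vendor-confirmed):

```python
def annual_cost(seats: int, per_seat_per_month: float) -> float:
    """Annual licensing cost at a flat per-seat monthly price."""
    return seats * per_seat_per_month * 12

SEATS = 25
claude_team = annual_cost(SEATS, 20)      # Claude Team, annual billing
spellbook_low = annual_cost(SEATS, 180)   # low-end industry estimate
harvey_low = annual_cost(SEATS, 1200)     # low-end industry estimate

# Claude Team as a fraction of each incumbent's low-end estimate:
# roughly 11% of Spellbook and under 2% of Harvey.
ratios = (claude_team / spellbook_low, claude_team / harvey_low)
```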

The bottom line: `/review-contract` is the cheapest competent clause-by-clause review tool in legal AI right now for solos through mid-market firms. The playbook configurability means the firm owns its negotiation doctrine in YAML rather than renting it from a vendor's proprietary database. For AmLaw 100 deal teams running multi-billion-dollar closings with dedicated customer success expectations, Spellbook or Harvey still win on workflow integration. For everyone else under that ceiling, the Cowork-on-Claude-Team stack at $6,000/year for 25 attorneys is the procurement conversation that actually closes.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.