Anthropic's open-source legal skills, published February 2026 to github.com/anthropics/knowledge-work-plugins/tree/main/legal under a permissive license, are the procurement bypass nobody's talking about. The repo ships `/review-contract` and `/triage-nda` as the headline skills, but the architecture matters more than the pre-built workflows: any firm with a YAML editor and a senior counsel can fork the repo, encode its own negotiation doctrine, and ship custom legal skills inside Claude without paying a vendor or signing an MSA. Harvey and CoCounsel ship locked-in proprietary workflows at quote-only enterprise pricing (industry estimates: $1,200-$2,000+/seat/month for Harvey, $75-$500/user/month for CoCounsel, per Costbench March 2026 secondary-source data; neither figure vendor-confirmed); Anthropic published the workflow primitives free. This is the operator's read on what open-source legal skills actually unlock for firm customization in 2026.
What 'open source' means for a legal skill — and why YAML is the unlock
The Cowork legal plugin's open-source release isn't just "the code is on GitHub." It's a structural shift in how legal AI workflows get built. The repo ships three layers:
- The skill primitives (`/review-contract`, `/triage-nda`) — the prompt engineering, output structure, and integration patterns that make a clause-by-clause review or rapid NDA triage actually work.
- Example YAML playbooks — sample configurations covering common contract types (NDA, MSA, SaaS subscription, professional services).
- Documentation — installation, configuration, and customization guidance.
The firm's customization happens in YAML. A typical playbook is 50-200 lines of declarative configuration covering acceptable indemnity caps, preferred forum, IP assignment defaults, payment terms, and any deal-specific red lines. The senior counsel writes the YAML; the firm's IT or legal-ops team checks it into git and version-controls it; every subsequent skill invocation runs against that doctrine.
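To make the shape concrete, here is a sketch of what such a playbook might look like. The field names below are hypothetical, invented for illustration; the repo's example playbooks define the actual schema, so check those before writing your own.

```yaml
# Illustrative playbook sketch. Field names are hypothetical,
# not the repo's actual schema; see the example playbooks in
# anthropics/knowledge-work-plugins for the real format.
playbook: firm-default-msa
owner: senior-counsel@firm.example
positions:
  indemnity_cap:
    preferred: "12 months fees"
    acceptable_max: "18 months fees"
    carve_outs: [ip_infringement, gross_negligence, confidentiality_breach]
  governing_law:
    preferred: "New York"
    acceptable: ["Delaware", "England and Wales"]
  ip_assignment:
    default: "each party retains pre-existing IP; deliverables assign on payment"
  payment_terms:
    preferred: "net 30"
    acceptable_max: "net 60"
red_lines:
  - unlimited_liability
  - unilateral_termination_for_convenience_without_fee
  - exclusive_jurisdiction_outside_approved_forums
```

The point is the shape, not the specifics: declarative positions with preferred and fallback values, plus hard red lines, checked into git like any other reviewed artifact.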
The second-order effect: the firm owns its negotiation doctrine in machine-readable, portable form. If Anthropic raises Claude pricing in 2027, the firm migrates to a competing model by re-pointing the YAML — the doctrine isn't locked into a vendor's proprietary database. Compare to Spellbook or Harvey, where the firm's customizations live in a vendor-controlled UI and don't port to alternative tools.
The third-order effect: the YAML playbook becomes an institutional knowledge asset. Senior partners traditionally carried negotiation doctrine in their heads. After 12-18 months of `/review-contract` use, the firm has a documented, version-controlled, reviewable representation of how it actually negotiates. When the senior partner retires or rotates, the playbook stays. New associates inherit the firm's doctrine automatically.
Customization patterns: forking the skills for firm-specific workflows
The repo is forkable under a permissive license. Three customization patterns law firms are running:
Pattern 1 — Practice-area-specific skills. Fork the base `/review-contract` and rename to `/review-employment-agreement`, `/review-saas-msa`, or `/review-ma-purchase-agreement`. Configure the YAML playbook with practice-area-specific clause libraries and risk thresholds. A litigation boutique might create `/review-protective-order`. An M&A firm might create `/review-term-sheet`. The skill primitives are the same; the configurations are domain-specific.
Pattern 2 — Client-specific playbooks. A firm with major clients running standardized vendor terms can configure a per-client playbook. For Client A's SaaS reseller agreements, indemnity caps at 12 months fees with carve-outs for IP and gross negligence. For Client B's enterprise customer agreements, tighter limitation of liability and stricter forum selection. Each playbook is a YAML file in the firm's repo; the senior associate invokes the right one for the matter.
Pattern 3 — Risk-tier escalation. Configure the playbook to flag clauses by graduated risk tier rather than the flat GREEN/YELLOW/RED scheme. A more granular model (RISK_TIER_1 through RISK_TIER_4, each with its own routing) works for high-volume in-house teams that want graduated handoffs (Tier 1 = paralegal executes, Tier 4 = GC review). The skill code stays the same; the YAML adds the tiers.
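A tiered escalation model might be sketched like this. The tier names and routing keys are illustrative assumptions, not the repo's actual configuration schema:

```yaml
# Hypothetical tiered-escalation config; keys are illustrative,
# not the shipped skill's schema.
risk_model: tiered   # instead of a flat GREEN/YELLOW/RED flag
tiers:
  RISK_TIER_1:
    description: "standard terms, within playbook positions"
    route_to: paralegal
    action: execute
  RISK_TIER_2:
    description: "minor deviations, within acceptable fallback bounds"
    route_to: associate
    action: negotiate
  RISK_TIER_3:
    description: "material deviations or missing protective clauses"
    route_to: senior_contracts_attorney
    action: full_review
  RISK_TIER_4:
    description: "red-line hits or novel risk"
    route_to: general_counsel
    action: escalate
```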
For each pattern, the firm checks the customization into its own git repository, separate from the upstream Anthropic repo. The firm pulls upstream updates when Anthropic ships skill improvements; the firm-specific YAML doesn't conflict.
The pickable side on FIT: for any firm with senior counsel willing to invest 6-15 hours of initial playbook configuration time, open-source legal skills are the cheapest path to custom workflows in legal AI right now. (deeper read on Cowork vs vendor plugins firm decision)
What firms can build that they couldn't before
The skill primitives unlock workflows that previously required custom development through an internal engineering team or a $50K-$300K vendor implementation engagement. Concrete examples:
In-house NDA pipeline at a 200-person tech company. Configure `/triage-nda` against the company's risk tolerance. Auto-approve mutual NDAs from approved counterparty types. Route one-way NDAs from large-revenue counterparties to the senior contracts attorney. Flag M&A-adjacent NDAs to the GC. Total deployment time: 3-5 days including playbook configuration and pilot. Total annual cost: ~$1,200-$3,000 in Claude Team licensing for a 5-person legal ops team. Compare to LinkSquares or Ironclad enterprise CLM at $30,000-$100,000+/year industry estimates.
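The routing logic described above might look something like this in playbook form. The keys and the revenue threshold are hypothetical, sketched for illustration rather than taken from the shipped `/triage-nda` skill:

```yaml
# Sketch of /triage-nda routing rules for an in-house team.
# Keys and thresholds are hypothetical; the shipped skill's
# config format may differ.
triage_rules:
  - match: { nda_type: mutual, counterparty_category: approved_vendor }
    disposition: auto_approve
  - match: { nda_type: one_way, counterparty_annual_revenue_over: 50000000 }
    disposition: route
    route_to: senior_contracts_attorney
  - match: { context_signals: [acquisition, diligence, term_sheet] }
    disposition: route
    route_to: general_counsel
  - match: default
    disposition: route
    route_to: legal_ops_queue
```

Rules evaluate top to bottom, with a catch-all at the end so nothing falls through untriaged, which is the property an in-house pipeline actually needs.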
Plaintiff's contingency-fee firm reviewing inbound retainer agreements. Fork `/review-contract` and configure for retainer-specific risk thresholds (acceptable contingency percentages, fee disputes, costs allocation, termination triggers). Senior attorney spends 4 hours on initial YAML configuration; subsequent retainer reviews compress from 30-45 minutes to 5-10 minutes per agreement. For a firm reviewing 30+ inbound retainers/month, that's 12-17 hours/month of senior-attorney time recovered.
Multi-jurisdictional firm normalizing playbooks across offices. A firm with offices in New York, London, and Singapore configures three playbook variants — one per jurisdiction's standard clause expectations. Senior associates can invoke the right playbook per matter; the firm's doctrine is consistent within jurisdiction even when individual associates differ.
Public defender's office running indigent intake. Configure a custom skill `/triage-civil-rights-case` that pre-screens incoming civil rights matters against the office's prioritization criteria (statute of limitations, evidentiary strength signals, jurisdictional fit). The skill is built once and reused across the office's intake pipeline.
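A prioritization config for that intake skill might be sketched as follows; field names, venues, and thresholds are all hypothetical placeholders an office would replace with its own criteria:

```yaml
# Illustrative prioritization criteria for a custom
# /triage-civil-rights-case skill; schema is hypothetical.
skill: triage-civil-rights-case
criteria:
  statute_of_limitations:
    flag_if_within_days: 90
    weight: high
  evidentiary_strength:
    signals: [contemporaneous_records, named_witnesses, medical_documentation]
    weight: medium
  jurisdictional_fit:
    accepted_venues: ["N.D. Cal.", "C.D. Cal."]
    weight: gating
routing:
  priority: attorney_review_within_48h
  standard: weekly_intake_queue
  declined: referral_letter_template
```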
In each case, the open-source repo lowers the engineering barrier. The firm doesn't need to hire a vendor, sign an MSA, or wait 6 months for an implementation. The firm forks the repo, configures YAML, and runs. (read the in-house counsel deployment checklist)
Governance: who owns the playbook and how to version-control it
Open-source legal skills shift governance from "the vendor maintains it" to "the firm maintains it." That's a procurement win and a governance responsibility.
Ownership. The playbook YAML lives in the firm's git repository. A senior partner or designated GC owns the negotiation doctrine the YAML encodes. The legal-ops team or IT manages the technical deployment (forking, version control, CI/CD if the firm wants automated playbook validation).
Version control. Every playbook change is a git commit. When the firm updates indemnity caps from 12 months fees to 18 months fees, the change is visible in the diff, attributable to the partner who approved it, and reversible if it doesn't work in practice. That's an audit trail most law firms have never had on negotiation doctrine.
Review cadence. Quarterly partner review of the playbook against the firm's actual deal flow. Did the GREEN flags hold up? Did YELLOW flags lead to negotiable outcomes or hard objections? Should any clause categories be moved up or down a tier? The review takes 2-3 hours per quarter and produces incremental playbook updates.
Privilege and confidentiality. The playbook itself is firm work product. It encodes the firm's negotiation strategy and risk tolerance. Don't publish it externally; the git repo should be private, with access controlled by partner role.
AI use policy alignment. The firm's AI use policy needs to name Cowork plugin deployment specifically: the deployment tier (Claude Team or higher), the playbook governance process, and the documentation expectation for any contract categorized as RED or escalated to FULL REVIEW. Most firm AI policies haven't been updated since 2024 and don't yet cover Cowork deployment. (firm AI policy template spoke)
The second-order effect: firms that govern playbook updates well end up with a better-documented negotiation doctrine than firms that don't. The audit trail is positive evidence for ethics inquiries, internal partner reviews, and post-matter analysis. The third-order effect: firms can selectively share anonymized playbook patterns with industry counterparts (in CLE settings, bar association working groups) — building the firm's reputation as a sophisticated AI deployment shop.
Cost comparison: open-source skills + Claude Team vs proprietary vendor stacks
The procurement math is the cleanest argument for open-source legal skills. Calibrated by firm size:
Solo practitioner. Claude Pro at $17/user/month billed annually = $204/year. Open-source Cowork plugin: free. Total: $204/year. Compare to Spellbook, quote-only with an industry-estimated $199/seat/month at a 10-seat enterprise minimum (not vendor-confirmed); a solo typically can't buy Spellbook seats anyway.
5-person in-house legal team. Claude Team at $20/seat/month billed annually × 5 seats = $1,200/year. Open-source plugins: free. Total: $1,200/year. Compare to Spellbook at an industry-estimated $180-$300/seat/month × 5 seats = $10,800-$18,000/year (not vendor-confirmed), or LinkSquares mid-market at an estimated $30,000-$80,000/year. The Anthropic stack runs roughly 9-67x cheaper.
25-attorney mid-market firm. Claude Team at $20/seat/month annual × 25 seats = $6,000/year. Plus 4-6 hours of partner time on playbook configuration. Compare to Spellbook industry estimate $54,000-$90,000/year for 25 seats (not vendor-confirmed), or Harvey AI industry estimate $360,000-$600,000/year (not vendor-confirmed). The Anthropic stack runs at 9-100x cheaper.
100-attorney firm. Claude Team at $20/seat/month annual × 100 seats = $24,000/year. Plus 12-15 hours of senior-counsel time on playbook configuration spread across practice areas. Compare to Harvey industry estimate $1,440,000-$2,400,000/year for 100 seats (not vendor-confirmed). The Anthropic stack runs at 60-100x cheaper at this scale.
The pickable side on FIT: the open-source Cowork stack is the cheapest path to custom legal AI workflows for any firm with senior counsel time available for playbook configuration. The tradeoff against Spellbook or Harvey is pre-built workflow depth and customer success. Solos and mid-market firms typically benefit more from cost savings than from pre-built workflow depth. AmLaw 100 firms with active Anthropic deals (Freshfields is the public reference) increasingly run open-source skills alongside vendor stacks during a 6-12 month transition. (read the Freshfields × Anthropic analysis)
The Bottom Line: open-source legal skills shift custom legal AI workflows from a vendor-implementation conversation to a YAML-configuration conversation. For solos and mid-market firms, that's the cheapest path to custom workflows in 2026. For BigLaw, the open-source skills run alongside vendor stacks during transition. The firm-customization upside (practice-area variants, client-specific playbooks, jurisdictional normalization) is real but requires senior-counsel time and disciplined playbook governance. The procurement savings are 9-100x against Spellbook and Harvey at industry-estimate pricing. The institutional-knowledge upside (documented, version-controlled negotiation doctrine) compounds over 12-18 months.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
