By April 2026, 300+ federal judges had AI-related standing orders or local rules per Bloomberg Law's tracker, and the Charlotin AI Hallucination Cases Database catalogued 1,227 documented sanctions cases globally — accelerating at roughly 5-6 new cases per day. A firm AI policy that doesn't address those facts is operationally stale. This isn't a copy-paste policy template — fabricating specific clause text would invite the exact problems the policy is meant to prevent. It's the framework plus the necessary clauses, with citations to current authoritative sources, that a managing partner can hand to firm counsel and IT to operationalize before activating Claude seats.
Clause 1: Approved deployment surfaces (named, not generic)
AI policies that say "approved AI tools" without naming surfaces are stale on day one. Per Anthropic's pricing, Claude ships through five surfaces — claude.ai (Pro, Max, Team, Enterprise), the Claude API, AWS Bedrock, Vertex AI on Google Cloud, and Microsoft Foundry — and the privilege analysis differs across them.
The clause needs to specify:
- Named approved surfaces (e.g., "Claude Team Standard via claude.ai or Claude through Microsoft Foundry"). Not "Claude" alone.
- Tier minimums (e.g., "Claude Team Standard or higher; Claude Pro consumer tier is not approved for matter work"). Per Anthropic's terms, Team, Enterprise, and API inputs are not used for model training; Pro inputs may be.
- Excluded surfaces (e.g., "Consumer Claude Free, Claude Pro on personal accounts, third-party Claude wrappers without firm vetting").
- Surface change protocol. New surfaces require AI committee approval before deployment. Anthropic ships new surfaces faster than typical procurement cycles update; the policy needs a fast-path approval pattern.
The second-order read: per US v. Heppner (SDNY, Feb. 17, 2026), the privilege ruling addressed consumer Claude specifically; enterprise deployments carry different facts. The policy makes the deployment surface explicit so associates don't default to the consumer product they used at home. The "Heppner meets enterprise Claude" privilege-defense spoke covers the architecture.
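The surface-and-tier rules above reduce to an allowlist check that a provisioning or intake script might run. A minimal sketch, assuming a hypothetical firm-internal registry: surface names and tier ranks mirror the clause text, but the function and data shapes are illustrative, not any Anthropic or vendor API.

```python
# Hypothetical Clause 1 allowlist: surface names and tier minimums come from
# the policy text; the registry shape is an assumption, not a vendor schema.
APPROVED_SURFACES = {
    # surface -> minimum approved tier
    "claude.ai": "Team Standard",
    "microsoft-foundry": "Enterprise",
}
TIER_RANK = {"Free": 0, "Pro": 1, "Team Standard": 2, "Enterprise": 3}

def surface_approved(surface: str, tier: str) -> bool:
    """True only if the surface is named AND the tier meets the minimum."""
    minimum = APPROVED_SURFACES.get(surface)
    if minimum is None:
        return False  # unnamed surface: route to the AI committee fast-path
    return TIER_RANK[tier] >= TIER_RANK[minimum]

print(surface_approved("claude.ai", "Pro"))              # consumer tier fails
print(surface_approved("claude.ai", "Team Standard"))    # named surface, tier OK
print(surface_approved("third-party-wrapper", "Enterprise"))  # unnamed surface fails
```

The design choice worth noting: an unnamed surface fails closed rather than open, which is what makes the fast-path approval protocol in the clause necessary.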
Clause 2: Effort levels and model versions named explicitly
Per Anthropic's Opus 4.7 release docs, Claude Code defaults to xhigh effort level on all paid plans. AI policies that name "Claude" without naming effort levels and versions are stale on day one of any new release.
The clause needs to specify:
- Approved model versions by name and date (e.g., "Claude Opus 4.7 as of April 16, 2026 release; Claude Sonnet 4.6 for non-complex matters").
- Approved effort levels by use case. xhigh for complex multi-step legal reasoning; high for standard contract review; medium for routine drafting. Effort levels affect output token spend at $25 per million output tokens, per Anthropic's pricing.
- Version update protocol. New Claude versions require AI committee evaluation before approval — typically a 30-60 day review window with named pilot users.
- Documentation requirement for matter-specific effort and version choices when those choices materially affect output quality or cost.
The second-order read: associates running Claude Code without firm authorization are getting xhigh by default, which means the firm is paying premium output token rates without the procurement decision being explicit. The policy makes that explicit. The third-order read: AI policies without version specificity become legally meaningless as soon as new versions ship — and Anthropic has shipped 3+ Claude versions in the past 12 months. Version specificity is non-optional in a 2026 policy.
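Under the stated assumptions (version names, dates, and effort tiers taken from the clause above; the registry shape is hypothetical, not a vendor API), the version-and-effort rules reduce to a capped lookup:

```python
# Illustrative Clause 2 registry. Model names/dates mirror the policy text;
# the dict structure and effort cap are assumptions for the sketch.
from datetime import date

APPROVED_MODELS = {
    "claude-opus-4.7":   {"approved": date(2026, 4, 16), "max_effort": "xhigh"},
    "claude-sonnet-4.6": {"approved": date(2026, 4, 16), "max_effort": "high"},
}
EFFORT_BY_USE_CASE = {
    "complex-legal-reasoning": "xhigh",
    "contract-review": "high",
    "routine-drafting": "medium",
}
EFFORT_RANK = {"medium": 0, "high": 1, "xhigh": 2}

def effort_for(model: str, use_case: str) -> str:
    """Policy effort for the use case, capped at the model's approved maximum."""
    if model not in APPROVED_MODELS:
        raise ValueError(f"{model} not committee-approved; open a 30-60 day review")
    wanted = EFFORT_BY_USE_CASE[use_case]
    cap = APPROVED_MODELS[model]["max_effort"]
    return min(wanted, cap, key=EFFORT_RANK.__getitem__)

print(effort_for("claude-sonnet-4.6", "complex-legal-reasoning"))  # capped at "high"
```

The cap is the point: an associate's use case can request xhigh, but the firm's approved tier for that model wins, which keeps the premium output-token spend a deliberate procurement decision rather than a default.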
Clause 3: Citation verification protocol
Per the Charlotin AI Hallucination Cases Database, 1,227 sanctions cases had been documented globally as of early 2026, accelerating at roughly 5-6 new cases per day. Two notable recent sanctions: in April 2026, the Alabama Supreme Court ordered Mobile attorney W. Perry Hall to pay $17,200 and barred him from filing solo; he apologized for the AI hallucinations, then cited two more nonexistent cases in a footnote to the apology. And in Cherry Hill (NJ federal, April 27, 2026), attorney Raja Rajan was sanctioned after reportedly being unable to say whether he had used Claude, ChatGPT, or Grok.
The clause needs to specify:
- Verification step required for every AI-generated citation before filing, using Westlaw, Lexis, Bloomberg Law, or Google Scholar. The verification step is non-negotiable regardless of the model's calibration improvements.
- Verification documentation. A lawyer's file note or matter-management entry recording the verification step, which becomes part of the audit trail for malpractice defense.
- Specific verification responsibilities. The drafting attorney verifies primary citations; the reviewing partner spot-checks; the firm librarian or knowledge-management staff handles the final pre-filing audit on high-stakes matters.
- Sanctions reporting protocol. When sanctions are imposed against firm attorneys, internal reporting to the AI committee within 48 hours.
The second-order read: per GPT-5.5's calibration improvements, models are getting better — but the verification step is still required because the failure mode (sanctioned attorney) is asymmetric to the cost (verification time). Better calibration reduces the rate; it doesn't change the necessity of verification.
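A minimal sketch of what the verification audit trail might look like as a matter-management record, assuming a hypothetical field schema (`verified_in`, `verified_by`, and the other field names are invented for illustration, not any vendor's format):

```python
# Clause 3 sketch: one verification note per citation, with the required-fields
# check a matter-management system might enforce. Schema is hypothetical.
from datetime import datetime, timezone

REQUIRED = {"citation", "verified_in", "verified_by", "verified_at", "matter_id"}
APPROVED_DATABASES = {"Westlaw", "Lexis", "Bloomberg Law", "Google Scholar"}

def record_verification(entry: dict) -> dict:
    """Timestamp and validate a citation-verification file note."""
    entry = dict(entry, verified_at=datetime.now(timezone.utc).isoformat())
    missing = REQUIRED - entry.keys()
    if missing:
        raise ValueError(f"incomplete verification note, missing: {sorted(missing)}")
    if entry["verified_in"] not in APPROVED_DATABASES:
        raise ValueError("verification must use an approved citator")
    return entry

note = record_verification({
    "citation": "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
    "verified_in": "Westlaw",
    "verified_by": "drafting-attorney-id",
    "matter_id": "2026-0412",
})
```

The point of forcing a structured record rather than a free-text note: it gives the reviewing partner and the pre-filing audit something they can spot-check mechanically.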
Clause 4: Scratchpad and notes file storage
Per Anthropic's Opus 4.7 multi-session memory feature, Claude can hold context across sessions via a scratchpad/notes file. For long-running matters (M&A diligence, multi-day depositions, white-collar matters), this saves substantial context-loss tax. The files contain matter-specific reasoning, party identities, and analysis pathways. They're firm data assets and they're discoverable.
The clause needs to specify:
- Approved storage locations. Firm document management system (NetDocuments, iManage, or equivalent). Not personal devices, not personal cloud accounts, not third-party note services.
- Retention schedule. Match the firm's standard matter-file retention. Scratchpad files inherit the underlying matter's retention period.
- Access controls. Same as the underlying matter — partner, associates on the matter, conflict-walled per matter assignment.
- Discovery preservation hold. Scratchpad files fall under standard preservation hold protocol when triggered. The IT and litigation support functions need explicit scratchpad-file inclusion in their hold procedures.
- Cross-matter restriction. Scratchpad files are matter-specific. They don't carry over to unrelated matters. Each new matter starts with a fresh scratchpad.
The second-order read: scratchpad files create a new evidence category that didn't exist before April 2026. Treating them as ordinary work product without explicit protocol invites both privilege exposure and discovery surprises. The third-order read: opposing counsel with a sophisticated AI litigation practice will request scratchpad files in discovery within 12 months. Firms without the protocol in place will be playing catch-up.
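The retention, hold, and cross-matter rules can be sketched as a small policy check, assuming a hypothetical DMS record shape (the `ScratchpadFile` fields are invented for illustration):

```python
# Clause 4 sketch: scratchpad files inherit the matter's retention schedule,
# and a preservation hold always overrides retention expiry. Shapes are
# hypothetical, not any DMS vendor's schema.
from dataclasses import dataclass

@dataclass
class ScratchpadFile:
    matter_id: str
    retention_years: int      # inherited from the underlying matter
    under_hold: bool = False  # flipped when a preservation hold triggers

def may_delete(f: ScratchpadFile, age_years: int) -> bool:
    """Deletion allowed only past retention AND never while a hold is active."""
    return not f.under_hold and age_years >= f.retention_years

def open_new_matter(matter_id: str, retention_years: int) -> ScratchpadFile:
    """Cross-matter restriction: every new matter gets a fresh scratchpad."""
    return ScratchpadFile(matter_id=matter_id, retention_years=retention_years)

f = open_new_matter("2026-0099", retention_years=7)
f.under_hold = True
print(may_delete(f, age_years=10))  # False: the hold overrides retention expiry
```

The ordering of the two conditions is the substance of the clause: a triggered hold wins over an expired retention period, which is exactly where firms without explicit scratchpad protocol get surprised.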
Clause 5: Court disclosure language and federal standing order tracking
Per the federal court AI disclosure landscape, 300+ federal judges have AI-related standing orders or local rules, the first being Judge Brantley Starr (NDTX) in 2023. The requirements vary dramatically: some require naming the tool (ChatGPT-4 vs Claude vs Spellbook); some require identifying which sections AI drafted; some require certification that AI-generated citations were verified.
The clause needs to specify:
- Default firm AI-disclosure language for filings in jurisdictions that require disclosure but don't specify form. Pre-approved by the firm's professional responsibility partner.
- Per-court tracking. The firm maintains a current list of judges with AI-related standing orders and the specific disclosure each requires. Updated quarterly minimum.
- Filing review step. Before filing in any new federal court, associates check the assigned judge's standing orders for AI-disclosure requirements.
- Engagement letter notice. Clients are informed of AI use on their matters and the disclosure obligations the firm will follow. The default engagement letter clause covers this.
The second-order read: 300+ judges with AI-related orders is a fast-moving target. A policy that requires per-court tracking with quarterly updates puts the operational burden on the right place — the firm's central professional responsibility function — rather than expecting individual associates to track 300+ judges' orders correctly under filing pressure. The third-order read: the firms that get sanctioned in 2026-2027 will be the ones whose policy required individual associates to track standing orders. Centralized tracking is operational hygiene.
Clause 6: Engagement letter template language
Clients need to know what AI use looks like on their matters before the matter starts, not when sanctions surface. The engagement letter clause is the policy's contact point with the client.
The clause needs to specify:
- Default engagement letter language covering: the firm uses AI tools for first-pass drafting, document review, and research; the firm verifies AI-generated citations before filing; the firm follows applicable court disclosure requirements; the client may opt out of AI use on their matter at any time.
- Opt-out protocol. When clients opt out, how the matter team operationalizes the choice — typically by using AI for non-substantive tasks (administrative, scheduling) but not for matter substance.
- Sensitive-matter carve-outs. Some matters (sealed proceedings, attorney-client privileged investigations, high-confidentiality M&A) get default-no-AI status regardless of client position. Named in the matter intake form.
- Update protocol. When AI use protocols change materially, existing engagement letters get an update memo to clients.
The second-order read: engagement letter language is the artifact that determines whether the firm's AI use is compliant with ABA Model Rule 1.4 (communication with client) and analogous state rules. Without explicit client notice, AI use on matters where the client would reasonably expect human-only handling creates an MR 1.4 exposure that compounds over time. The third-order read: clients increasingly ask about AI use proactively. Firms with template language ready answer those questions in 30 seconds; firms without spend 30 minutes per inquiry redrafting language ad hoc.
Clause 7: AI committee charter and review cadence
Policy enforcement requires named ownership. The AI committee is the standing function that maintains the policy, approves new surfaces and tools, reviews sanctions reporting, and updates court tracking.
The clause needs to specify:
- Committee composition. Typically a managing partner, the firm's professional responsibility partner, the IT director, the head of legal-ops or knowledge management, and 1-2 senior associates with AI deployment experience. Five to seven members; smaller for sub-200 lawyer firms.
- Quarterly review cadence. Standing orders updated, sanctions reports reviewed, new model versions evaluated, vendor relationship status reviewed.
- Emergency review trigger. New federal court AI rule, sanctions case against a firm attorney, vendor security incident, or new Claude version with material behavior change. Committee convenes within 5 business days.
- Annual policy review. Full policy walkthrough with firm-wide circulation of updates. Most firms underweight this; the policy that gets updated quarterly with documentation is structurally stronger than the policy that gets rewritten annually with poor change tracking.
The second-order read: AI committees that exist on paper but don't meet quarterly produce the same risk as no committee at all. The charter needs operational teeth — named members, scheduled meetings, written meeting notes, escalation protocols. The third-order read: firms that take the AI committee seriously will be the ones whose policy is defensible to malpractice carriers when the inevitable claim surfaces.
The Bottom Line: A firm AI policy in 2026 has seven non-negotiable clauses: approved deployment surfaces, named effort levels and model versions, citation verification protocol, scratchpad file storage, court disclosure language with per-court tracking, engagement letter template, and an AI committee charter. This template is the framework, not boilerplate. Firms that copy-paste a generic AI policy without operationalizing each clause inherit the same risk exposure as firms with no policy at all. The firms that get this right will be the ones whose AI committee meets quarterly, tracks 300+ federal court orders centrally, instruments the deployment dashboard from day one, and updates the policy when Anthropic ships a new Claude version (which has been every 4-6 months recently). Operational discipline beats clause length.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
