The NIST AI Risk Management Framework (AI RMF 1.0) is the most comprehensive AI governance framework available in the US, and it's completely free. Problem is, it was written for tech companies and government agencies, not law firms. The framework's four core functions — GOVERN, MAP, MEASURE, MANAGE — translate directly to law firm operations, but nobody's done the translation work until now.
ISO 42001 (the international AI management system standard) gets all the press, but it costs money to certify and requires formal audit processes most mid-size firms can't justify. NIST AI RMF gives you 80% of the substance with zero certification cost. For managing partners who need an AI governance structure that satisfies clients, insurers, and bar regulators, this is the practical starting point.
GOVERN: Building Your Firm's AI Decision-Making Structure
NIST's GOVERN function establishes who makes AI decisions and how. For law firms, this translates to three concrete requirements.
First, designate an AI governance owner. This doesn't have to be a full-time role. In most firms it's a tech-savvy partner working with the IT director and a compliance/ethics point person. What matters is that someone owns AI policy decisions and has authority to approve or block AI tools.
Second, create an AI acceptable use policy. NIST calls this "organizational policies and processes." For law firms: which AI tools are approved, what data can enter them, who can use them, what verification is required, and what documentation must be maintained. This isn't optional paperwork — it's the supervisory framework ABA Opinion 512 requires under Rules 5.1 and 5.3.
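If you want the policy to be more than a PDF in a shared drive, it helps to treat it as structured data your governance owner can actually enforce. Here's a minimal sketch in Python; every tool name and field value is a made-up illustration, not a recommendation:

```python
# Illustrative sketch of an AI acceptable use policy as structured data.
# Tool names, roles, and field values are hypothetical examples, not
# endorsements; adapt the schema to your firm's actual policy.

ACCEPTABLE_USE_POLICY = {
    "approved_tools": {
        "ExampleResearchAI": {  # hypothetical tool name
            "permitted_data": ["public case law", "anonymized fact patterns"],
            "prohibited_data": ["privileged communications", "client PII"],
            "permitted_roles": ["attorney", "paralegal"],
            "verification": "attorney must verify every citation and quote",
            "documentation": "log each use in the matter file",
        },
    },
    "default_rule": "unlisted tools are prohibited pending governance review",
}
```

The schema mirrors the five policy questions above: which tools, what data, who, what verification, what documentation.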
Third, establish an AI risk tolerance statement. NIST requires organizations to define how much AI risk they'll accept. For law firms, this means: Will you use AI for client-facing work product? For internal research only? For drafting that always gets human review? Your risk tolerance drives every downstream decision.
MAP: Identifying Where AI Creates Risk in Your Practice
The MAP function requires you to catalog every AI use in your firm and assess the risk each one creates. Most firms skip this step and jump straight to policy — which is like writing a security plan without knowing what you're securing.
Start with a use-case inventory. Every AI application in your firm falls into a risk tier:
- High risk: AI generating client-facing work product (briefs, memos, contract language), AI processing privileged communications, AI making or influencing case strategy decisions.
- Medium risk: AI-assisted legal research with human verification, AI drafting internal communications, AI summarizing depositions or discovery.
- Low risk: AI for scheduling, administrative tasks, marketing content, internal knowledge management.
Each risk tier gets different controls. High-risk uses need mandatory verification, documentation, and partner review. Medium-risk uses need verification protocols. Low-risk uses need basic data handling awareness. Map every AI tool to its use cases, and every use case to a risk tier. This inventory is also what your malpractice insurer will ask for.
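To keep the inventory auditable, store it as structured data rather than prose. A sketch of what that could look like, with all tools and use cases hypothetical:

```python
# Illustrative use-case inventory mapping AI tools to risk tiers and the
# controls each tier requires. All tool and use-case names are hypothetical.

CONTROLS_BY_TIER = {
    "high":   ["mandatory verification", "written documentation", "partner review"],
    "medium": ["verification protocol"],
    "low":    ["basic data handling awareness"],
}

INVENTORY = [
    {"tool": "DraftAssistant", "use_case": "client-facing brief drafting",    "tier": "high"},
    {"tool": "ResearchBot",    "use_case": "legal research with human check", "tier": "medium"},
    {"tool": "SchedulerAI",    "use_case": "calendar and intake scheduling",  "tier": "low"},
]

for entry in INVENTORY:
    controls = CONTROLS_BY_TIER[entry["tier"]]
    print(f"{entry['tool']}: {entry['use_case']} -> {entry['tier']} risk; "
          f"controls: {', '.join(controls)}")
```

Every tool maps to at least one use case, every use case maps to exactly one tier, and the tier determines the controls. That's the whole MAP exercise in three lines of structure.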
MEASURE: Tracking Whether Your AI Controls Actually Work
GOVERN sets the rules. MAP identifies the risks. MEASURE tells you whether your controls are working. This is where most governance frameworks become shelfware — firms write policies but never check if anyone follows them.
For law firms, MEASURE means tracking four metrics:
1. Hallucination encounter rate: How often do attorneys find errors in AI output? Track this per tool and per practice group. If your hallucination encounters are zero, people aren't checking.
2. Policy compliance rate: Are attorneys using only approved tools? Are they documenting AI use as required? Random audits of AI usage logs (if your tools provide them) catch drift.
3. Verification completion rate: What percentage of AI research outputs get the required human verification? Require sign-off and track it.
4. Client disclosure rate: Are engagement letters and matter communications including AI disclosure language? Audit a sample quarterly.
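If your approved tools export usage logs, all four rates fall out of a short script. A sketch under the assumption that each log entry records whether the tool was approved, whether verification was signed off, whether an error was caught, and whether disclosure happened (the field names are invented for illustration):

```python
# Illustrative metrics computation over hypothetical AI usage log records.
# The field names (tool, approved, verified, error_found, disclosed) are
# assumptions about what a usage log might contain, not a real log format.

usage_log = [
    {"tool": "ResearchBot", "approved": True,  "verified": True,  "error_found": True,  "disclosed": True},
    {"tool": "ResearchBot", "approved": True,  "verified": True,  "error_found": False, "disclosed": True},
    {"tool": "ShadowAI",    "approved": False, "verified": False, "error_found": False, "disclosed": False},
]

n = len(usage_log)
rates = {
    "hallucination_encounter_rate": sum(e["error_found"] for e in usage_log) / n,
    "policy_compliance_rate":       sum(e["approved"] for e in usage_log) / n,
    "verification_completion_rate": sum(e["verified"] for e in usage_log) / n,
    "client_disclosure_rate":       sum(e["disclosed"] for e in usage_log) / n,
}

for metric, value in rates.items():
    print(f"{metric}: {value:.0%}")
```

Even a quarterly spreadsheet version of this beats no measurement at all; the point is that each metric has a denominator you actually track.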
NIST calls these "metrics and monitoring." For law firms, they're the evidence that your governance framework isn't just a document — it's an operational reality.
MANAGE: Responding When AI Risk Materializes
The MANAGE function covers what you do when something goes wrong. Not if — when. An AI tool hallucinates a citation that makes it into a filing. A vendor has a data breach. An associate uses an unapproved tool with privileged information.
Your MANAGE protocols need three tiers:
Tier 1 — Incident containment: Who gets notified? How fast? What's the escalation path? For AI hallucinations that make it into filed documents, you need a same-day response protocol that includes checking for sanctions exposure, notifying the supervising partner, and evaluating whether the court needs to be informed under Rule 3.3.
Tier 2 — Root cause analysis: Was this a tool failure, a process failure, or a training failure? Did the attorney skip verification? Did the tool perform worse than expected? Root cause determines whether you need a tool change, a process change, or a personnel action.
Tier 3 — Systemic response: If the incident reveals a pattern, what changes? Update the approved tool list, revise verification protocols, mandate retraining. Document every incident and every response. This documentation is your defense when bar counsel or a malpractice insurer asks what happened and what you did about it.
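One way to make that documentation habit stick is a standard incident record that forces all three tiers to be filled in before the matter is closed. A hedged sketch, with every field value hypothetical:

```python
# Illustrative incident record covering the three MANAGE tiers.
# Structure and values are hypothetical; adapt to your firm's protocol.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    description: str
    # Tier 1, containment: who was notified and how fast
    notified: list[str]
    same_day_response: bool
    sanctions_exposure_checked: bool
    # Tier 2, root cause: tool failure, process failure, or training failure
    root_cause: str
    # Tier 3, systemic response: what changed as a result
    systemic_changes: list[str] = field(default_factory=list)
    occurred: date = field(default_factory=date.today)

incident = AIIncident(
    description="Hallucinated citation found in draft brief before filing",
    notified=["supervising partner", "AI governance owner"],
    same_day_response=True,
    sanctions_exposure_checked=True,
    root_cause="process failure: verification step skipped",
    systemic_changes=["mandatory citation check before any filing"],
)
print(incident)
```

A record like this, kept for every incident, is exactly the paper trail bar counsel and insurers will ask to see.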
ISO 42001 Crosswalk: How NIST Maps to the International Standard
If your firm has international clients or wants formal AI management certification, ISO 42001 is the international standard for AI management systems. Here's how it maps to NIST AI RMF:
- ISO 42001 Clause 5 (Leadership) = NIST GOVERN. Both require executive commitment, policy creation, and role assignment. NIST is more prescriptive on organizational structure.
- ISO 42001 Clause 6 (Planning) = NIST MAP. Both require risk identification and assessment. ISO adds an "opportunities" assessment that NIST handles less formally.
- ISO 42001 Clause 9 (Performance evaluation) = NIST MEASURE. Both require monitoring, measurement, and internal audit. ISO requires formal audit cycles; NIST is more flexible.
- ISO 42001 Clause 10 (Improvement) = NIST MANAGE. Both cover corrective action and continuous improvement. ISO requires documented procedures; NIST recommends them.
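If you're tracking both frameworks during audit prep, the crosswalk reduces to a simple lookup. Here it is as a tiny Python mapping of the clauses above:

```python
# Crosswalk of ISO/IEC 42001 clauses to NIST AI RMF functions,
# restating the mapping described above.
ISO_TO_NIST = {
    "Clause 5 (Leadership)":             "GOVERN",
    "Clause 6 (Planning)":               "MAP",
    "Clause 9 (Performance evaluation)": "MEASURE",
    "Clause 10 (Improvement)":           "MANAGE",
}

# Example: find the ISO clause an auditor will map to each NIST function.
NIST_TO_ISO = {nist: iso for iso, nist in ISO_TO_NIST.items()}
print(NIST_TO_ISO["MEASURE"])  # -> Clause 9 (Performance evaluation)
```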
The practical difference: ISO 42001 requires third-party certification audit. NIST AI RMF is self-assessed. For most US law firms, NIST provides the substance without the certification cost. If a client requires ISO 42001 compliance, building on a NIST foundation makes the certification audit straightforward.
The Bottom Line: NIST AI RMF gives law firms a free, comprehensive AI governance framework that satisfies ABA Opinion 512 obligations, impresses institutional clients, and creates an auditable record for malpractice insurers. You don't need ISO certification to have serious AI governance. You need the four functions — GOVERN, MAP, MEASURE, MANAGE — operating in your firm.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
