You can't manage AI risk you haven't inventoried. That's not philosophy — it's the basic principle behind every risk management framework from NIST to ISO 27001. Yet most law firms have adopted 3-7 AI tools with zero structured documentation of what those tools access, what risks they create, and who's responsible for monitoring them.

A risk register fixes this. It's a structured inventory of every AI tool in your firm, the risks each one creates, the controls you've put in place, and who owns the ongoing supervision. It takes a day to build and an hour per quarter to maintain. It's also the document you'll wish you had when a client asks about your AI governance, a bar investigator asks about your supervision practices, or a data breach requires you to identify exactly what was exposed.


What Goes in an AI Risk Register

Each entry in your register captures seven data points per AI tool:

- Tool name and vendor: what it is and who provides it.
- Risk tier: low, moderate, or high, based on data sensitivity and decision impact.
- Data access: exactly what data the tool can reach (client files, internal documents, email, billing records).
- Use cases: what the tool is approved for and what it is explicitly prohibited from doing.
- Controls: the safeguards in place (enterprise licensing, data handling agreements, access restrictions, output verification requirements).
- Supervision owner: a named individual (not a committee) responsible for ongoing oversight.
- Review cadence: how often the entry is reviewed and updated. Quarterly for high risk, semi-annually for moderate, annually for low.

Seven fields. Multiply by however many tools you use. That's your register.
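If you want the structure machine-readable rather than spreadsheet-only, the seven fields map naturally to a record type. This is a minimal sketch: the `RegisterEntry` and `RiskTier` names, and the example field values, are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

# Review cadence per tier, in months, as described in the article:
# quarterly for high, semi-annual for moderate, annual for low.
REVIEW_MONTHS = {RiskTier.HIGH: 3, RiskTier.MODERATE: 6, RiskTier.LOW: 12}

@dataclass
class RegisterEntry:
    tool_name: str
    vendor: str
    risk_tier: RiskTier
    data_access: list[str]        # e.g. client files, email, billing records
    approved_uses: list[str]
    prohibited_uses: list[str]
    controls: list[str]           # e.g. enterprise license, output verification
    supervision_owner: str        # a named individual, not a committee

    @property
    def review_cadence_months(self) -> int:
        # The cadence is derived from the tier, so it never drifts out of sync.
        return REVIEW_MONTHS[self.risk_tier]
```

Deriving the review cadence from the risk tier (rather than storing it separately) keeps the two fields from contradicting each other as the register is edited.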

Risk Tiering: How to Classify Your AI Tools

High risk: tools that process client-identifiable information, access privileged communications, or generate content that will be filed with courts or sent to clients. Examples: legal research AI used for brief drafting, contract review tools processing client agreements, any tool with access to your document management system. Required controls: enterprise licensing with a data handling agreement, mandatory output verification, a named supervision attorney, quarterly review.

Moderate risk: tools that process firm internal data or assist with work product that undergoes significant human revision. Examples: AI writing assistants for internal memos, time entry tools, practice management AI features. Required controls: enterprise licensing, periodic output sampling, semi-annual review.

Low risk: tools that process only public information or provide general assistance. Examples: AI-powered legal research on public databases, scheduling assistants, marketing content tools. Required controls: standard vendor due diligence, annual review.
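The tiering logic above is mechanical enough to encode, which helps keep classification consistent across reviewers. A minimal sketch, assuming illustrative category labels of my own (the article does not define machine-readable categories):

```python
def classify_tool(data_access: set[str], output_destinations: set[str]) -> str:
    """Assign a risk tier from data sensitivity and decision impact.

    Category names ("client_identifiable", "court_filing", etc.) are
    illustrative assumptions standing in for the article's criteria.
    """
    high_risk_data = {"client_identifiable", "privileged_communications",
                      "document_management_system"}
    high_risk_outputs = {"court_filing", "client_deliverable"}

    # High: client-identifiable or privileged data, or output that reaches
    # courts or clients.
    if data_access & high_risk_data or output_destinations & high_risk_outputs:
        return "high"
    # Moderate: firm internal data, or drafts that get significant human revision.
    if "firm_internal" in data_access or "revised_work_product" in output_destinations:
        return "moderate"
    # Low: public information and general assistance only.
    return "low"
```

Note the ordering: a tool is evaluated against the high-risk criteria first, so a tool touching both internal and privileged data lands in the stricter tier.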

NIST AI Risk Management Framework: What Law Firms Should Borrow

NIST's AI RMF (AI 100-1) provides a structured approach that translates well to law firm use. It has four functions:

- Govern: establish AI policies, roles, and accountability structures. Your AI policy and risk register are your Govern artifacts.
- Map: identify where AI is used and what risks each use creates. Your risk register is your Map output.
- Measure: assess risk levels using consistent criteria. Your risk tiering system is your Measure methodology.
- Manage: implement and monitor controls. Your supervision assignments and review cadence are your Manage activities.

You don't need to implement the full NIST framework; it's designed for AI developers, not users. But adopting its vocabulary and structure makes your risk register defensible when regulators, clients, or insurers ask how you manage AI risk. Saying "we follow a NIST-aligned approach" carries weight.
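If you document the alignment, a simple mapping from NIST function to firm artifact is enough to answer "how are you NIST-aligned?" concretely. The artifact names below are the ones this article describes, not terms from the framework itself:

```python
# Which firm artifact satisfies each NIST AI RMF function.
NIST_ALIGNMENT = {
    "Govern":  ["AI policy", "risk register (accountability structure)"],
    "Map":     ["risk register (tool inventory and risk identification)"],
    "Measure": ["risk tiering criteria"],
    "Manage":  ["supervision assignments", "quarterly review cadence"],
}

def alignment_summary() -> str:
    """One line per NIST function, suitable for a governance memo."""
    return "\n".join(
        f"{function}: {', '.join(artifacts)}"
        for function, artifacts in NIST_ALIGNMENT.items()
    )
```

Keeping this mapping next to the register means the "NIST-aligned approach" claim is backed by a document, not just a phrase.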

Building Your Register: The One-Day Sprint

You can build a functional risk register in one day with the right people in the room.

Morning (3 hours): inventory session. Gather your IT director, managing partner, and 2-3 practice group leaders. List every AI tool in use, approved and unapproved. Be honest: include the ChatGPT accounts attorneys are paying for personally, the AI features embedded in your existing tools (Westlaw, Lexis, Microsoft 365 Copilot), and any browser extensions. Most firms discover 2-3 tools they didn't know were being used.

Afternoon (3 hours): classification and assignment. For each tool, assign a risk tier, document data access, define approved use cases, identify required controls, and assign a supervision owner.

Output: a completed register in spreadsheet format, ready for quarterly review. Don't overthink the format: a well-organized spreadsheet beats an elaborate GRC platform that nobody updates.
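Since the recommended output is a spreadsheet, the afternoon session can end by writing the register to CSV with one column per field. A minimal sketch; the column names and file name are assumptions based on the seven fields described earlier:

```python
import csv

# One column per register field, matching the seven data points.
COLUMNS = ["tool_name", "vendor", "risk_tier", "data_access",
           "use_cases", "controls", "supervision_owner", "review_cadence"]

def write_register(rows: list[dict], path: str = "ai_risk_register.csv") -> None:
    """Write register entries to a CSV that any spreadsheet app can open.

    Each row is a dict keyed by COLUMNS; missing keys raise an error,
    which catches incomplete entries before they reach the register.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)
```

`DictWriter` refuses rows with unknown keys by default, so a typo in a field name fails loudly instead of silently producing a malformed register.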

Maintaining the Register: Quarterly Review Process

A register that isn't maintained is just a historical document. Quarterly reviews should take 60-90 minutes and cover five questions per entry:

1. Has anything changed with this tool? (New features, updated terms of service, security incidents, vendor acquisition.)
2. Is the risk tier still accurate? (Usage patterns may have changed; new data types may be flowing through.)
3. Are the controls still functioning? (Is the supervision owner actually reviewing output? Is the data handling agreement still current?)
4. Should this tool be removed? (Low adoption, redundancy with another tool, unacceptable risk that wasn't addressed.)
5. Are there new tools to add? (Every quarter, ask practice groups what new AI tools they've started using.)

Update the register, circulate changes to supervision owners, and file the review notes. This is the documentation that proves your governance is active, not aspirational.
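One small automation worth having: computing each entry's next review date from its tier, so nothing silently slips past its cadence. A minimal sketch using only the standard library; the month arithmetic is simplified deliberately (days are capped at 28 to avoid invalid end-of-month dates):

```python
from datetime import date

# Months between reviews: quarterly (high), semi-annual (moderate), annual (low).
CADENCE_MONTHS = {"high": 3, "moderate": 6, "low": 12}

def next_review(last_review: date, tier: str) -> date:
    """Return the date the next review is due for an entry of this tier."""
    total_months = last_review.month - 1 + CADENCE_MONTHS[tier]
    year = last_review.year + total_months // 12
    month = total_months % 12 + 1
    day = min(last_review.day, 28)  # cap so e.g. Jan 31 + 3 months stays valid
    return date(year, month, day)

def overdue(entries: list[tuple[str, date, str]], today: date) -> list[str]:
    """Names of tools whose next review date has passed."""
    return [name for name, last, tier in entries
            if next_review(last, tier) < today]
```

Running `overdue` at the start of each quarterly session turns "did we miss anything?" into a one-line check.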

The Bottom Line: An AI risk register is the foundation of defensible AI governance. It takes one day to build, one hour per quarter to maintain, and it answers the questions that clients, regulators, bar investigators, and insurers will ask about your AI oversight. Every firm using AI needs one. The firms that build theirs now will be prepared when the question comes. The firms that don't will be scrambling to document what they should have been tracking all along.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.