Most law firms think they have AI governance because they published a policy. Having a policy and enforcing a policy are two different things. The firms that get sanctioned, lose client trust, or face malpractice exposure aren't the ones without policies — they're the ones whose policies exist on paper but not in practice.
This maturity assessment gives you an honest read on where your firm sits across five levels — from ad hoc (Level 1) to optimized (Level 5). Fewer than 10% of firms operate above Level 3. The assessment isn't about vanity scoring. It's about identifying the specific gaps that create risk and the specific actions that close them.
Level 1: Ad Hoc — No Formal Governance
Where you are: Individual attorneys use AI tools based on personal preference. There's no firm-wide policy on which tools are approved, how outputs should be verified, or what client data can enter AI systems. Some attorneys use ChatGPT. Some use Claude. Some use nothing. Nobody knows who's using what.

The risks: Data enters consumer AI tools with no confidentiality protections. AI-generated content reaches clients and courts without documented verification. If a hallucinated citation makes it into a filing, there's no governance trail showing the firm took reasonable precautions. With 300+ judges now requiring AI disclosure, attorneys at Level 1 firms may be violating standing orders without knowing they exist.

What to do: This is a 30-day fix. Draft a basic AI policy covering three things: which tools are approved for firm use, what data can and cannot enter AI systems, and the verification requirement for all AI-generated work product. Circulate it. Get signatures. Enforce it. You don't need a perfect policy; you need any policy. (A sketch of what a machine-checkable tool list can look like follows the indicators below.)

Level 1 indicators: No written AI policy. No approved tool list. No training on AI use. No disclosure templates. No audit trail for AI-assisted work.
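One way to make the "which tools, which data" rules enforceable from day one is to encode them in a form IT can check automatically. Below is a minimal sketch in Python; the tool names, data categories, and the check_use helper are hypothetical placeholders, not recommendations for specific vendors.

```python
# Hypothetical approved-tool registry. Tool names and data categories are
# placeholders; a real registry would mirror the firm's own policy terms.
APPROVED_TOOLS = {
    "vendor_llm_enterprise": {
        "allowed_data": {"public", "internal"},
        "verification_required": True,
    },
    "legal_research_assistant": {
        "allowed_data": {"public", "internal", "client_confidential"},
        "verification_required": True,
    },
}

def check_use(tool: str, data_class: str) -> bool:
    """Return True if this tool/data combination complies with the policy."""
    rules = APPROVED_TOOLS.get(tool)
    return rules is not None and data_class in rules["allowed_data"]

# An unapproved tool, or an approved tool fed disallowed data, fails the check.
assert not check_use("consumer_chatbot", "client_confidential")
assert check_use("vendor_llm_enterprise", "internal")
```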
Level 2: Emerging — Policy Exists, Enforcement Doesn't
Where you are: The firm has an AI policy. It was distributed via email or posted on the intranet. Most attorneys read it. Few follow it consistently. There's an approved tool list, but some attorneys still use unauthorized tools for convenience. There's a verification requirement, but nobody checks whether it's being followed.

The risks: The policy creates a false sense of security. If something goes wrong, the firm has a policy that proves it knew about the risk but can't demonstrate it took reasonable steps to manage it. For malpractice defense purposes, that's worse than Level 1.

What to do: Add enforcement mechanisms. Implement a quarterly audit: randomly sample 10-15 AI-assisted work products per practice group and confirm that verification procedures were followed (a sampling sketch follows the indicators below). Require AI disclosure certifications on every filing in courts with standing orders. Add AI governance to the annual compliance training, not as a standalone email but as a tracked module with a completion requirement. Assign an AI governance owner: one person responsible for policy enforcement.

Level 2 indicators: Written AI policy exists. Approved tool list exists. Training was delivered once. No audit process. No enforcement mechanism. No designated governance owner.
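Here is a minimal sketch of that sampling step, assuming the firm can export a list of AI-assisted work products from its document management system. The function name and the record fields ('id', 'practice_group') are illustrative, not a real DMS API.

```python
import random
from collections import defaultdict

def sample_for_audit(work_products, per_group=12, seed=None):
    """Randomly pick AI-assisted work products for the quarterly audit.

    work_products: list of dicts with at least 'id' and 'practice_group'.
    per_group: items to pull per practice group (10-15 per the policy).
    seed: set only when a reproducible sample is needed, e.g. for testing.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for wp in work_products:
        by_group[wp["practice_group"]].append(wp)

    sample = {}
    for group, items in by_group.items():
        # If a group produced fewer items than the target, audit all of them.
        sample[group] = rng.sample(items, min(per_group, len(items)))
    return sample
```

Sampling per practice group, rather than firm-wide, keeps one high-volume group from drowning out the others in the audit.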
Level 3: Defined — Processes Are Documented and Enforced
Where you are: AI governance is a defined operational function with documented processes. There's a governance owner (usually a senior partner or the CIO). Approved tools are provisioned through IT. Usage is tracked. Verification procedures are documented and audited. New AI tools go through a vetting process before anyone uses them. Training is recurring, not one-time.

The risks: At this level, the risks are operational, not existential. The firm has reasonable controls but may lack the metrics to prove they're working. If a regulatory investigation or malpractice claim arises, you can show what you did, but you may not be able to quantify how well it worked.

What to do: Add measurement. Track AI usage metrics (which tools, how often, by whom). Track compliance rates (what percentage of AI-assisted work follows verification procedures). Track incident rates (errors caught before they reached clients or courts). Start benchmarking against industry standards. Build the dashboard that tells the managing committee exactly where you stand (a sketch of the headline calculations follows the indicators below).

Level 3 indicators: Designated governance owner. IT-provisioned approved tools. Documented verification procedures. Recurring training program. Audit process in place. New tool vetting process. Missing: quantitative metrics, benchmarking, continuous improvement cycle.
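A minimal sketch of those headline numbers, assuming each audited item is logged as a simple record. The AuditRecord fields and the quarter format are assumptions; a real dashboard would pull the same figures from the firm's audit system of record.

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    quarter: str     # e.g. "2025-Q1"
    tool: str        # which approved tool produced the work
    verified: bool   # did the work follow verification procedures?
    incident: bool   # did an error reach a client or court?

def quarterly_metrics(records, quarter):
    """Compute headline dashboard numbers for one quarter."""
    rows = [r for r in records if r.quarter == quarter]
    if not rows:
        return None
    return {
        "quarter": quarter,
        "reviewed": len(rows),
        "compliance_rate": sum(r.verified for r in rows) / len(rows),
        "incident_rate": sum(r.incident for r in rows) / len(rows),
        "tools_in_use": sorted({r.tool for r in rows}),
    }
```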
Level 4: Managed — Metrics-Driven Governance
Where you are: AI governance is measured, reported, and actively managed. You know your compliance rate (percentage of AI-assisted work following procedures). You know your incident rate (errors per quarter, trending down). You track tool utilization (are attorneys using the approved tools effectively?). You report to the managing committee quarterly with data, not anecdotes. Vendor security reviews are thorough and recurring. The governance framework is aligned with NIST AI RMF or an equivalent standard.

This is where the competitive advantage starts. Level 4 firms can demonstrate to clients, regulators, and insurers that their AI governance isn't just a policy; it's an operational program with measurable outcomes. For firms responding to client RFPs that ask about AI governance, Level 4 is the answer that wins business.

What to do: Focus on optimization and integration. Integrate AI governance metrics into the firm's broader risk management framework. Automate compliance monitoring where possible. Build predictive analytics that identify governance gaps before they become incidents (a simple trend check is sketched after the indicators below). Extend governance to cover emerging use cases: agentic AI, automated document generation, client-facing AI tools.

Level 4 indicators: Quantitative metrics tracked and reported. Quarterly governance reports to leadership. NIST AI RMF or equivalent alignment. Automated compliance monitoring. Vendor security review cycle. Incident trend analysis.
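Predictive analytics can start very simply. Here is a minimal trend check over the quarterly metrics sketched above; the flagging rule is a deliberate placeholder, and a real program might fit a regression or apply a control-chart rule instead.

```python
def incident_trend(metrics_by_quarter, window=4):
    """Flag whether the incident rate is rising over recent quarters.

    metrics_by_quarter: dicts like those from quarterly_metrics(),
    ordered oldest to newest.
    """
    recent = [m["incident_rate"] for m in metrics_by_quarter[-window:]]
    if len(recent) < 2:
        return "insufficient data"
    # Count quarter-over-quarter increases; flag if most transitions rose.
    pairs = list(zip(recent, recent[1:]))
    rising = sum(later > earlier for earlier, later in pairs)
    return "worsening" if rising > len(pairs) / 2 else "stable or improving"
```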
Level 5: Optimized — Continuous Improvement and Industry Leadership
Where you are: AI governance is embedded in firm culture, not just firm policy. Every attorney understands their responsibilities. Governance adapts in real time to new regulations, new tools, and new risks. The firm contributes to industry standards through bar association committees, published guidance, and client advisories. Governance metrics show continuous improvement quarter over quarter. AI risk is integrated into the firm's enterprise risk management framework, reported alongside cybersecurity, conflicts, and financial risk.

Fewer than 5% of firms operate at Level 5. This level isn't required for most firms, but it's where firms differentiate in competitive pitches, attract AI-savvy talent, and position themselves as trusted advisors on AI governance for their clients.

What to do: Maintain it. The biggest risk at Level 5 is complacency: assuming your governance is 'done' and deprioritizing investment. AI technology and regulation evolve faster than any other area of legal practice. A Level 5 firm in Q1 can slide to Level 3 by Q4 if governance investment doesn't keep pace with the rate of change. Allocate dedicated budget for ongoing governance evolution. Participate in industry groups shaping standards. Publish your approach to attract clients who value governance maturity.

Level 5 indicators: Continuous improvement cycle documented. Industry leadership (publications, committee participation). AI risk integrated into ERM framework. Real-time governance adaptation. Culture of governance beyond compliance.
The Bottom Line

Assess honestly. Most firms are at Level 1 or 2: policy exists but isn't enforced or measured. Level 3 is the minimum viable governance posture for a firm using AI in client work. Level 4 is where competitive advantage begins and client confidence grows. Level 5 is industry leadership. The path from Level 1 to Level 3 takes 90 days with focused effort. From Level 3 to Level 4 takes 6-12 months of measurement and iteration. Don't aim for Level 5 until Level 4 is fully operational. Move up one level at a time, and don't skip steps.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
