AI governance for law firms is the set of policies, oversight structures, and risk management practices that control how attorneys and staff use artificial intelligence tools. It covers everything from which tools are approved to how output gets verified, who's responsible when something goes wrong, and how client data stays protected.

Right now, 53% of law firms have no formal AI policy at all. That's not just a governance gap; it's liability waiting to happen. Without documented policies, every attorney in your firm is making independent decisions about which AI tools to use, what data to feed them, and whether to verify output. That inconsistency is where malpractice claims, data breaches, and ethics violations originate.

Why 53% of Firms Having No Policy Is a Firm-Wide Risk

When a firm has no AI governance policy, individual attorneys default to whatever's convenient. One partner uses ChatGPT to draft motions. An associate feeds client documents into Claude. A paralegal uses an AI summarization tool that stores data on third-party servers. None of them checked whether these tools comply with confidentiality obligations, and none of them documented what they did.

This isn't hypothetical. ABA Formal Opinion 512 (2024) established that lawyers have an ethical obligation to understand the capabilities and limitations of AI tools before using them. That includes knowing where data goes, how it's stored, and whether client confidentiality is maintained. A firm without a policy is a firm where every AI interaction is an unmanaged ethics risk — and the managing partner is the one who answers for it.

ABA Formal Opinion 512: What It Actually Requires

Opinion 512 didn't create new rules. It applied existing Model Rules to AI tools and spelled out what competence, confidentiality, and supervision mean in an AI context.

Competence (Rule 1.1): Lawyers must understand the AI tools they use well enough to recognize their limitations. You don't need to be an engineer, but you need to know that AI can hallucinate, that outputs require verification, and that different tools have different reliability profiles.

Confidentiality (Rule 1.6): Client information entered into AI tools must be protected. Free-tier AI tools that use input data for model training are a confidentiality violation. Period.

Supervision (Rules 5.1/5.3): Partners must ensure associates and staff use AI tools competently. If your associate files a brief with hallucinated citations, you are accountable, not just the associate, and certainly not the AI.

Communication (Rule 1.4): Depending on context, clients may need to know that AI tools are being used in their matter, particularly when the engagement involves sensitive data or when court rules require disclosure.

The Minimum Viable AI Governance Framework

You don't need a 50-page policy. You need a framework that covers the actual risk vectors. Here's what a minimum viable AI governance policy includes:

Approved tools list. Which AI tools are authorized for use? Which are prohibited? Who approves additions? At minimum, distinguish between enterprise tools with data protection (Westlaw AI, Lexis+ AI) and consumer tools that may use input for training (free ChatGPT).

Data classification rules. What types of information can be entered into which tools? Client-identifying information, privileged communications, and confidential business data each need clear handling rules.

Verification requirements. Every AI-generated output used in a filing or client deliverable must be independently verified. Define what "verified" means — checking citations against primary sources, reviewing holdings in the actual opinions, confirming statutory currency.

Disclosure protocols. When do you disclose AI use to clients? To courts? Document the decision framework, not just individual decisions.

Incident response. What happens when someone discovers an AI error after filing? Who gets notified, what gets documented, and what remediation steps follow?
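For firms that want the approved-tools and data-classification rules to be enforceable rather than aspirational, the checklist above can be captured as a machine-readable policy that intake software or a review workflow could check against. The sketch below is illustrative only; the tool names, data classes, and contacts are hypothetical assumptions, not recommendations:

```python
# Hypothetical sketch: the five minimum-viable policy elements as a
# machine-readable structure. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class AIGovernancePolicy:
    approved_tools: dict        # tool -> data classes it may receive
    prohibited_tools: set      # never authorized for any data
    verification_steps: list   # what "verified" means for filings
    disclosure_triggers: list  # when AI use is disclosed
    incident_contacts: list    # who is notified after an AI error


policy = AIGovernancePolicy(
    approved_tools={
        # Enterprise tool with a data protection agreement
        "enterprise_research_ai": {"public", "client_confidential"},
        # Consumer free tier: public information only, never client data
        "consumer_chatbot_free_tier": {"public"},
    },
    prohibited_tools={"any_tool_that_trains_on_inputs"},
    verification_steps=[
        "check every citation against the primary source",
        "read the holdings in the actual opinions",
        "confirm statutory currency",
    ],
    disclosure_triggers=[
        "court rule requires disclosure",
        "engagement involves sensitive data",
    ],
    incident_contacts=["supervising partner", "general counsel"],
)


def tool_allowed(policy: AIGovernancePolicy, tool: str, data_class: str) -> bool:
    """Return True only if the tool is approved for this class of data."""
    if tool in policy.prohibited_tools:
        return False
    return data_class in policy.approved_tools.get(tool, set())
```

The point of the structure is the default: a tool not on the approved list is treated as prohibited, which mirrors how the written policy should read.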

Implementing Governance Without Killing Productivity

The firms getting AI governance right aren't banning AI tools — they're channeling usage through controlled pathways. Enterprise-grade tools with data protection agreements become the default. Training sessions (not just email memos) ensure everyone understands the verification requirement. Workflow integration embeds verification as a checkpoint, not an afterthought.

The most effective approach is tiered governance. Low-risk tasks (summarizing public documents, generating initial research outlines) get lighter oversight. High-risk tasks (drafting filings, analyzing privileged documents, advising on case strategy) require full verification and documentation. This prevents the governance framework from becoming so burdensome that attorneys route around it — which is worse than having no policy at all.
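The tiered approach is, at bottom, a routing rule: classify the task, then apply the matching oversight level. A minimal sketch, with task names and tier labels that are assumptions for illustration:

```python
# Hypothetical sketch of tiered governance: route a task to an oversight
# level by risk. Task names and tier labels are illustrative assumptions.
HIGH_RISK = {
    "draft_filing",
    "analyze_privileged_documents",
    "advise_case_strategy",
}
LOW_RISK = {
    "summarize_public_document",
    "draft_research_outline",
}


def oversight_level(task: str) -> str:
    """Map a task to its required oversight tier."""
    if task in HIGH_RISK:
        return "full verification and documentation"
    if task in LOW_RISK:
        return "spot-check review"
    # Unlisted tasks default to the stricter tier rather than
    # slipping through unreviewed.
    return "full verification and documentation"
```

Note the fallback: anything not explicitly classified as low risk gets full oversight, so a gap in the task list fails safe instead of failing silent.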

What Happens to Firms That Wait

The regulatory trajectory is clear: more jurisdictions are issuing AI-specific rules, more courts are requiring disclosure, and more bar associations are incorporating AI competence into CLE requirements. Firms that build governance frameworks now are establishing baseline compliance that can be adapted as requirements evolve.

Firms that wait face a different calculus. When (not if) an AI-related incident occurs, whether a hallucinated citation, a confidentiality breach, or a client complaint, the first question will be "what was your policy?" The answer "we didn't have one" transforms a manageable incident into an existential one. Insurance carriers are already asking about AI governance during renewals, and firms without policies will face higher premiums or coverage exclusions. The cost of implementing governance now is a fraction of the cost of explaining its absence later.

The Bottom Line: AI governance isn't bureaucratic overhead — it's the difference between managed innovation and unmanaged liability. The 53% of firms without policies aren't avoiding governance costs. They're deferring them, with interest. A minimum viable framework takes days to implement and covers the risks that matter most. Waiting for a perfect policy while your attorneys freelance with AI tools is the most expensive option on the table.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.