Three things happened in the past 12 months that turned AI governance from "nice to have" into "urgent." Claude Mythos proved that AI finds vulnerabilities humans miss, raising the bar for any firm using AI tools. Morgan v. V2X created the first federal framework for AI use in litigation. And the EU AI Act set an August 2026 compliance deadline that affects every firm with international clients or operations. If your firm doesn't have a written AI governance policy, you're already behind.
The problem isn't that firms don't care about governance. It's that they don't know where to start. Most governance "frameworks" from consultants are 50-page documents that nobody reads. What firms actually need is a set of specific, enforceable rules that cover how AI tools get approved, how data flows through them, and who's responsible when something goes wrong.
The Five Policies Every Firm Needs in 2026
1. AI Tool Approval Policy. No AI tool touches client data without formal vetting. This means a documented process for evaluating new tools, a registry of approved tools by use case, and a clear prohibition on unapproved tools. The Morgan v. V2X framework gives you the evaluation criteria: data isolation, audit logging, access controls, and contractual protections (see the sketch after this list for what such a registry can look like).
2. Acceptable Use Policy. A one-page document that every attorney signs. It defines what AI can and can't be used for, which tools are approved, and what data can go into them. Be specific. "Don't put confidential data into AI" is useless. "Client names, case numbers, financial data, and privileged communications must never be entered into any tool not on the approved list" is enforceable.
3. Disclosure Policy. A growing number of the 94 federal district courts now have local rules or standing orders addressing AI-generated work product, and state courts are adding their own. Your policy needs to define when and how attorneys disclose AI use, who reviews the disclosure, and how it's documented in the case file.
4. Data Handling Policy. Where does client data go when it enters an AI tool? How long is it retained? Who has access? This policy maps data flows for each approved tool and ensures they match your client confidentiality obligations.
5. Incident Response Policy. An AI tool hallucinates a citation in a brief. A vendor suffers a breach. An associate pastes privileged material into an unapproved tool. These aren't hypotheticals. Your incident response plan needs AI-specific scenarios with clear escalation paths.
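To make this less abstract, here is a minimal sketch of what an approved-tool registry could look like if you keep it machine-readable, folding in the approval criteria from policy 1, the data-flow fields from policy 4, and an escalation contact from policy 5. Everything here is illustrative: the field names, the vendor, and the contact address are hypothetical, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One registry entry. All field names are illustrative, not a standard."""
    name: str
    approved_use_cases: list[str]   # policy 1: approval is scoped by use case
    data_isolation: bool            # Morgan v. V2X-style evaluation criteria
    audit_logging: bool
    access_controls: bool
    contractual_protections: bool
    retention_days: int             # policy 4: how long the vendor keeps data
    data_destinations: list[str]    # policy 4: where client data actually flows
    incident_contact: str           # policy 5: first escalation step

def vetting_passed(tool: ApprovedTool) -> bool:
    """A tool stays on the approved list only if every criterion holds."""
    return all([tool.data_isolation, tool.audit_logging,
                tool.access_controls, tool.contractual_protections])

# Hypothetical entry for a fictional vendor:
registry = [
    ApprovedTool(
        name="ResearchAssist Enterprise",
        approved_use_cases=["case law research", "summarizing public filings"],
        data_isolation=True,
        audit_logging=True,
        access_controls=True,
        contractual_protections=True,
        retention_days=0,  # zero-retention term negotiated in the contract
        data_destinations=["vendor-hosted, US region"],
        incident_contact="ai-governance-lead@firm.example",
    ),
]

assert all(vetting_passed(t) for t in registry)
```

Nobody at the firm needs to write Python for this to work. The exercise of filling in these fields for every tool is the policy; a spreadsheet with the same columns does the job just as well.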
How to Build Policies That People Actually Follow
The biggest governance failure isn't having no policy. It's having a policy that attorneys ignore. Nearly every governance effort that starts with a 50-page document written by outside counsel ends the same way: it sits in a SharePoint folder, unread.
Effective AI policies share three traits. They're short (one page per policy, maximum). They're specific (named tools, named data categories, named consequences). And they're enforced (violations have real outcomes, and leadership follows the same rules).
Start with the acceptable use policy. Make it one page. List the approved tools by name. List the prohibited actions by example. Have every attorney sign it. Then build out from there. A one-page policy that attorneys follow beats a comprehensive framework that collects dust.
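If you want the prohibited-data rule to be more than a signature, it can also be enforced at the point of entry. Below is a deliberately naive sketch of a pre-submission screen, assuming the firm maintains its own client-name list and that matter numbers follow the federal docket format. Both patterns are hypothetical, and a production version would lean on real DLP or entity-recognition tooling rather than keyword matching.

```python
import re

# Hypothetical firm-maintained lists; real ones would sync from the DMS.
CLIENT_NAMES = {"acme holdings", "example industries"}
CASE_NUMBER = re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,5}\b", re.IGNORECASE)

def violations(prompt: str) -> list[str]:
    """Return the prohibited-data categories detected in a prompt."""
    found = []
    lowered = prompt.lower()
    if any(name in lowered for name in CLIENT_NAMES):
        found.append("client name")
    if CASE_NUMBER.search(prompt):
        found.append("case number")
    return found

problems = violations("Summarize the deposition in 1:24-cv-00123 for Acme Holdings.")
if problems:
    print("Blocked: prompt contains " + ", ".join(problems) + ".")
```

A check like this catches only the obvious cases, but it converts policy language into a control you can actually show a client or a court.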
Review and update quarterly. The AI landscape moves fast. A tool that was consumer-grade six months ago may now have enterprise features. A vendor that was compliant may have changed its terms of service. Your policies need a review cadence tied to the speed of change.
The Regulatory Pressure That's Forcing the Issue
The EU AI Act hits full enforcement in August 2026. It classifies AI systems by risk level and imposes transparency, documentation, and human oversight requirements. Any firm that serves EU clients, has EU offices, or handles matters with EU jurisdictional reach needs to comply. Fines run up to 35 million euros or 7% of global turnover, whichever is higher: for a firm with 1 billion euros in annual revenue, that is potential exposure of 70 million euros.
In the US, regulatory pressure is fragmented but accelerating. The ABA's Formal Opinion 512 (2024) addressed attorneys' ethical obligations when using AI, focusing on competence, confidentiality, and supervision. State bars in California, Florida, New York, Texas, and at least 15 others have issued their own guidance. Federal courts are adding disclosure requirements on a district-by-district basis.
The direction is clear even if the details vary by jurisdiction. Regulators expect firms to have governance in place. "We were waiting for final rules" isn't a defense when the bar files a complaint. The firms that build governance now will adapt to final regulations easily. The firms that wait will scramble.
What This Means for Your Firm
Don't try to build a perfect governance framework. Build five functional policies, one page each, and put them in place this quarter. Assign an AI governance lead, whether that's a partner, the CTO, or a dedicated role. Give them authority to approve tools, enforce policies, and update the framework.
Tie your governance to your existing risk management. AI governance isn't a separate function. It's an extension of your data security, ethics compliance, and client confidentiality obligations. The firm already has infrastructure for these. AI governance plugs into it.
The competitive angle matters too. Clients are starting to ask about AI governance in RFPs and outside counsel guidelines. A firm that can demonstrate governed workflows wins work over a firm that can't. Governance isn't just risk management. It's a business development asset.
The Bottom Line: AI governance isn't about restricting AI use. It's about making AI use defensible when a court, a client, or a bar association asks what controls your firm has in place.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
