53% of law firms still have no formal AI use policy. That's according to the ABA's 2026 Legal Technology Survey, and it's not just a gap — it's a malpractice risk walking around in a suit.

Every attorney at your firm is already using AI. ChatGPT, Claude, Copilot, Gemini — they're using it for research, drafting, brainstorming, and client communications. The question isn't whether your firm uses AI. It's whether anyone is supervising how. A comprehensive AI policy isn't optional anymore. It's the minimum standard of care, and the firms without one are one hallucinated citation away from a bar complaint.


Why 'No Policy' Is the Riskiest Policy

Without a formal AI policy, you have an informal one: whatever each attorney decides to do on their own. That means some attorneys are pasting confidential client information into ChatGPT's free tier (which can train on your inputs unless you opt out). Others are submitting AI-generated briefs without verification (see Mata v. Avianca, Wadsworth v. Walmart, and the growing list of sanctioned attorneys). Still others aren't using AI at all and billing 10x what they should for tasks a tool handles in minutes.

All three scenarios create liability. A policy doesn't prevent AI use; it channels it. It tells your attorneys which tools are approved, what data can go in, what supervision is required, and when disclosure is mandatory. That's not bureaucracy. That's risk management.

The Seven Sections Every AI Policy Needs

1. Scope and Definitions: What counts as AI? Which tools? Who does the policy apply to (partners, associates, paralegals, staff)?

2. Approved Tools: The specific tools the firm has vetted and approved. Everything else is prohibited until vetted.

3. Acceptable Use: What AI can be used for (research assistance, draft generation, document review) and what it cannot (final work product without review, client communications, billing entries).

4. Data Handling: What information can be input into AI tools. Client-identifiable information in unapproved tools should be an absolute prohibition.

5. Supervision and Verification: Every AI output must be reviewed by a licensed attorney, with specific verification steps for different work types.

6. Billing and Disclosure: How AI-assisted work is billed, and when and how to disclose AI use to clients and courts.

7. Compliance and Enforcement: Consequences for policy violations, an annual review and update schedule, and training requirements.
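If your firm tracks policy drafting in software, the same structure can double as a completeness checklist. The sketch below is a hypothetical Python representation: the seven section names come straight from the list above, while the field descriptions and the check function are illustrative, not a prescribed schema.

```python
# Hypothetical checklist form of the seven-section policy.
# Section names mirror the list above; field descriptions are illustrative.

POLICY_SECTIONS = {
    "Scope and Definitions":        ["what counts as AI", "covered tools", "covered personnel"],
    "Approved Tools":               ["vetted tool list", "default-prohibited rule"],
    "Acceptable Use":               ["permitted uses", "prohibited uses"],
    "Data Handling":                ["data tiers", "absolute prohibitions"],
    "Supervision and Verification": ["attorney review requirement", "per-task verification steps"],
    "Billing and Disclosure":       ["billing rules", "disclosure triggers"],
    "Compliance and Enforcement":   ["violation consequences", "review schedule", "training requirements"],
}

def missing_sections(draft_headings: list[str]) -> list[str]:
    """Return the required sections absent from a draft policy's headings."""
    return [s for s in POLICY_SECTIONS if s not in draft_headings]

if __name__ == "__main__":
    draft = ["Scope and Definitions", "Approved Tools", "Acceptable Use"]
    print(missing_sections(draft))  # the four sections this draft still needs
```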

Data Handling: The Section That Saves You From Malpractice

This is the section most firms get wrong, or skip entirely. Your policy needs three tiers of data classification for AI use.

Tier 1 (Unrestricted): General legal questions, hypothetical scenarios, publicly available information. Can be used with any approved tool.

Tier 2 (Anonymized): Client matters with identifying information removed. Can be used with enterprise-tier approved tools that don't train on inputs.

Tier 3 (Prohibited): Client names, case numbers, privileged communications, confidential strategy, personally identifiable information. Cannot be input into any external AI tool, approved or not.

Make the tiers simple enough that a first-year associate can apply them without asking for guidance every time. If your data handling rules require a flowchart to understand, they won't be followed.
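Firms with in-house IT can back the Tier 3 prohibition with a technical control. The sketch below is a minimal, hypothetical pre-submission filter that flags obvious Tier 3 markers (a case-number pattern, an SSN pattern, a firm-maintained client-name list) before a prompt leaves the building. The patterns and names are placeholders; a real deployment would need far broader coverage, and no filter replaces the policy itself.

```python
import re

# Hypothetical Tier 3 screen: flag obvious markers of prohibited data
# before a prompt is sent to an external AI tool. The patterns and the
# client list are illustrative placeholders, not a complete control.

CASE_NUMBER = re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,5}\b", re.IGNORECASE)  # e.g., 1:22-cv-01234
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CLIENT_NAMES = {"Acme Holdings", "Jane Doe"}  # placeholder; load from the firm's conflicts database

def tier3_flags(prompt: str) -> list[str]:
    """Return reasons a prompt appears to contain Tier 3 (prohibited) data."""
    flags = []
    if CASE_NUMBER.search(prompt):
        flags.append("possible case number")
    if SSN.search(prompt):
        flags.append("possible SSN")
    for name in CLIENT_NAMES:
        if name.lower() in prompt.lower():
            flags.append(f"client name: {name}")
    return flags

if __name__ == "__main__":
    print(tier3_flags("Summarize our strategy in Acme Holdings, 1:22-cv-01234"))
    # ['possible case number', 'client name: Acme Holdings']
```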

Billing and Disclosure: Where Ethics Meets Revenue

Your policy must address two uncomfortable questions.

Can you bill for AI-assisted work? Yes, but not at the same rate as purely human work, and not for time you didn't actually spend. If AI reduces a 4-hour research task to 30 minutes, billing 4 hours is fraud. Bill the 30 minutes of attorney time, plus reasonable time for verification and refinement. Some firms are moving to value-based billing for AI-assisted work: charge for the outcome, not the hours.

When must you disclose AI use? At minimum: when a court requires it (a growing list of jurisdictions do), when an engagement letter requires it, and when a client asks. ABA Formal Opinion 512 provides guidance, but your state bar may impose additional requirements. Build disclosure language into your engagement letter template so it's automatic, not an afterthought.
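To make the billing arithmetic concrete, here is the 4-hour example as a minimal sketch. The function and its inputs are hypothetical; the point is only that the billable figure is built from time actually spent, never from the pre-AI baseline.

```python
# Minimal sketch of the billing rule above: bill time actually spent
# (AI-assisted work plus attorney verification), not the pre-AI baseline.
# The 0.75-hour verification figure is an assumed illustration.

def billable_hours(ai_assisted_hours: float, verification_hours: float) -> float:
    """Hours that can honestly be billed for an AI-assisted task."""
    return ai_assisted_hours + verification_hours

pre_ai_baseline = 4.0               # what the research task used to take
billed = billable_hours(0.5, 0.75)  # 30 min with the tool + 45 min verifying
print(billed)                       # 1.25; billing the 4.0 baseline would be fraud
```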

Implementation: Getting Attorneys to Actually Follow the Policy

A policy nobody follows is worse than no policy: it creates a false sense of security. Implementation comes down to three requirements.

Training: Every attorney and staff member completes AI policy training within 30 days of the policy's effective date. New hires complete it in their first week. An annual refresher is required.

Accessibility: The policy lives on the firm intranet, is summarized in a one-page quick reference card, and is referenced in the employee handbook. If people have to search for the policy, they won't consult it.

Enforcement: The first violation gets a conversation. The second gets a written warning. Violations involving client data get immediate escalation to the managing partner and ethics counsel. Document everything. A policy without enforcement is just a suggestion.

The Bottom Line: An AI policy isn't a luxury for 'innovative' firms — it's the minimum standard of competent practice in 2026. The 53% of firms without one are betting that none of their attorneys will make a mistake with AI before they get around to writing a policy. That's a bet against math. Build the seven sections, implement with training and enforcement, and update annually. It takes less time than responding to a single bar complaint.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.