An AI policy for law firms is a formal document that governs how attorneys and staff can use artificial intelligence tools in legal practice. It defines which tools are approved, what data can be entered, when disclosure is required, and who's responsible for verifying AI outputs.
53% of law firms still don't have one — and that's not a technology gap, it's a malpractice risk. The ABA's Formal Opinion 512 (2024) made clear that lawyers have ethical obligations when using AI, and "we haven't gotten around to it" isn't a defense when client data ends up in a training dataset or an AI-generated hallucination makes it into a court filing.
Why 53% Without a Policy Is Dangerous
The number comes from the 2025 ABA Legal Technology Survey: 53% of law firms reported having no formal AI usage policy. That's despite 78% of Am Law 200 firms using AI in some capacity. The gap between adoption and governance is where sanctions, malpractice claims, and bar complaints live. Without a policy, any attorney at the firm can paste confidential client information into free ChatGPT (which may use it for training). Any paralegal can submit AI-generated research without verification. Any associate can file a brief with hallucinated citations. Each of these has already resulted in sanctions or disciplinary action at firms that lacked clear guidelines. A policy doesn't prevent all risk, but it establishes the firm's standard of care — and that matters when something goes wrong.
ABA Formal Opinion 512 Requirements
ABA Formal Opinion 512 (July 2024) established four core obligations for lawyers using AI.
1. Competence (Rule 1.1): lawyers must understand the capabilities and limitations of the AI tools they use; "I didn't know it could hallucinate" isn't a defense.
2. Confidentiality (Rule 1.6): client information entered into AI systems must be protected, which means understanding each tool's data retention and training policies.
3. Supervision (Rules 5.1/5.3): partners and supervisory lawyers must ensure that associates and staff using AI are doing so appropriately.
4. Communication (Rule 1.4): in some circumstances, clients should be informed that AI is being used in their matter.
Opinion 512 doesn't ban AI; it establishes that using AI without understanding it violates existing ethical rules. And because the opinion interprets the ABA Model Rules, which nearly every U.S. jurisdiction has adopted in some form, its reasoning reaches almost every American lawyer.
Components of a Minimum Viable AI Policy
A workable AI policy doesn't need to be 50 pages. It needs seven components (a minimal sketch of how they might be encoded follows the list).
1. Approved tools list: which AI platforms are authorized (e.g., Claude Pro, CoCounsel) and which are prohibited (e.g., free-tier consumer AI for client work).
2. Data classification rules: what information can be entered into AI tools (public info = yes; client PII = only in approved enterprise tools; privileged communications = restricted).
3. Verification requirements: all AI-generated legal research must have citations verified against primary sources before use.
4. Disclosure protocol: when and how to disclose AI use to courts and clients.
5. Supervision chain: who reviews AI-assisted work product and at what level.
6. Incident response: what to do when an AI error is discovered in filed work product.
7. Training requirements: mandatory training for all attorneys and staff before using approved AI tools.
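To make components 1 and 2 concrete, here is a minimal sketch of an approved-tools and data-classification table expressed in Python. The tool labels echo the article's examples, but the specific mappings and the `may_submit` helper are illustrative assumptions, not a statement about any product's actual data handling:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"          # court opinions, statutes, published commentary
    CLIENT_PII = "client_pii"  # client names, matter details, identifiers
    PRIVILEGED = "privileged"  # attorney-client communications, work product

# Hypothetical policy table: which data classes each tool may receive.
APPROVED_TOOLS = {
    "claude_enterprise": {DataClass.PUBLIC, DataClass.CLIENT_PII},
    "cocounsel": {DataClass.PUBLIC, DataClass.CLIENT_PII},
    "consumer_free_tier": set(),  # prohibited for any client work
}

def may_submit(tool: str, data: DataClass) -> bool:
    """Allow a submission only if the policy authorizes this data class for this tool."""
    return data in APPROVED_TOOLS.get(tool, set())

assert may_submit("claude_enterprise", DataClass.PUBLIC)
assert not may_submit("claude_enterprise", DataClass.PRIVILEGED)  # restricted by default
assert not may_submit("consumer_free_tier", DataClass.CLIENT_PII)
```

Even if the firm never automates enforcement, writing the rules in this yes/no form exposes ambiguities ("restricted" by whom? approved for which matters?) before an associate has to resolve them under deadline pressure.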
The Disclosure Landscape in 2026
Over 300 federal judges have issued standing orders or local rules requiring AI disclosure in court filings. These orders vary in scope — some require disclosure only when AI "substantially" contributed to the filing, others require disclosure for any AI use beyond basic tools like spell-check. Key patterns: most orders require a certification that all citations have been verified by a human attorney. Some require identification of the specific AI tool used. A few require disclosure to opposing counsel as well as the court. Your firm's AI policy must include a disclosure decision tree — a clear framework for determining when disclosure is required in each jurisdiction where the firm practices. This isn't optional anymore. Filing without required disclosure when a standing order exists is itself sanctionable.
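Because the triggers differ court by court, the decision tree reduces naturally to a per-jurisdiction lookup plus two checks. The sketch below is a hedged illustration: the court keys, trigger categories, and order terms are hypothetical placeholders, and any real version would be populated from the actual standing orders where the firm practices:

```python
from dataclasses import dataclass

@dataclass
class Filing:
    court: str
    used_ai: bool
    substantial_ai_contribution: bool
    citations_human_verified: bool

# Hypothetical registry of standing-order terms, keyed by court.
STANDING_ORDERS = {
    "district_a": {"trigger": "any_use", "certify_citations": True},
    "district_b": {"trigger": "substantial", "certify_citations": True},
}

def disclosure_required(f: Filing) -> bool:
    """First branch: does this filing trigger the court's disclosure rule?"""
    order = STANDING_ORDERS.get(f.court)
    if order is None or not f.used_ai:
        return False
    if order["trigger"] == "any_use":
        return True
    return f.substantial_ai_contribution  # "substantial contribution" trigger

def cleared_to_file(f: Filing) -> bool:
    """Second branch: block filing until human citation verification is done."""
    order = STANDING_ORDERS.get(f.court)
    if order and f.used_ai and order["certify_citations"]:
        return f.citations_human_verified
    return True

brief = Filing("district_b", used_ai=True,
               substantial_ai_contribution=False, citations_human_verified=True)
print(disclosure_required(brief))  # False: below this court's "substantial" threshold
print(cleared_to_file(brief))      # True: citations were human-verified
```

The structural point is the registry: someone at the firm has to own keeping it current as judges issue and amend orders, which is exactly the kind of responsibility the policy's supervision chain should assign.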
Implementation: The 30-Day Timeline
You don't need a committee. You need a deadline.
Week 1: Draft the policy using the seven components above. Assign one attorney as AI Policy Lead. Inventory which AI tools are already being used at the firm (you'll be surprised).
Week 2: Review with the managing partner and ethics counsel. Classify all current AI tools as approved, restricted, or prohibited. Set up enterprise accounts for approved tools (Claude Team/Enterprise, CoCounsel).
Week 3: Distribute the policy to all attorneys and staff. Conduct a 90-minute training session covering the policy, approved tools, and verification requirements.
Week 4: Implement monitoring. Review a sample of AI-assisted work product. Collect feedback and adjust the policy.
The firms that implemented policies in 2024-2025 are now iterating on versions 2.0 and 3.0. The firms starting now are already two years behind.
The Bottom Line: An AI policy defines which tools are approved, what data can be entered, and who verifies AI outputs. 53% of firms don't have one despite ABA Opinion 512 requiring competence, confidentiality, supervision, and communication when using AI. A minimum viable policy has seven components and can be implemented in 30 days. Every day without one is unmanaged risk.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
