Every law firm needs an AI policy. Most don't have one. A 2025 ABA survey found that fewer than 30% of law firms had a written AI use policy — despite 75% of lawyers reporting they use AI in some capacity. That gap between usage and governance is a liability waiting to detonate.
Writing an AI policy isn't complicated. Enforcing it is. This guide gives you the step-by-step framework — what to include, what to skip, and how to write a policy that actually gets followed instead of filed away and forgotten.
Step 1: Define the Scope — What Counts as 'AI Use'
Your policy must start with a clear definition of what tools it covers. This is where most firms fail immediately — they either define AI too broadly (covering spell-check and autocomplete) or too narrowly (covering only ChatGPT while ignoring AI features in Westlaw, Clio, and Microsoft 365).
The practical definition: Your AI policy should cover any tool that generates, analyzes, or processes legal work product using artificial intelligence or machine learning. This includes:
- General AI assistants: ChatGPT, Claude, Gemini, Copilot
- Legal AI platforms: Harvey, CoCounsel, Lexis+ AI, Luminance, Spellbook
- AI features in existing tools: Westlaw AI, Clio Duo, Microsoft Copilot, Relativity aiR
- AI-powered legal research: Perplexity, Google AI search when used for legal research
What to exclude: Spell-check, autocomplete, grammar tools (Grammarly), standard search engines, and predictive text in email. Drawing the line here keeps the policy focused on AI that generates substantive content rather than AI that assists with mechanics.
Step 2: Establish Approved and Prohibited Uses
Create three categories: Approved, Approved with Restrictions, and Prohibited.
Approved uses (green light):
- Producing first drafts of internal memos and research summaries
- Summarizing long documents and deposition transcripts
- Organizing and categorizing case facts
- Generating initial outlines for briefs and motions
- Brainstorming legal arguments and strategies
Approved with restrictions (yellow light):
- Drafting court filings (requires partner review and citation verification)
- Analyzing contracts (requires human verification of all flagged provisions)
- Client communications (requires attorney review before sending)
- Due diligence document review (requires quality control sampling)
Prohibited uses (red light):
- Submitting any AI-generated content to a court without attorney review and citation verification
- Inputting client confidential information into free-tier AI tools
- Using AI to make final decisions on legal strategy without human judgment
- Relying on AI-generated citations without verification in a legal database
- Using AI tools not approved by the firm's technology committee
The key principle: AI generates, humans verify and decide. No AI output goes to a client, court, or opposing counsel without attorney review.
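The three-tier scheme above lends itself to a simple lookup that a firm's intake form or internal tooling might implement. The sketch below is illustrative only: the task names, categories, and the default-to-prohibited rule are assumptions, not items from any particular firm's policy.

```python
from enum import Enum

class UseStatus(Enum):
    APPROVED = "approved"
    RESTRICTED = "approved with restrictions"
    PROHIBITED = "prohibited"

# Hypothetical mapping of task types to policy status; a real policy
# would enumerate these in an appendix maintained by the tech committee.
USE_POLICY = {
    "internal_memo_draft": UseStatus.APPROVED,
    "deposition_summary": UseStatus.APPROVED,
    "court_filing_draft": UseStatus.RESTRICTED,    # partner review + citation check
    "client_communication": UseStatus.RESTRICTED,  # attorney review before sending
    "final_strategy_decision": UseStatus.PROHIBITED,
}

def check_use(task: str) -> UseStatus:
    """Tasks the policy has not classified default to prohibited."""
    return USE_POLICY.get(task, UseStatus.PROHIBITED)
```

The design choice worth copying is the default: an unclassified task is treated as prohibited until someone classifies it, which mirrors the "AI generates, humans verify and decide" principle.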
Step 3: Write the Confidentiality and Data Protection Section
This is the section that protects you from malpractice claims and ethics complaints.
Required provisions:
1. Approved tools only: Identify specific AI tools approved for use with client data. Only tools with enterprise-grade data protection terms (no training on user inputs, data encryption, SOC 2 compliance) should be approved.
2. Data classification: Define what client information can and cannot be entered into AI tools. A simple framework:
- Public information (publicly filed documents, published decisions): any approved AI tool
- Confidential information (client communications, draft documents, case strategy): enterprise-tier AI tools only
- Highly sensitive information (trade secrets, merger discussions, sealed documents): no AI tools without specific client consent
3. Client consent: Require informed client consent for AI use on highly sensitive matters. Include AI disclosure language in engagement letters.
4. No free tiers: Explicitly prohibit use of free AI tiers for any client-related work. The risk of data exposure and privilege waiver isn't worth the $20/month savings.
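The data classification rule in provision 2 can be expressed as a small decision function, which is one way to embed the policy into a firm's document management workflow. This is a minimal sketch under assumed tier names ("public", "confidential", "highly_sensitive") and tool labels; it is not a substitute for the written policy.

```python
# Hypothetical tiers mirroring the three-level framework above.
# "any_approved" stands in for "any tool on the firm's approved list".
ALLOWED_TOOLS = {
    "public": {"any_approved"},
    "confidential": {"enterprise"},
    "highly_sensitive": set(),  # no AI tools without specific client consent
}

def may_input(classification: str, tool_tier: str, client_consent: bool = False) -> bool:
    """Return True if data of this classification may go into a tool of this tier."""
    if classification == "highly_sensitive":
        return client_consent  # consent required, per provision 3
    allowed = ALLOWED_TOOLS.get(classification, set())
    # Unknown classifications get an empty allow-set, i.e. default deny.
    return tool_tier in allowed or "any_approved" in allowed
```

As with the use-status lookup, the safe default matters most: anything not explicitly classified is denied, and the highly sensitive tier is gated on consent rather than tool quality.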
Step 4: Build the Verification and Quality Control Framework
Your policy must specify how AI output gets verified before use. Vague instructions like 'review AI output carefully' are useless. Be specific.
Citation verification protocol:
- Every case citation generated or suggested by AI must be verified in Westlaw, Lexis, or another authoritative legal database
- Verification includes confirming that (1) the case exists, (2) the citation is correct, (3) the holding matches what the AI claims, and (4) the case has not been overruled or distinguished
- The verifying attorney must initial or log each verified citation
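A verification log can be as simple as one record per citation with a flag for each of the four checks. The sketch below assumes hypothetical field names; any structure that captures the same four checks plus who verified and when would satisfy the protocol.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationCheck:
    """One log entry per AI-suggested citation; field names are illustrative."""
    citation: str
    verified_in: str            # e.g. "Westlaw" or "Lexis"
    case_exists: bool = False           # check (1)
    citation_correct: bool = False      # check (2)
    holding_matches: bool = False       # check (3)
    still_good_law: bool = False        # check (4): not overruled or distinguished
    verified_by: str = ""
    verified_on: date = field(default_factory=date.today)

    def passes(self) -> bool:
        """All four checks must pass before the citation goes into a filing."""
        return all([self.case_exists, self.citation_correct,
                    self.holding_matches, self.still_good_law])
```

Defaulting every check to False forces the verifying attorney to affirmatively record each confirmation rather than inherit a passing state.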
Legal analysis verification:
- AI-generated legal analysis must be reviewed by an attorney with subject-matter expertise
- The reviewing attorney must confirm that the analysis accounts for jurisdiction-specific rules, recent developments, and case-specific facts
- For court filings, a second attorney should review AI-assisted sections
Quality control sampling:
- For high-volume AI-assisted tasks (document review, contract analysis), implement random quality checks on at least 10% of AI-processed documents
- Track error rates to identify when AI tools are underperforming and need recalibration or replacement
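The sampling step is straightforward to automate. This is a minimal sketch of random 10% sampling and error-rate tracking; the function names and the at-least-one-document floor are assumptions about how a firm might implement the rule, not part of the policy itself.

```python
import random

def sample_for_qc(doc_ids, rate=0.10, seed=None):
    """Pick a random sample of AI-processed documents for human review.

    Always samples at least one document, even for very small batches.
    A fixed seed makes the sample reproducible for audit purposes.
    """
    rng = random.Random(seed)
    k = max(1, round(len(doc_ids) * rate))
    return rng.sample(list(doc_ids), k)

def error_rate(review_results):
    """review_results: list of bools, True = reviewer found an AI error."""
    return sum(review_results) / len(review_results) if review_results else 0.0
```

Tracking the error rate over successive batches is what turns sampling into the early-warning signal the policy calls for: a rising rate is the trigger for recalibration or replacement.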
Step 5: Compliance, Training, and Updates
A policy nobody reads is worse than no policy — it creates liability without protection.
Training requirements:
- All attorneys and staff must complete AI policy training within 30 days of hire and annually thereafter
- Training must cover approved tools, prohibited uses, verification requirements, confidentiality obligations, and disclosure requirements
- Training records must be maintained
Compliance monitoring:
- Designate a partner-level AI compliance officer responsible for policy enforcement
- Implement periodic audits of AI tool usage across the firm
- Create a reporting mechanism for policy violations, without punitive consequences for good-faith mistakes
Update schedule:
- Review and update the AI policy quarterly; the technology and regulatory landscape changes too fast for annual reviews
- Monitor judicial standing orders, state bar ethics opinions, and regulatory developments that affect AI use
- Communicate policy updates to all personnel within 7 days of adoption
The enforcement reality: The hardest part isn't writing the policy — it's getting busy lawyers to follow it. Keep the policy short (under 5 pages), make the rules clear and actionable, and build verification steps into existing workflows rather than creating new ones. A policy that adds 5 minutes to existing processes gets followed. A policy that adds 30 minutes gets ignored.
The Bottom Line: Write it in a day, enforce it every day. Your AI policy needs five components: scope definition, approved/prohibited uses, confidentiality rules, verification protocols, and training requirements. Keep it under 5 pages, review it quarterly, and embed compliance into workflows rather than adding separate compliance steps. The firms that get this right will use AI confidently. The firms that don't will either avoid AI entirely or use it recklessly — both paths lead to competitive disadvantage.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
