In January 2026, a midsize litigation firm in Texas discovered that 14 of its 22 attorneys were using ChatGPT on personal accounts to draft client correspondence. No one had told them not to. No one had told them how. The firm had no AI acceptable use policy, and by the time leadership found out, six months of client data had passed through consumer-grade AI tools with zero confidentiality protections.

That's not an edge case. A 2025 Thomson Reuters survey found that 51% of law firms had no formal AI use policy in place. The attorneys weren't reckless. They were doing what made sense in the absence of guidance. The problem wasn't the tool. It was the vacuum.

An AI acceptable use policy (AUP) is the single most important governance document a law firm can produce right now. Not because it stops all risk, but because it draws the line between governed use and ungoverned exposure. Courts are already looking at whether firms had policies in place when things go wrong. If you don't have one, you're building your defense after the breach.

Why a One-Page Policy Beats a 40-Page Manual

The firms that get this right don't produce compliance binders. They produce a single, clear document that every attorney can read in five minutes and actually follow.

A useful AUP covers three things: what tools are approved, what data can go into them, and what review is required before anything AI-assisted goes out the door. That's it. The 40-page version sits in a SharePoint folder and gets ignored. The one-page version gets pinned to the intranet and referenced weekly.

The goal isn't to restrict AI use. It's to channel it. Attorneys are going to use these tools whether you have a policy or not. The AUP makes the difference between supervised leverage and shadow AI. Shadow AI is already one of the biggest unmanaged risks in legal practice, and a clear AUP is the first line of defense against it.

What Your AI Acceptable Use Policy Must Include

Approved tools list. Name every AI tool the firm has vetted and approved. Include the specific tier or account type. "ChatGPT" is not sufficient. "ChatGPT Team with enterprise data protections, accessed via firm SSO" is. Consumer subscriptions on personal accounts should be explicitly prohibited for any work involving client data.
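For firms that want the approved list to live somewhere machine-readable rather than in a memo, a registry entry needs only a handful of fields. Here is a minimal sketch in Python; the schema, field names, and the example entry are illustrative assumptions, not a vetted configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    """One entry in a hypothetical approved-tools registry."""
    name: str           # product AND tier, never just the brand name
    access: str         # required access path, e.g. firm SSO
    max_data_tier: str  # highest data classification the tool may touch

# Illustrative entry only; your firm's vetted list will differ.
APPROVED_TOOLS = [
    ApprovedTool(
        name="ChatGPT Team with enterprise data protections",
        access="firm SSO",
        max_data_tier="internal work product",
    ),
]
```

The point of the structure is that "ChatGPT" alone can't be entered: the tier and access path are required fields, not afterthoughts.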

Data classification rules. Define what can and can't go into AI tools. At minimum, three tiers: public information (fine), internal work product (approved tools only), and confidential client data (restricted or prohibited depending on tool). The Morgan v. V2X protective order framework is the benchmark here. That case established that AI tools processing litigation materials must meet the same confidentiality standards as any other vendor. Your AUP should reference this standard directly.
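If you want the classification rules to be unambiguous, they reduce to a single comparison: a tool may only process data at or below the sensitivity tier it was vetted for. A minimal sketch, assuming the three tiers above; the names and the gate function are hypothetical, and nothing here is mandated by Morgan v. V2X.

```python
from enum import IntEnum

class DataTier(IntEnum):
    """The policy's three tiers; a higher value means more sensitive."""
    PUBLIC = 0        # public information: fine in any approved tool
    INTERNAL = 1      # internal work product: approved tools only
    CONFIDENTIAL = 2  # confidential client data: restricted or prohibited

def may_use(tool_max_tier: DataTier, data_tier: DataTier) -> bool:
    """Permit a tool only for data at or below its vetted tier."""
    return data_tier <= tool_max_tier

# A tool vetted for internal work product can take public material,
# but confidential client data is blocked.
assert may_use(DataTier.INTERNAL, DataTier.PUBLIC)
assert not may_use(DataTier.INTERNAL, DataTier.CONFIDENTIAL)
```

Encoding the rule this way means "can this go into that tool?" is never a judgment call made at 11 p.m. before a filing.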

Output review requirements. Every piece of AI-assisted work product must be reviewed by a licensed attorney before it goes to a client or a court. This isn't optional. After Mata v. Avianca in 2023, where attorneys submitted ChatGPT-fabricated case citations to a federal court and were sanctioned, courts expect verification. Your policy should specify who reviews, what they check for, and how they document the review.
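The documentation piece is the part firms most often skip, and it is the easiest to systematize. Below is a sketch of what a review record could capture, assuming the three elements named above (who reviewed, what they checked, when); the field names and example values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewRecord:
    """Documentation for one attorney review of AI-assisted work product."""
    document_id: str   # which draft or filing was reviewed
    reviewer: str      # the licensed attorney responsible
    checks: list[str]  # what was verified before release
    reviewed_on: date = field(default_factory=date.today)

# Hypothetical example; the post-Mata baseline is confirming that
# every cited authority actually exists and says what the draft claims.
record = ReviewRecord(
    document_id="motion-draft-0412",
    reviewer="J. Smith",
    checks=["all citations verified to exist", "quotations checked against source"],
)
```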

Disclosure obligations. At least 17 federal courts now require disclosure of AI use in filings. Your AUP should include a default position: disclose when required, and when in doubt, disclose. The AI disclosure rules vary by district, and your policy should point attorneys to the current requirements for every jurisdiction they practice in.

Consequences for violations. A policy without teeth isn't a policy. Define what happens when someone uses an unapproved tool or puts confidential data into a consumer AI product. Progressive discipline, from retraining to formal reprimand, signals that the firm takes this seriously.

How to Roll It Out Without Killing Adoption

The biggest mistake firms make is treating the AUP as a prohibition document. Attorneys read it, see a list of "don'ts," and either ignore it or stop using AI entirely. Both outcomes are bad.

Frame the policy as enablement. The opening line should be something like: "This firm encourages the responsible use of AI tools to improve efficiency and client service." Then define what responsible means. The tone matters as much as the substance.

Roll it out with a 30-minute training session, not a compliance email. Walk through the approved tools, show how to use them within the policy, and answer questions live. Firms that pair the AUP with hands-on AI training for attorneys see 3x higher compliance rates than those that just circulate a PDF.

Review the policy quarterly. AI tools change fast. Claude, GPT-4o, and Gemini have all updated their enterprise data handling terms multiple times in 2025-2026. A policy written in January that hasn't been updated by July is already outdated. Assign one person to own the review cycle.

What This Means for Your Firm

If you don't have an AI acceptable use policy today, you're exposed on multiple fronts: ethics complaints, malpractice claims, data breaches, and judicial sanctions. The bar associations in New York, California, Florida, and Texas have all issued guidance expecting firms to have AI governance in place. "We didn't have a policy" is not a defense. It's an admission.

Start with the one-page version. List your approved tools, define your data boundaries, require human review of all output, and set disclosure defaults. Get it signed by every attorney. Then build from there.

The firms that treat AI governance as a living practice rather than a one-time document are the ones building real operational compound interest. The AUP is where that practice starts.

The Bottom Line: An AI acceptable use policy isn't a compliance checkbox. It's the difference between your firm governing AI and AI governing your firm's risk exposure.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.