Shadow AI is the use of consumer AI tools by attorneys and staff without firm knowledge or approval, and 53% of law firms have no AI policy to prevent it. According to Clio's 2024 Legal Trends Report, the majority of law firms operate without formal AI governance, which means every attorney is making independent decisions about which AI tools to use, what data to enter, and whether to verify output.

This isn't a theoretical risk. A 2024 Thomson Reuters survey found that 52% of lawyers using AI at work were doing so without their firm's knowledge. They're using free ChatGPT on personal devices, pasting client emails into Claude, running contract language through Gemini. Every one of those interactions is an unmanaged liability — data exposure, privilege waiver, malpractice risk — and the managing partner doesn't even know it's happening.

What Shadow AI Actually Looks Like

Shadow AI isn't attorneys downloading sketchy software. It's mundane, everyday behavior that happens when there's no policy guiding it.

The associate drafting a motion at midnight pastes the client's fact pattern into free ChatGPT to get a first draft. Client-identifying information enters OpenAI's systems. No data protection agreement exists. The associate doesn't think twice about it — they've been using ChatGPT for personal tasks for two years.

The paralegal summarizing deposition transcripts copies 50 pages of testimony into Claude to get a summary. The testimony contains privileged attorney-client communications embedded in the transcript. The paralegal doesn't flag this because nobody told them not to.

The partner preparing a pitch feeds competitor intelligence and client relationship details into an AI tool to generate a business development memo. Client names, deal values, and confidential business information now sit on a third-party server.

In every case, the individual believes they're being efficient. They're not wrong about the efficiency — they're wrong about the risk.

The Three Risks That Keep Malpractice Insurers Up at Night

1. Data leaks through consumer AI. Free-tier AI tools may use inputs for model training. Client information entered into these tools could theoretically surface in responses to other users. Even with opt-out settings, data passes through third-party servers without the protections of a data protection agreement. This is a Rule 1.6 confidentiality breach waiting to surface in discovery.

2. Privilege waiver. Attorney-client privilege depends on confidentiality, and voluntary disclosure to a third party can destroy it. When privileged communications enter a consumer AI tool, that disclosure may waive privilege. One unauthorized AI interaction can expose an entire matter's privilege protections. Opposing counsel who learns about shadow AI use has a roadmap for a privilege challenge that could crack open your case file.

3. Unverified work product. Shadow AI users typically skip verification because they're using AI as a shortcut, not as a formal part of their workflow. When there's no policy requiring verification, there's no verification. Hallucinated citations, incorrect legal analysis, and fabricated facts enter the firm's work product without any quality checkpoint. The firm doesn't know the work was AI-generated, so it doesn't know to verify it.

Why Attorneys Use AI in the Shadows

Attorneys don't hide AI use because they're reckless. They hide it because the firm hasn't given them a legitimate alternative.

No approved tools. If the firm hasn't adopted enterprise AI tools, attorneys who want AI assistance have nowhere to go except consumer platforms. The demand for AI assistance is real — drafting, research, and summarization are tasks where AI delivers genuine time savings. When the firm doesn't provide the tool, attorneys find their own.

Fear of judgment. In many firms, admitting to AI use signals incompetence. "You needed a computer to write that brief?" is a real concern, especially for junior associates trying to demonstrate their value. Until firm leadership explicitly endorses AI use, many attorneys will continue using it quietly.

No policy, no guidance. Without a written policy, attorneys don't know what's allowed. The absence of prohibition feels like permission. The associate who uses ChatGPT isn't violating a rule because no rule exists — which is exactly the problem. The firm's silence on AI is an implicit authorization of whatever happens next.

Speed culture. BigLaw's billing pressure creates incentives to work faster. AI tools deliver immediate productivity gains. When the choice is between spending 4 hours on manual research or 45 minutes with AI, the economics push toward AI — with or without firm approval.

Detecting Shadow AI in Your Firm

You can't manage what you can't see. Here's how firms are identifying shadow AI use:

Anonymous surveys. Ask your attorneys and staff: "Have you used AI tools for client work in the past 6 months? If so, which tools?" Make it anonymous and non-punitive. The goal is data, not discipline. Firms that have run these surveys consistently find that 40-60% of attorneys are using AI — far more than management assumed.

Network monitoring. IT departments can identify traffic to AI platform domains (chat.openai.com, claude.ai, gemini.google.com) from firm networks and devices. This doesn't require reading the content — just tracking which platforms are being accessed. Some firms are surprised to find hundreds of daily connections to AI platforms.
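To make this concrete, here is a minimal sketch of that kind of log analysis, assuming the firm's web proxy can export a CSV with timestamp, user, and destination host columns. The file name, column names, and domain list are illustrative and will vary by proxy vendor; chatgpt.com is included because OpenAI now serves ChatGPT from that domain.

```python
# Minimal sketch: count connections to consumer AI platforms from a
# hypothetical proxy log export (proxy_log.csv with "timestamp",
# "user", and "host" columns -- names will vary by vendor).
import csv
from collections import Counter

AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")

def is_ai_host(host: str) -> bool:
    """True if host is one of the AI domains or a subdomain of one."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)

connections = Counter()  # connections per platform
users = {}               # distinct users per platform

with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if is_ai_host(row["host"]):
            connections[row["host"]] += 1
            users.setdefault(row["host"], set()).add(row["user"])

for host, count in connections.most_common():
    print(f"{host}: {count} connections, {len(users[host])} distinct users")
```

Note that the script counts connections and distinct users per platform without touching request content, which keeps the exercise on the visibility side of the line drawn above.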

Work product analysis. AI-generated text has detectable patterns — consistent formatting, specific phrase structures, and a tendency toward comprehensive but generic analysis. Partners reviewing associate work can learn to spot AI-assisted drafts. AI detection tools exist but aren't reliable enough for definitive conclusions.

Exit interviews and matter reviews. When attorneys leave or matters close, debrief sessions can uncover AI usage that wasn't documented. This retrospective data informs policy development even if it can't fix past exposure.

From Shadow AI to Sanctioned AI: Building the Policy

The fix isn't banning AI. It's giving attorneys a legitimate, secure pathway to use it.

Step 1: Adopt enterprise AI tools. Give every attorney access to at least one enterprise AI platform — Claude Team, ChatGPT Team, or a legal-specific tool like CoCounsel. The cost is $25-300/user/month. The alternative — unmanaged consumer AI — costs exponentially more when something goes wrong.

Step 2: Publish a clear AI policy. What tools are approved. What data can be entered into which tools. What verification is required. When disclosure is necessary. Make it simple — a 2-page document covers everything. Complex policies get ignored.

Step 3: Train everyone. Not a one-time CLE. Ongoing, practical training that shows attorneys how to use approved tools effectively. If the approved tool is harder to use than the shadow tool, attorneys will keep using the shadow tool. The approved path must be easier than the unauthorized path.

Step 4: Amnesty period. Announce the new policy with a grace period. "Everything before this date is past. Going forward, here are the rules." Punishing past shadow AI use when no policy existed is counterproductive and drives usage further underground.

Step 5: Monitor and adapt. Check usage data quarterly. Survey attorneys on whether the approved tools meet their needs. Update the policy as new tools emerge and courts adopt new rules. AI governance isn't a one-time project — it's an ongoing function.
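For the quarterly check, a roll-up of the same hypothetical proxy_log.csv from the monitoring sketch above gives a per-platform trend line. Timestamps are assumed to be ISO 8601; everything else is illustrative.

```python
# Minimal sketch: quarterly connection counts per AI platform,
# reusing the hypothetical proxy_log.csv from the monitoring example.
import csv
from collections import Counter
from datetime import datetime

AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")

quarterly = Counter()

with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        host = row["host"].lower()
        if not any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            continue
        ts = datetime.fromisoformat(row["timestamp"])  # e.g. "2025-04-17T09:30:00"
        quarterly[(f"{ts.year}-Q{(ts.month - 1) // 3 + 1}", host)] += 1

# One line per quarter/platform pair, e.g. "2025-Q1  claude.ai: 312"
for (quarter, host), count in sorted(quarterly.items()):
    print(f"{quarter}  {host}: {count}")
```

Rising consumer-AI counts after the policy launch suggest the approved tools aren't meeting demand; falling counts suggest the sanctioned path is winning.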

The Bottom Line: Shadow AI is already happening at your firm. The 53% of firms without policies aren't preventing AI use; they're preventing visibility into it. The fix is providing enterprise tools with clear rules, not pretending the problem doesn't exist.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.