A 2025 survey by the International Legal Technology Association found that 68% of attorneys at Am Law 200 firms admitted to using AI tools that weren't approved by their firm's IT department. Not experimental tinkering. Regular use on client matters. That's shadow AI, and it's the single largest ungoverned risk sitting inside most law firms right now.

Shadow AI isn't about bad actors. It's about attorneys solving real workflow problems with the tools available to them. When the firm doesn't provide a governed AI solution for research, drafting, or document review, attorneys go find one themselves. They use personal ChatGPT accounts, free-tier tools, and browser extensions that process text through unknown servers. Client data goes in. Nobody tracks where it goes next.

The problem isn't adoption. It's invisible adoption. You can't govern what you can't see.


How Shadow AI Enters a Law Firm

It starts with a single attorney pasting a contract clause into ChatGPT to get a quick redline suggestion. It works. They do it again. They tell a colleague. Within a month, half the corporate group is using a consumer AI tool with zero data protections for client work.

The most common shadow AI tools in law firms are consumer-tier chatbots (ChatGPT free/Plus, Google Gemini, Perplexity), browser extensions that summarize or rewrite text, AI-powered email plugins, and free document review tools that upload files to third-party servers.

None of these tools come with a Business Associate Agreement (BAA). None of them guarantee data isolation. Most consumer-tier terms of service allow input data to be used for model training unless the user opts out. When attorneys feed client data into these tools, they're potentially breaching ABA Model Rule 1.6 on confidentiality with every prompt.

The entry point is almost always convenience. Firm-approved tools are slow, locked behind IT tickets, or don't exist yet. Shadow AI fills the gap in hours.

The Specific Risks Most Firms Underestimate

Confidentiality breach is the obvious one. Client data entered into consumer AI tools leaves the firm's control. If that data surfaces in another user's output or a training dataset, the breach is real and the liability follows.

Privilege waiver is the less obvious but more damaging risk. Attorney-client privilege requires that communications remain confidential. Feeding privileged information into a third-party AI tool with no confidentiality agreement is a voluntary disclosure to a third party. Courts in multiple jurisdictions have held that voluntary disclosure to third parties waives privilege, regardless of intent.

Compliance violations compound the problem. Firms handling healthcare data face HIPAA exposure. Firms with EU clients face GDPR exposure. Firms in litigation bound by protective orders, as in Morgan v. V2X, face contempt exposure. Shadow AI tools don't know about any of these constraints.

Malpractice exposure rounds it out. When an attorney relies on an unapproved tool that produces flawed output, the firm carries the liability. Courts have already sanctioned attorneys for filing briefs with AI-fabricated citations, as in Mata v. Avianca. The malpractice risk isn't theoretical anymore.

How to Find Shadow AI in Your Firm

Start with network traffic analysis. Your IT team can identify outbound connections to known AI service domains (chatgpt.com, api.openai.com, gemini.google.com, claude.ai, perplexity.ai). This won't catch everything, but it establishes a baseline.
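The baseline check can be as simple as matching exported DNS or proxy logs against a watchlist. Here is a minimal sketch, assuming a log export with one hostname per line; the domain list and log format are assumptions to adapt to your environment.

```python
# Sketch: count hits against known consumer AI domains in a hostname log.
# The AI_DOMAINS list is illustrative, not exhaustive -- new tools launch weekly.
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
    "perplexity.ai",
}

def scan_log(lines):
    """Count hits against the AI domain list, matching subdomains too."""
    hits = Counter()
    for line in lines:
        host = line.strip().lower()
        for domain in AI_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits[domain] += 1
    return hits

sample = ["api.openai.com", "intranet.firm.local", "www.perplexity.ai"]
print(scan_log(sample))  # flags the two AI hits, ignores internal traffic
```

In practice you would run this over weeks of logs and sort by hit count: the departments generating the most traffic are where the governance gap is widest.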

Run a confidential usage survey. Make it anonymous. Ask attorneys what AI tools they're using, how often, and for what tasks. The anonymity matters because attorneys won't disclose shadow AI use if they think they'll be punished. You need honest data, not compliance theater.

Review browser extension inventories across firm devices. Chrome and Edge extensions that interact with text content are the most common shadow AI vectors. Many of these extensions send highlighted text to external APIs for processing.
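If you don't have endpoint management tooling that reports extensions centrally, a script can inventory a profile directly. This is a rough sketch assuming Chrome's on-disk layout (an `Extensions` folder containing `<id>/<version>/manifest.json`); the profile path shown is the Windows default and will differ on macOS or managed deployments.

```python
# Sketch: list installed Chrome/Edge extensions by reading manifest.json files,
# then flag those whose permissions let them read page content.
import json
from pathlib import Path

def list_extensions(profile_dir):
    """Yield (extension_id, name, permissions) for each installed extension."""
    ext_root = Path(profile_dir) / "Extensions"
    for manifest in ext_root.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        ext_id = manifest.parent.parent.name
        yield ext_id, data.get("name", "?"), data.get("permissions", [])

# Permissions that allow reading page or selection text -- the common
# shadow-AI vector for "summarize this" extensions.
SUSPECT = {"tabs", "activeTab", "<all_urls>", "scripting"}

profile = r"C:\Users\someone\AppData\Local\Google\Chrome\User Data\Default"
for ext_id, name, perms in list_extensions(profile):
    if SUSPECT & set(perms):
        print(ext_id, name, sorted(SUSPECT & set(perms)))
```

A flagged permission doesn't prove the extension ships text to an external API, but it tells you which extensions to investigate first.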

Check expense reports and credit card statements for personal AI subscriptions. Attorneys billing $20/month to ChatGPT Plus for work purposes are telling you exactly where the governance gap is.
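Expense data is usually available as a CSV export, so this check automates easily. A minimal sketch, assuming an export with `description` and `amount` columns (both field names are assumptions about your accounting system):

```python
# Sketch: flag expense line items mentioning AI vendors.
# The vendor keyword list is illustrative and should be kept current.
import csv
import io

AI_VENDORS = ("openai", "chatgpt", "anthropic", "claude", "perplexity", "gemini")

def flag_ai_expenses(csv_text):
    """Return expense rows whose description mentions a known AI vendor."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        description = row["description"].lower()
        if any(vendor in description for vendor in AI_VENDORS):
            flagged.append(row)
    return flagged

export = "description,amount\nChatGPT Plus subscription,20.00\nTaxi to court,35.00\n"
print(flag_ai_expenses(export))  # only the ChatGPT Plus line is flagged
```

Keyword matching on free-text descriptions will miss reimbursements filed under vague labels, so treat hits as a floor, not a census.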

For a complete framework, see our guide on how to audit AI tools across your firm.

What This Means for Your Firm

Don't start with prohibition. Start with provision. The number one driver of shadow AI is that firms don't offer approved alternatives. If you give attorneys a governed AI tool that handles research, drafting, and summarization with proper data controls, shadow AI usage drops by 70-80%.

Build an approved tools list with specific use case permissions. "You can use Tool X for research. You can use Tool Y for drafting. You cannot use any tool not on this list for client work." Keep it simple and specific.

Implement technical controls alongside policy. Block known consumer AI domains on the firm network. Require SSO for approved AI tools so usage is logged. Use endpoint management to control browser extensions on firm devices.
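The domain-blocking piece of those controls is just a maintained list pushed into whatever filtering layer you run. As a simplified sketch, here is a generator for a hosts-file style blocklist; real deployments would feed the same list into DNS filtering or a secure web gateway instead, and the domains shown are illustrative.

```python
# Sketch: emit a hosts-file style blocklist for consumer AI domains.
# 0.0.0.0 sinkholes each domain; the list must be reviewed as tools launch.
BLOCKED = [
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "perplexity.ai",
]

def hosts_blocklist(domains):
    """Return hosts-file lines sinkholing each blocked domain."""
    return "\n".join(f"0.0.0.0 {d}" for d in sorted(domains)) + "\n"

print(hosts_blocklist(BLOCKED))
```

Note that hosts-file blocking only covers exact hostnames on managed devices; it's the SSO logging requirement, not the blocklist, that gives you visibility into what approved tools are actually used for.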

Make the policy enforceable but not punitive. Attorneys who've been using shadow AI aren't malicious. They're pragmatic. Give them a 30-day amnesty window to transition to approved tools, then enforce the policy with clear consequences.

Review and update quarterly. New AI tools launch weekly. Your approved list and blocked list both need regular maintenance.

The Bottom Line: Shadow AI isn't a technology problem. It's a governance gap. Firms that provide governed AI tools and enforce clear policies eliminate 80% of the risk. Firms that pretend their attorneys aren't using AI are sitting on the biggest unmanaged liability in legal today.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.