In January 2026, a 120-attorney firm in Dallas discovered that 43 different AI tools were being used across the firm. The managing partner knew about six of them. The IT department had approved four. The rest were personal subscriptions, browser extensions, and free-tier accounts that attorneys had adopted on their own. Client data had been uploaded to at least 12 of those unapproved tools.

This is the shadow AI problem, and it exists at every firm that hasn't done a formal audit. You can't govern what you don't know about. You can't assess risk for tools you haven't identified. And you can't comply with judicial frameworks like the one set out in Morgan v. V2X if you don't know which AI systems are touching client data.

An AI tool audit isn't a one-time cleanup. It's the foundation for every governance policy, vendor contract, and risk assessment your firm will build. Here's how to do it right.


Phase 1: Discovery — Find Every AI Tool in Use

The first phase is the hardest because attorneys don't always recognize what counts as an AI tool. A browser extension that summarizes emails uses AI. A transcription service for depositions uses AI. A research platform that "finds relevant cases" uses AI. You need to cast the net wide.

Start with three data sources. Network traffic logs will show connections to known AI service domains (api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, and dozens of legal-specific platforms). Your IT team or managed service provider can pull 90 days of DNS logs and filter for AI-related domains.
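The filtering step can be scripted. The sketch below is a minimal example, assuming your IT team can export DNS query logs as CSV with `timestamp`, `client_ip`, and `query` columns; the column names, the sample data, and the watch list are illustrative, not a definitive implementation.

```python
import csv
import io

# Watch list of known AI service domains. Assumption: extend this with
# the legal-specific platforms your IT team or MSP tracks.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_queries(dns_log_csv: str) -> list[dict]:
    """Return log rows whose queried domain matches the watch list,
    including subdomains (e.g. chat.api.openai.com)."""
    hits = []
    for row in csv.DictReader(io.StringIO(dns_log_csv)):
        domain = row["query"].rstrip(".").lower()
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            hits.append(row)
    return hits

# Illustrative three-row log export.
sample = """timestamp,client_ip,query
2026-01-12T09:14:03,10.0.4.17,api.openai.com
2026-01-12T09:15:44,10.0.4.22,intranet.firm.local
2026-01-12T09:16:01,10.0.4.17,api.anthropic.com
"""

for hit in flag_ai_queries(sample):
    print(hit["client_ip"], hit["query"])
```

In practice the same matching logic runs against 90 days of exported logs; grouping hits by `client_ip` tells you which workstations, and therefore which attorneys, to follow up with in the survey.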

Expense reports and credit card statements reveal personal and department-level AI subscriptions. Search for vendors like OpenAI, Anthropic, Harvey AI, CoCounsel (Thomson Reuters), Casetext, and any tool marketed as "AI-powered." Don't forget app store charges.

A firm-wide survey catches what network logs and expenses miss: tools used on personal devices, free-tier accounts, and AI features embedded in platforms the firm already uses (Microsoft Copilot in Office 365, AI features in Westlaw or Lexis, Grammarly-style writing assistants). Keep the survey short and non-punitive. The goal is discovery, not discipline. Make clear that no one is in trouble for using an unapproved tool; failing to disclose one during the audit is a different matter.

Phase 2: Classification — Categorize by Risk Level

Once you have the full list, classify each tool across four dimensions.

Data access level. Does the tool process client data? If yes, what type? There's a big difference between a grammar checker that sees email text and a document review tool that ingests entire case files. Rank from Level 1 (no client data) to Level 4 (direct access to privileged materials).

Deployment model. Is it a consumer-grade SaaS tool, an enterprise deployment with contractual protections, or an on-premises/private cloud installation? Consumer tools are the highest risk because the firm has no contractual control over data handling. Enterprise tools with proper contracts (meeting the Morgan v. V2X baseline) are lower risk.

Vendor maturity. Evaluate the vendor's security posture: SOC 2 Type II certification, data processing agreements, breach history, financial stability. A startup offering a free AI tool for legal research is a fundamentally different risk than an established vendor with enterprise security infrastructure.

Usage scope. How many people use it? How often? For what tasks? A tool used by one paralegal for non-client work is different from a platform used firm-wide for case analysis. The AI tool risk evaluation framework covers the detailed assessment criteria for each tool.

Plot each tool on a 2x2 matrix: data sensitivity (vertical axis) vs. deployment security (horizontal axis). Tools in the high-sensitivity/low-security quadrant are your immediate priorities.

Phase 3: Decision — Approve, Restrict, or Remove

For each tool, make one of three decisions.

Approved for general use. The tool meets security requirements, has adequate contractual protections, and serves a legitimate workflow need. It goes on the firm's approved tools list with documented use guidelines.

Approved with restrictions. The tool is useful but has limitations. Common restrictions: no upload of client-identifiable data, use only for non-privileged research, no use on matters with heightened confidentiality requirements (government contracts, M&A, criminal defense). Document the restrictions clearly and communicate them to every user.

Removed. The tool doesn't meet minimum security requirements, the vendor won't agree to adequate contract terms, or the risk-to-benefit ratio doesn't justify continued use. Give users a transition period and a recommended alternative from the approved list.

For removed tools, don't just block access. Follow up with the vendor to request deletion of any firm data in their systems. Document the deletion request and any confirmation received. Under the Morgan v. V2X framework and emerging bar guidance, firms have an obligation to pursue data deletion from tools that processed client information.

The typical result: roughly 60-70% of discovered tools are removed, 20-25% are approved with restrictions, and 10-15% are approved for general use. In my experience, those proportions hold remarkably steady across mid-market firms.

What This Means for Your Firm

Set a date and run the audit. Don't wait for a breach or a bar complaint to force it. The three-phase process (discovery, classification, decision) takes 4-6 weeks for a firm under 200 attorneys. Larger firms should budget 8-12 weeks.

Assign ownership. The audit needs a lead who has authority across IT, compliance, and practice groups. At most firms, this is the general counsel, the managing partner, or a designated AI governance committee. Without a single point of ownership, the audit stalls.

Document everything. The audit itself is evidence that your firm took reasonable steps to govern AI use. If privilege is challenged, if a data breach occurs, or if a bar disciplinary inquiry arises, the audit record demonstrates due diligence. Save the tool inventory, risk classifications, approval decisions, and user communications.

Make it recurring. Technology changes too fast for a one-time audit. Schedule quarterly reviews of the approved tools list and annual full audits. Require new AI tools to go through the classification process before deployment. Build this into your firm's AI governance policy so it becomes operational routine, not a special project.

The firms that audit now will have governed AI workflows. The firms that don't will discover their AI exposure the hard way.

The Bottom Line: You can't govern AI tools you don't know about. The audit is step one. Every policy, contract, and risk assessment builds on what you find.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.