Consumer AI is risky for confidential legal work. Enterprise AI is safer with proper data processing agreements. The answer isn't binary — it's a tiered safety framework where different tools carry different risk levels, and the right choice depends on what data you're entering.

The Heppner incident (an attorney entered privileged client communications into consumer ChatGPT) and multiple bar ethics opinions have made one thing clear: a tool's data policy matters as much as its capabilities. A brilliant AI that trains on your client's data is worse than a mediocre one that doesn't.


The Three-Tier Safety Framework

Tier 1: Unsafe for Client Work. Free consumer AI (ChatGPT free, Gemini free, Claude free). These tools may use your inputs for model training. There's no data processing agreement, no zero-retention guarantee, and no SOC 2 compliance. Never enter client names, case details, privileged communications, or confidential business information. Use these tiers only for general legal research, template drafting, and learning.

Tier 2: Safe with Precautions. Paid consumer AI (Claude Pro/Team, ChatGPT Plus/Team). These tiers offer data protections: Claude Team guarantees your data won't be used for training, and ChatGPT Team offers similar commitments. Suitable for client work when combined with firm AI policies and data classification rules.

Tier 3: Enterprise Safe. Purpose-built legal AI (Harvey, CoCounsel, enterprise Claude). SOC 2 Type II certified, zero-retention by default, dedicated hosting options, formal data processing agreements. Designed for the most sensitive legal work.
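To make the framework concrete, here's a minimal sketch of how a firm might encode the tiers as a classification gate. The tier numbers mirror the framework above; the tool list, category labels, and function name are illustrative assumptions, not any real product's API.

```python
# Minimal sketch of the three-tier gate described above. The tier
# numbers come from the framework; the tool names, category labels,
# and function name are hypothetical.

MINIMUM_TIER = {
    "public": 1,              # published case law, statutes
    "client_identifying": 2,  # names, case numbers
    "privileged": 3,          # privileged communications, work product
}

TOOL_TIER = {
    "chatgpt_free": 1,  # Tier 1: unsafe for client work
    "claude_team": 2,   # Tier 2: safe with precautions
    "harvey": 3,        # Tier 3: enterprise safe
}

def tool_allowed(tool: str, data_class: str) -> bool:
    """A tool is allowed when its tier meets the data's minimum tier."""
    return TOOL_TIER[tool] >= MINIMUM_TIER[data_class]

assert tool_allowed("harvey", "privileged")            # Tier 3 data needs Tier 3
assert not tool_allowed("chatgpt_free", "privileged")  # free tier blocked
assert tool_allowed("claude_team", "client_identifying")
```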

What Makes AI Unsafe for Confidential Work

Three specific risks matter.

Training data exposure: if an AI tool uses your inputs to improve its model, your client's information could theoretically influence responses to other users. This is the primary concern with free-tier consumer AI.

Data retention: even tools that don't train on your data may store conversation logs. If those servers are breached, client data is compromised. Enterprise tools offer zero-retention options where data is processed and immediately discarded.

Third-party access: AI providers use subprocessors (cloud hosting, monitoring services). Each subprocessor in the chain is a potential exposure point. Enterprise agreements specify exactly which subprocessors handle data and under what conditions.

The ABA's Formal Opinion 512 is explicit: lawyers must understand how each AI tool handles data before entering client information. "I didn't read the terms of service" isn't a defense under Rule 1.6.

What the Ethics Opinions Say

Every major bar ethics opinion on AI reaches the same conclusion: the tool's data handling determines whether it's appropriate for client work.

The ABA (Formal Opinion 512, 2024) requires lawyers to assess confidentiality risks before using AI, including reviewing the tool's terms of service and data policies. California (Practical Guidance, 2024) specifically warns against entering client data into tools that use inputs for training. New York (NYSBA Ethics Opinion, 2024) requires "reasonable measures" to protect client information in AI systems. Florida (Advisory Opinion 24-1) emphasizes that the duty of confidentiality extends to AI tools just as it does to any third-party service.

The consistent standard: due diligence on the tool's data practices is required before use. This means actually reading the privacy policy and terms of service, not just checking a box.

How to Make AI Safe for Confidential Work

Five practical steps.

1. Classify your data: create a firm-wide classification system. Public information (published case law, statutes) can go into any AI tool. Client-identifying information (names, case numbers) is Tier 2+ only. Privileged communications and work product are Tier 3 only.

2. Use enterprise tiers: Claude Team/Enterprise, ChatGPT Enterprise, or purpose-built legal AI. The $5-25/month premium over free tiers buys you data protection that prevents six-figure malpractice exposure.

3. Review DPAs: every AI tool used for client work should have a Data Processing Agreement specifying data retention, training exclusions, subprocessor lists, and breach notification obligations.

4. Strip identifiers when possible: instead of "John Smith's divorce involves assets of $2.3M in Harris County," use "Client's divorce involves assets of $X in [County]." Removing PII reduces risk even with safe tools (see the sketch after this list).

5. Audit regularly: review who's using which tools for what purposes quarterly. Shadow AI (attorneys using unauthorized tools) is the biggest uncontrolled risk.
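As a companion to step 4, here's a minimal sketch of identifier stripping, assuming the firm keeps a matter-specific glossary of names and figures to redact. The glossary and function name are hypothetical, and plain substitution only catches identifiers you list, so it supplements human review rather than replacing it.

```python
# Minimal identifier-stripping sketch for step 4. The glossary and
# function name are hypothetical; real redaction of free text needs
# human review or a dedicated PII tool.

def strip_identifiers(prompt: str, replacements: dict[str, str]) -> str:
    """Replace known client identifiers with neutral placeholders."""
    for identifier, placeholder in replacements.items():
        prompt = prompt.replace(identifier, placeholder)
    return prompt

matter_glossary = {
    "John Smith": "[CLIENT]",
    "$2.3M": "$X",
    "Harris County": "[COUNTY]",
}

redacted = strip_identifiers(
    "John Smith's divorce involves assets of $2.3M in Harris County.",
    matter_glossary,
)
print(redacted)  # [CLIENT]'s divorce involves assets of $X in [COUNTY].
```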

The Real-World Consequences of Getting It Wrong

The Heppner Incident: an attorney used consumer ChatGPT to analyze privileged client documents. The data entered into the model couldn't be retrieved or deleted, and the firm faced ethics complaints and client notification obligations.

Samsung's ChatGPT Leak (2023): employees entered proprietary source code and meeting notes into ChatGPT, which Samsung confirmed were incorporated into the training dataset. Samsung subsequently banned ChatGPT company-wide. While not a legal case, it demonstrates the data exposure risk.

Bar disciplinary actions: multiple state bars have initiated investigations into attorneys who entered client data into unsecured AI tools. No published decision has yet imposed major sanctions specifically for data exposure, but the investigations signal that enforcement is coming.

The trend line is clear: confidentiality breaches via AI will be treated the same as any other confidentiality breach, because that's exactly what they are.

The Bottom Line: Consumer AI (free tiers) is unsafe for confidential legal work — data may be used for training. Paid tiers (Claude Team, ChatGPT Team) are safe with precautions. Enterprise legal AI (Harvey, CoCounsel) is designed for sensitive work with SOC 2 compliance and zero-retention. The $5-25/month upgrade from free to paid prevents six-figure malpractice exposure. Don't be the next Heppner.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.