The fastest way to lose a client's trust -- and trigger a malpractice claim -- is to feed their privileged information into an AI tool that trains on user inputs. A 2025 survey by the American Bar Association found that 34% of lawyers using AI couldn't confirm whether their tool's vendor retains or trains on client data, and 61% of firms lack a formal data security policy for AI tools.

Attorney-client privilege doesn't survive negligent data handling. If you're using AI on client matters without understanding your vendor's data practices, you're one breach away from a waiver argument. This guide is the security checklist every managing partner needs before approving AI tools at their firm.

The Privilege Problem: Consumer vs. Enterprise AI

The single most important distinction in legal AI security: consumer-grade AI and enterprise-grade AI handle your data completely differently.

Consumer AI tools (ChatGPT free/Plus, Claude free, Gemini free, Copilot free) typically include terms of service that allow the provider to use your inputs for model training. Even when providers offer opt-out toggles, the data still transits through shared infrastructure. The *Heppner v. Doe* analysis suggests that voluntarily inputting privileged information into a consumer AI tool could constitute waiver -- the same way forwarding a privileged email to a non-privileged third party can waive privilege.

Enterprise AI tools (ChatGPT Enterprise, Claude for Business/Enterprise, Harvey, Lexis+ AI, Westlaw CoCounsel) operate under negotiated data processing agreements (DPAs). They contractually commit to not training on your data, not retaining inputs beyond the session, and providing audit rights. This is the minimum threshold for handling client information.

The bright-line rule: Never input client names, case facts, privileged communications, or confidential business information into any AI tool that doesn't have a signed DPA with your firm. No exceptions. Not even 'just to test something quickly.' One privileged document pasted into consumer ChatGPT can become an exhibit in a waiver motion.

Data Processing Agreements: What to Demand

A DPA is not optional -- it's the minimum requirement before any AI vendor touches client data. Here's what your DPA must include:

No training on client data. The vendor must contractually commit to not using your inputs or outputs for model training, fine-tuning, or any form of model improvement. This must be absolute, not subject to opt-out toggles.

Data retention limits. Specify maximum retention periods. For most legal work, inputs should be deleted within 30 days of the session. Some vendors offer zero-retention (inputs processed in memory only, never written to disk). Demand this if available.

Data residency. Know where your data is processed and stored. For firms with international clients, GDPR and cross-border data transfer rules apply. US-only processing is the simplest approach for domestic matters.

Subprocessor disclosure. The vendor must list all subprocessors (cloud providers, infrastructure partners) that handle your data. AWS, Azure, and GCP are standard -- but you need to know if data flows through additional parties.

Breach notification. 72-hour notification requirement for any breach or unauthorized access. Include specific notification procedures and designated contacts.

Audit rights. Your firm (or a designated third party) must have the right to audit the vendor's security practices. Most vendors satisfy this through SOC 2 reports, but direct audit rights matter for high-security clients.

Return/deletion on termination. When the contract ends, all client data must be returned or destroyed, with written certification.

Security Certifications: What Actually Matters

Vendors throw around certifications like confetti. Here's what each one actually means for your firm:

SOC 2 Type II: The gold standard for SaaS security. Tests whether security controls are designed properly (Type I) AND operating effectively over time (Type II). Demand Type II -- Type I alone is meaningless for ongoing security assurance. Covers security, availability, processing integrity, confidentiality, and privacy.

ISO 27001: International information security management standard. Comprehensive but process-focused -- tells you the vendor has a security management system, not necessarily that their specific controls are strong. Good to have alongside SOC 2, not as a replacement.

HIPAA compliance: Required if your firm handles healthcare-related matters. The vendor must sign a Business Associate Agreement (BAA) and implement PHI-specific safeguards. Not all legal AI vendors offer HIPAA compliance -- check before onboarding healthcare clients.

FedRAMP: Required for federal government work. Very few legal AI vendors have FedRAMP authorization. If your firm handles government contracts or government litigation, this narrows your options significantly.

What doesn't matter: Self-certifications, 'enterprise-grade security' marketing claims, and SOC 2 Type I alone. If a vendor can't produce a current SOC 2 Type II report, they're not ready for law firm data.

Privilege Protection: The Kovel Doctrine and AI

The Kovel doctrine (from *United States v. Kovel*, 1961) extends attorney-client privilege to communications with agents necessary for the attorney to provide legal advice -- accountants, translators, investigators. The open question: does an AI tool qualify as a Kovel agent?

The honest answer: we don't have definitive case law yet. But the analysis matters.

Arguments for Kovel protection: AI tools are functionally similar to other agents attorneys rely on -- they process information to help the attorney provide legal services. If the attorney selects an enterprise tool with proper data handling, the AI is operating under the attorney's direction for the purpose of legal representation.

Arguments against: Kovel agents are typically human parties who can be bound by confidentiality obligations. AI vendors are corporate entities with their own interests, and the 'agent' is software, not a person. Courts may treat AI more like a photocopier (no Kovel protection) than a translator (Kovel protection).

The safe approach until case law develops:

1. Use only enterprise AI tools with DPAs that explicitly acknowledge the privileged nature of inputs.
2. Document in your engagement letter that AI tools are used as agents of the attorney for Kovel purposes.
3. Mark AI interactions as privileged and confidential in your systems.
4. Never use consumer AI tools for privileged matters -- the voluntary disclosure argument is much stronger without a DPA.
5. Monitor developing case law -- the first circuit court ruling on AI and Kovel will reshape the landscape.

The Security Checklist: Before You Approve Any AI Tool

Use this checklist before approving any AI tool for use with client data at your firm:

Vendor Due Diligence:

- [ ] SOC 2 Type II report current within 12 months
- [ ] Data Processing Agreement signed with no-training, retention limits, and audit rights
- [ ] Subprocessor list reviewed and acceptable
- [ ] Data residency confirmed (US-only processing for domestic matters)
- [ ] Breach notification procedures documented (72-hour requirement)
- [ ] BAA signed if handling healthcare data (HIPAA)
- [ ] FedRAMP authorized if handling government data
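Firms that evaluate many vendors can turn the due-diligence items above into a repeatable screening step. Here is a minimal sketch in Python; the `VendorProfile` structure and its field names are illustrative assumptions, not any vendor's actual API or report format:

```python
from dataclasses import dataclass

@dataclass
class VendorProfile:
    """Illustrative record of one AI vendor's security posture."""
    name: str
    soc2_type2_age_months: int        # age of most recent SOC 2 Type II report
    dpa_signed: bool                  # DPA with no-training, retention, audit terms
    subprocessors_reviewed: bool      # subprocessor list reviewed and acceptable
    us_only_processing: bool          # data residency confirmed for domestic matters
    breach_notice_hours: int          # contractual breach-notification window
    baa_signed: bool = False          # only required for HIPAA matters
    fedramp_authorized: bool = False  # only required for government matters

def due_diligence_gaps(v: VendorProfile, hipaa: bool = False,
                       fedramp: bool = False) -> list[str]:
    """Return the checklist items this vendor fails; an empty list means it passes."""
    gaps = []
    if v.soc2_type2_age_months > 12:
        gaps.append("SOC 2 Type II report older than 12 months")
    if not v.dpa_signed:
        gaps.append("No signed DPA")
    if not v.subprocessors_reviewed:
        gaps.append("Subprocessor list not reviewed")
    if not v.us_only_processing:
        gaps.append("Data residency not confirmed US-only")
    if v.breach_notice_hours > 72:
        gaps.append("Breach notification window exceeds 72 hours")
    if hipaa and not v.baa_signed:
        gaps.append("No BAA for healthcare data")
    if fedramp and not v.fedramp_authorized:
        gaps.append("Not FedRAMP authorized")
    return gaps

# Hypothetical vendor that passes for domestic, non-HIPAA matters:
vendor = VendorProfile("ExampleAI", soc2_type2_age_months=6, dpa_signed=True,
                       subprocessors_reviewed=True, us_only_processing=True,
                       breach_notice_hours=72)
print(due_diligence_gaps(vendor))               # []
print(due_diligence_gaps(vendor, hipaa=True))   # ['No BAA for healthcare data']
```

The same vendor can pass for one practice group and fail for another: the HIPAA and FedRAMP items are conditional, which is why the checklist should be applied per matter type, not once per vendor.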

Technical Controls:

- [ ] Encryption in transit (TLS 1.2+) and at rest (AES-256)
- [ ] SSO/SAML integration with your firm's identity provider
- [ ] Role-based access controls (RBAC) -- not everyone needs access to every matter
- [ ] Audit logging of all queries and outputs
- [ ] API access controls if using programmatic integration
- [ ] Zero-retention or minimal retention option enabled
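The encryption-in-transit item applies to your own side of the connection too: any integration code your firm writes against a vendor's API should refuse protocol versions below TLS 1.2. A minimal sketch using Python's standard `ssl` module shows the enforcement step (the vendor endpoint mentioned in the comment is hypothetical):

```python
import ssl

# Build a client context with certificate verification on (the default)
# and refuse any protocol version older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Pass this context to your HTTP client when calling the vendor's API,
# e.g. urllib.request.urlopen(url, context=context) against a
# hypothetical https://api.example-vendor.com endpoint. Connections
# negotiated below TLS 1.2 will fail during the handshake.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

`create_default_context()` already enables hostname checking and certificate validation; the added line pins the protocol floor so a misconfigured or downgraded endpoint fails loudly instead of silently using a weaker cipher suite.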

Firm Policy:

- [ ] Written AI policy covering approved tools, data handling, and incident response
- [ ] Training completed for all users before access granted
- [ ] Matter-level opt-in/opt-out based on client consent and sensitivity
- [ ] Regular (quarterly) review of AI tool usage and security posture
- [ ] Engagement letter language covering AI use and data handling
- [ ] Malpractice insurance confirmed to cover AI-related claims

The Bottom Line: Never use consumer AI tools for client data -- enterprise tools with signed DPAs are the minimum requirement. Demand SOC 2 Type II certification, zero-retention options, and explicit no-training commitments. The privilege question around AI isn't settled, so protect yourself with documentation, enterprise tools, and engagement letter disclosures. Use the security checklist before approving any AI vendor.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.