The four primary risks of using ChatGPT for legal work are hallucinated citations (17-33% fabrication rate), client data exposure through training, attorney-client privilege waiver, and court sanctions for unverified output. Every one of these risks has produced real consequences for real attorneys in 2023-2025.

This isn't anti-AI. ChatGPT and similar tools genuinely accelerate legal work when used correctly. The risk isn't in the tool — it's in using the tool without understanding its failure modes. Attorneys who treat ChatGPT like a legal database get sanctioned. Attorneys who treat it like a fast-but-unreliable first draft get competitive advantage.

Hallucinated Citations: The 17-33% Problem

Stanford's 2024 study found that large language models hallucinate legal citations 17-33% of the time, depending on the model, the complexity of the query, and the jurisdiction. GPT-4 hallucinated less than GPT-3.5, but still produced fabricated cases, invented holdings, and real case names paired with wrong outcomes.

The hallucination problem isn't random. AI is more likely to fabricate citations in niche practice areas with less training data, in state courts with smaller case databases, and when asked for very specific holdings on narrow legal questions. The more specific and useful a citation would be, the more likely it is to be fabricated.

Steven Schwartz in *Mata v. Avianca* is the cautionary example everyone knows. Six fabricated cases. Sanctions. National embarrassment. But he's not alone — *Park v. Kim* (2d Cir. 2024), *Kruse v. Karlen* (Mo. Ct. App. 2024), and multiple unpublished sanctions orders have all involved hallucinated AI citations. The pattern is always the same: attorney uses AI, doesn't verify, files garbage.

Data Privacy: What Happens to Client Information

When you paste client documents into free ChatGPT, that data enters OpenAI's systems. Under OpenAI's consumer terms of service, inputs to free and Plus tiers may be used to improve models unless you manually opt out through settings. Even with opt-out, the data still passes through OpenAI's servers, gets processed by their infrastructure, and is subject to their data retention policies.

The legal implications are concrete. Rule 1.6 requires attorneys to make reasonable efforts to prevent unauthorized disclosure of client information. Sending client data to a consumer AI service without a data protection agreement isn't a reasonable effort — it's the opposite. You're voluntarily transmitting confidential information to a third party whose business model involves using that data.

ChatGPT Enterprise and Team plans offer data isolation and contractual commitments not to train on inputs. The gap between consumer and enterprise ChatGPT isn't a feature difference — it's an ethics compliance difference. Consumer tier: potential Rule 1.6 violation. Enterprise tier: defensible data handling. The extra $25-30 per user per month for a Team plan is trivial compared to the exposure.

Privilege Waiver: The Heppner Problem

Attorney-client privilege requires that communications remain confidential. When privileged information enters a consumer AI tool, it's been disclosed to a third party — and the privilege may be waived. The Heppner ruling (2024) confirmed this analysis.

The privilege risk compounds because waiver can be subject-matter wide. In some jurisdictions, waiving privilege on one communication waives it for all communications on the same subject matter. One associate pasting a privileged email into ChatGPT could expose an entire litigation file.

Enterprise tools with Kovel doctrine protections mitigate this risk but don't eliminate it. The law is still developing, and no appellate court has definitively ruled that enterprise AI tools preserve privilege under all circumstances. The safest approach: never enter privileged communications into any AI tool unless the tool operates under a signed data protection agreement and the firm's AI policy explicitly authorizes it.

Sanctions and Malpractice: The Consequences in Practice

The sanctions cases are piling up, and the consequences extend well beyond *Mata v. Avianca*.

Courts have imposed monetary sanctions ranging from $5,000 to $30,000+ for filings containing hallucinated citations. Some courts have required attorneys to notify clients about the AI-related failures. Others have referred attorneys to bar disciplinary committees.

Malpractice insurers are paying attention. Major legal malpractice carriers now ask about AI tool usage during policy renewals. Firms without AI governance policies face higher premiums or coverage exclusions for AI-related claims. If your malpractice policy doesn't explicitly cover AI-assisted work product, you may be uninsured for the most likely source of future claims.

Bar disciplinary proceedings are beginning to address AI-related misconduct. While no attorney has been disbarred solely for AI misuse (yet), disciplinary committees have issued reprimands and required CLE completion. The trajectory points toward formal disciplinary rules specifically addressing AI competence.

The malpractice exposure is straightforward: if AI-generated errors in a filing cause harm to a client — a missed deadline based on a fabricated procedural rule, an incorrect legal analysis that leads to a bad settlement — that's malpractice. The AI didn't commit malpractice. The attorney who filed unverified output did.

The Risk Mitigation Matrix

Each risk has a specific mitigation. This isn't complicated — it's disciplined.

Hallucinations → Mandatory verification. Every citation checked in Westlaw or Lexis. Every holding confirmed against the primary source. Every statutory reference verified for currency. Build verification into the workflow as a required step, not an optional one (a sketch of what that gate could look like appears after this matrix).

Data privacy → Enterprise tools only. ChatGPT Enterprise, Claude Team/Enterprise, or legal-specific platforms (Harvey, CoCounsel, Lexis+ AI) for anything involving client data. Consumer tools only for non-client work: CLE research, business development, internal administration.

Privilege waiver → Data protection agreements. Execute agreements with every AI vendor before authorizing the tool for client work. Document the Kovel basis for privilege preservation. Add AI disclosure to engagement letters.

Sanctions → Documentation. Maintain records of AI usage, verification steps, and compliance with local disclosure rules. When you can demonstrate a systematic verification process, the risk of sanctions drops to near zero.
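To make "verification as a required step" and "documentation" concrete, here is a minimal sketch of how a firm might track both in software. Everything in it is an illustrative assumption: the `Citation` fields, the `ready_to_file` gate, and the JSON log format are hypothetical, not a description of any existing product or vendor API.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class Citation:
    case_name: str
    verified_in: str | None = None   # "Westlaw" or "Lexis" once checked; None = unverified
    holding_confirmed: bool = False  # holding read against the primary source, not a summary

@dataclass
class FilingRecord:
    matter: str
    ai_tool: str                     # which tool produced the first draft
    citations: list[Citation] = field(default_factory=list)

    def ready_to_file(self) -> bool:
        """Hard gate: refuses to clear a draft while any citation is unverified."""
        unverified = [c.case_name for c in self.citations
                      if c.verified_in is None or not c.holding_confirmed]
        if unverified:
            raise ValueError(f"Unverified citations: {unverified}")
        return True

    def audit_log(self) -> str:
        """One JSON line per filing: the record courts and insurers will ask about."""
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "matter": self.matter,
            "ai_tool": self.ai_tool,
            "citations": [asdict(c) for c in self.citations],
        })

# Hypothetical usage: ready_to_file() raises until every citation has been verified.
record = FilingRecord(matter="Doe v. Example Corp.", ai_tool="ChatGPT Enterprise")
record.citations.append(
    Citation("Mata v. Avianca", verified_in="Westlaw", holding_confirmed=True))
record.ready_to_file()
print(record.audit_log())
```

The design choice matters more than the code: verification is not a memo reminding associates to check citations; it is a step the workflow cannot complete without.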

The bottom-line math: Enterprise AI tools cost $25-300/user/month. A single sanctions order costs $5,000-30,000+. A malpractice claim costs six to seven figures. A privilege waiver can cost a case. The mitigation investments are rounding errors compared to the exposure.
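For readers who want that math spelled out, here is a back-of-envelope version in code, using only the ranges quoted above (the figures are the article's; the framing as attorney-years is an illustration):

```python
# Back-of-envelope comparison using the ranges quoted above.
tool_cost_per_user_month = (25, 300)   # enterprise AI tooling, $/user/month
sanctions_order = (5_000, 30_000)      # a single sanctions order, $

annual_low, annual_high = (12 * c for c in tool_cost_per_user_month)
print(f"Annual tooling per attorney: ${annual_low:,}-${annual_high:,}")
# Annual tooling per attorney: $300-$3,600

# Even one cheap sanctions order exceeds a year of the priciest tooling:
print(f"One order buys {sanctions_order[0] / annual_high:.1f} to "
      f"{sanctions_order[1] / annual_low:.0f} attorney-years of tooling")
# One order buys 1.4 to 100 attorney-years of tooling
```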

The Bottom Line: ChatGPT's risks in legal work — hallucinations, data exposure, privilege waiver, and sanctions — are all manageable with enterprise tools, mandatory verification, and documented workflows; the attorneys who get burned are the ones who skip these steps to save time.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.