When an AI tool exposes client data, generates a hallucinated citation that makes it into a filing, or violates a regulatory requirement, your firm's response in the first 72 hours determines whether the incident becomes a manageable event or a malpractice claim. Most law firms have no AI incident response plan; they rely on general IT breach protocols that were never designed for AI-specific failures.

AI incidents are categorically different from traditional data breaches. A conventional breach involves unauthorized access to stored data. An AI incident can involve data leakage through model training, fabricated legal authority submitted to a court, or automated outputs that violate client-specific compliance requirements. Each category demands a different response playbook, different notification obligations, and different remediation steps.

The Three Categories of AI Incidents

Category 1: Data Exposure. Client-privileged or confidential information is submitted to an AI tool that retains, trains on, or otherwise processes data beyond the authorized scope. This includes an attorney pasting case strategy into a consumer-grade ChatGPT account, uploading client documents to an unapproved AI summarization tool, or a vendor-side breach exposing AI interaction logs. The Samsung incident in 2023 — where employees submitted proprietary code to ChatGPT — is the template case. For law firms, the stakes are higher because the duty of confidentiality under ABA Model Rule 1.6 creates affirmative obligations that corporate employees do not carry.

Category 2: Output Failure. The AI generates inaccurate, fabricated, or misleading content that is used in legal work product. The landmark case is Mata v. Avianca (S.D.N.Y. 2023), where attorneys submitted a brief containing six fabricated case citations generated by ChatGPT. Judge Castel imposed $5,000 in sanctions. Through early 2025, at least 14 additional courts have sanctioned attorneys for AI-generated hallucinations in filings.

Category 3: Compliance Violation. AI tool usage violates a court order, local rule, regulatory requirement, or client engagement terms. This includes using AI in a jurisdiction that has adopted mandatory AI disclosure requirements, failing to comply with a client's outside counsel guidelines that restrict AI usage, or processing data through AI tools in ways that violate GDPR, CCPA, or sector-specific regulations like HIPAA.

The 72-Hour Response Framework

Hours 0-4: Containment. The moment an AI incident is identified, the first priority is stopping ongoing exposure. Revoke access to the compromised tool. If data was submitted to a consumer AI service, submit a data deletion request immediately — OpenAI processes these under its privacy policy within 30 days, but the request timestamp matters for demonstrating reasonable response. Preserve all evidence: screenshots of the AI interaction, tool access logs, and the specific data that was exposed or the specific output that failed.

Hours 4-24: Assessment and Notification. The AI governance committee (or designated incident lead) assesses severity using three factors: what data was involved, how many clients or matters are affected, and what obligations are triggered. For data exposure incidents, determine whether state breach notification laws apply — as of 2025, all 50 states plus DC have data breach notification statutes, with notification windows ranging from 30 to 90 days. For output failures, assess whether the flawed work product has been submitted to a court, delivered to a client, or relied upon in a transaction.
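For firms that want to operationalize this assessment, the three factors can be sketched as a simple scoring routine. The factor names, weights, thresholds, and severity tiers below are illustrative assumptions for a minimal sketch, not a standard or a recommendation from any bar authority:

```python
from dataclasses import dataclass

# Hypothetical weights for the first factor: what data was involved.
DATA_WEIGHT = {"none": 0, "internal": 1, "confidential": 2, "privileged": 3}

@dataclass
class Incident:
    data_sensitivity: str   # key into DATA_WEIGHT
    clients_affected: int   # how many clients or matters are affected
    obligations: set        # triggered duties, e.g. {"state_breach_law", "court"}

def assess_severity(incident: Incident) -> str:
    """Map the three assessment factors to an illustrative severity tier."""
    score = DATA_WEIGHT[incident.data_sensitivity]
    # Second factor: scope of affected clients or matters.
    score += 2 if incident.clients_affected > 1 else (1 if incident.clients_affected == 1 else 0)
    # Third factor: each triggered obligation raises severity.
    score += len(incident.obligations)
    if score >= 5:
        return "critical"   # activate full governance committee, immediate client notice
    if score >= 3:
        return "high"       # incident lead escalates within 24 hours
    return "moderate"       # document and remediate per playbook

# Example: privileged data, one client, a state breach statute triggered.
print(assess_severity(Incident("privileged", 1, {"state_breach_law"})))  # critical
```

The point of a routine like this is not to automate judgment but to force the incident lead to answer the same three questions the same way every time, so that escalation decisions are consistent and documented.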

Hours 24-72: Remediation and Documentation. Execute the remediation plan: correct flawed filings with the court, notify affected clients per your engagement terms and ethical obligations, and submit a detailed incident report to the firm's malpractice carrier. Document everything. The incident file should include a timeline, the data or output involved, containment steps taken, notifications made, and remediation actions completed. This documentation becomes your primary evidence of reasonable care if the incident escalates to a malpractice claim or regulatory inquiry.

Who to Notify and When

Internal notifications go to three parties immediately: the managing partner or executive committee, the firm's general counsel or ethics partner, and the IT/security team. If the firm has an AI governance committee, activate it within 4 hours. Do not wait for a complete assessment before making internal notifications — delayed internal communication is how manageable incidents become crises.

Client notification depends on the incident category. For data exposure, ABA Model Rule 1.4 requires prompt communication of information the client needs to make informed decisions about the representation. If privileged material was exposed through an AI tool, the client has a right to know — and to decide whether to seek a waiver determination, change counsel, or take other protective action. Do not wait for certainty before notifying; notify when you have enough information for the client to understand the exposure.

Regulatory and court notification applies in specific circumstances. If an AI output failure affects a pending court filing, notify the court proactively — judges have consistently treated self-correction more favorably than discovery by opposing counsel. At least 35 federal district courts now have standing orders on AI disclosure, and failure to comply with these orders after an incident compounds the original problem. For data exposure incidents involving personal information, evaluate state breach notification requirements against the specific data involved and the number of affected individuals.

Building the Plan Before You Need It

An incident response plan written during an incident is not a plan — it is improvisation. Build the framework now. Assign an incident response lead (a partner with authority to make immediate decisions), define the escalation chain (who calls whom, in what order, within what timeframe), and create incident classification templates that map each category to its specific response steps.
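A classification template of this kind reduces, in practice, to a mapping from the article's three categories to their category-specific response steps. The category names below follow the article; the step lists and helper function are an illustrative sketch, not a complete playbook:

```python
# Illustrative mapping from incident category to response checklist.
PLAYBOOKS = {
    "data_exposure": [
        "Revoke tool access and submit vendor data deletion request",
        "Preserve interaction logs and inventory of exposed data",
        "Evaluate state breach notification statutes",
        "Notify affected clients per Model Rule 1.4",
    ],
    "output_failure": [
        "Identify all work product containing the flawed output",
        "Correct or withdraw affected court filings proactively",
        "Notify clients who received or relied on the output",
    ],
    "compliance_violation": [
        "Identify the violated order, rule, or engagement term",
        "Assess disclosure obligations to the court or regulator",
        "Document corrective controls for the tool involved",
    ],
}

def response_steps(category: str) -> list:
    """Return the response checklist for a classified incident."""
    if category not in PLAYBOOKS:
        raise ValueError(f"Unknown incident category: {category}")
    return PLAYBOOKS[category]

# Example: pull the checklist for an output failure.
for step in response_steps("output_failure"):
    print("-", step)
```

Whether the template lives in code, a shared document, or case management software matters less than the mapping itself: one category, one pre-agreed sequence of steps, no improvisation during the incident.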

Run a tabletop exercise at least annually. Pick a scenario — an associate submits a client's financial records to a consumer AI tool, or a brief contains a hallucinated case citation — and walk through the full response sequence. Time the exercise. Identify gaps. Most firms discover during tabletop exercises that their notification chains have single points of failure or that no one has actually tested the vendor data deletion request process.

Integrate the AI incident response plan with your firm's existing cybersecurity incident response and malpractice reporting procedures. AI incidents should not create a parallel response infrastructure — they should extend the one you already have with AI-specific decision trees and notification requirements.

What This Means for Your Firm

The question is not whether your firm will experience an AI incident. With attorney AI adoption rates exceeding 65% in 2025, the volume of AI-assisted work product guarantees that errors, exposures, and compliance failures will occur. The question is whether your firm will handle them competently or chaotically.

Build the plan in the next 30 days. Assign the incident response lead this week. Schedule a tabletop exercise within 60 days. Review your malpractice carrier's position on AI incidents — many carriers now require notification of AI-related incidents within specific timeframes as a condition of coverage.

The firms that have response plans will contain incidents quickly, maintain client trust, and demonstrate the reasonable care that prevents malpractice exposure from escalating into malpractice liability. The firms that do not will learn what improvisation costs when the clock is running and the client is calling.

The Bottom Line: Every firm will have an AI incident — the response plan you build before it happens determines whether it stays an incident or becomes a lawsuit.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.