When AI goes wrong in a law firm, the first 24 hours determine everything. Whether it's a hallucinated citation that makes it into a court filing, client data leaked through an AI vendor breach, or a privilege waiver caused by sending confidential information to a public model — the response window is narrow and the stakes are permanent. Courts aren't going to care that you had good intentions. They're going to ask what you did, how fast you did it, and whether you had a plan.

Most firms don't have that plan. They'll draft their incident response during the incident, which is like writing your fire escape route while the building is burning. The Sixth Circuit imposed $30,000 in sanctions against two attorneys for hallucinated citations in March 2026. Gordon Rees had three AI incidents in six months. Butler Snow had partners disqualified from a case. These weren't inevitable outcomes — they were governance failures. Here's the playbook you need before the first incident happens.


The Three AI Incident Categories Law Firms Face

Not all AI incidents are the same, and your response needs to match the severity.

Category 1: Accuracy Failures. Hallucinated citations, fabricated case law, incorrect legal reasoning in AI-generated work product. This is the most common category — the database tracking AI hallucinations in legal filings documented 1,227 cases globally by early 2026. If caught before filing, it's a near-miss that needs documentation and process review. If caught after filing, it's a crisis that may require court notification, amended filings, and potential sanctions exposure.

Category 2: Data Exposure. Client data entered into an unapproved AI tool, vendor data breach, or inadvertent disclosure of privileged information through an AI system. This triggers confidentiality obligations under ABA Model Rule 1.6 and potentially GDPR, state breach notification laws, and contractual obligations to clients.

Category 3: Privilege Breach. Confidential attorney-client communications processed by an AI system in a way that waives privilege — either through vendor data handling failures or by using a tool that trains on inputs. This is the most damaging category because privilege, once waived, cannot be restored.

Each category requires a different response protocol, different notification obligations, and different remediation steps. Your incident response plan should have separate playbooks for each.
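For firms that track incidents in software, the category-to-playbook mapping can be encoded directly. Here is a minimal sketch in Python; the category names, playbook fields, and containment steps are illustrative assumptions drawn from the descriptions above, not a standard schema.

```python
from dataclasses import dataclass
from enum import Enum


class IncidentCategory(Enum):
    """The three AI incident categories described above."""
    ACCURACY_FAILURE = "accuracy_failure"   # hallucinated citations, fabricated law
    DATA_EXPOSURE = "data_exposure"         # client data in unapproved tools, vendor breach
    PRIVILEGE_BREACH = "privilege_breach"   # confidential communications, possible waiver


@dataclass
class Playbook:
    """Per-category response playbook (fields and steps are hypothetical)."""
    containment_steps: list[str]
    notification_targets: list[str]
    privilege_at_risk: bool


# Illustrative mapping; a real firm would tailor these to its own policies.
PLAYBOOKS: dict[IncidentCategory, Playbook] = {
    IncidentCategory.ACCURACY_FAILURE: Playbook(
        containment_steps=["preserve prompts and outputs", "audit sibling documents"],
        notification_targets=["court (if filed)", "client", "malpractice carrier"],
        privilege_at_risk=False,
    ),
    IncidentCategory.DATA_EXPOSURE: Playbook(
        containment_steps=["suspend tool access", "invoke vendor DPA breach clause"],
        notification_targets=["affected clients", "regulators", "malpractice carrier"],
        privilege_at_risk=True,
    ),
    IncidentCategory.PRIVILEGE_BREACH: Playbook(
        containment_steps=["stop further processing", "assess waiver standard"],
        notification_targets=["affected clients", "ethics officer", "malpractice carrier"],
        privilege_at_risk=True,
    ),
}
```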

The First 24 Hours: Immediate Response Protocol

Hour 0-1: Detection and Triage. Whoever discovers the incident immediately contacts the designated AI Incident Lead — typically the ethics officer or AI committee chair. The Incident Lead makes an initial severity assessment: Which category? Is client data at risk? Has inaccurate work product been filed? Is privilege potentially waived?

Hour 1-4: Containment. Stop the bleeding. If it's a data exposure, immediately suspend access to the compromised AI tool. If it's a hallucinated citation in a filed document, preserve all records of the AI interaction (prompts, outputs, timestamps). If it's a vendor breach, invoke the DPA's breach notification provisions and demand an incident report. Activate the incident response team: Incident Lead, IT/security, the responsible attorney, and the relevant practice group leader.

Hour 4-12: Assessment. Determine the full scope. For accuracy failures: identify every document that used the same AI workflow and may contain similar errors. For data exposure: determine exactly what data was compromised, which clients are affected, and whether the data was exfiltrated. For privilege breaches: assess whether the disclosure meets the legal standard for waiver in the relevant jurisdiction.

Hour 12-24: Initial notification decisions. Based on the assessment, determine who must be notified: affected clients, courts (if inaccurate filings were made), bar authorities (if professional conduct rules were violated), regulators (if data breach notification laws apply), and the firm's malpractice carrier.
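The hour-by-hour windows lend themselves to a simple deadline tracker. The sketch below assumes the phase names and cutoffs as described in this protocol; any real implementation would plug into the firm's matter management or ticketing system.

```python
from datetime import datetime, timedelta, timezone

# Phase deadlines in hours from detection (hour 0), per the protocol above.
PHASE_DEADLINES_HOURS = {
    "triage": 1,                   # Hour 0-1: detection and triage
    "containment": 4,              # Hour 1-4: stop the bleeding
    "assessment": 12,              # Hour 4-12: determine the full scope
    "notification_decisions": 24,  # Hour 12-24: decide who must be notified
}


def overdue_phases(detected_at: datetime, completed: set[str]) -> list[str]:
    """Return phases whose deadline has passed without being marked complete.

    `detected_at` must be timezone-aware so it compares cleanly with UTC now.
    """
    now = datetime.now(timezone.utc)
    return [
        phase
        for phase, hours in PHASE_DEADLINES_HOURS.items()
        if phase not in completed and now > detected_at + timedelta(hours=hours)
    ]


# Example: an incident detected five hours ago with only triage done
# is already past its containment deadline.
detected = datetime.now(timezone.utc) - timedelta(hours=5)
print(overdue_phases(detected, completed={"triage"}))  # ['containment']
```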

Who to Notify and When

Notification obligations vary by incident category and jurisdiction. Get this wrong and you compound the original problem.

Courts: If a hallucinated citation or fabricated legal reasoning was included in a court filing, you must notify the court immediately. ABA Model Rule 3.3(a)(1) prohibits knowingly making false statements of law to a tribunal. File a corrective notice, withdraw the problematic filing if possible, and provide accurate authorities. Do this proactively — courts impose significantly lighter sanctions on lawyers who self-report than on those caught by opposing counsel or judicial clerks.

Clients: ABA Model Rule 1.4 requires keeping clients informed of material developments. An AI incident affecting their matter is material. Notify affected clients promptly with a clear explanation of what happened, what data was affected, and what remediation steps are being taken. Don't sugarcoat it.

Bar authorities: Whether you must report depends on the incident's severity and your jurisdiction's rules. Consult your ethics officer. If the incident involves a clear professional conduct violation, proactive reporting typically results in better outcomes than waiting for a complaint.

Malpractice carrier: Notify your carrier as soon as you identify potential liability exposure. Most policies require prompt notification, and failure to notify can jeopardize coverage. Don't wait until a client files a claim.

Regulators: If client personal data was compromised, state breach notification laws (all 50 states have them) may require notification within specific timeframes — typically 30-60 days, but some states require notification within 72 hours. If EU personal data is involved, GDPR requires notification within 72 hours.
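Because the windows differ by regime, even a small deadline calculator helps avoid missing one. In the sketch below, the GDPR 72-hour window is statutory (Article 33); the state-law entries are placeholders that must be replaced with the actual statute for each affected state.

```python
from datetime import datetime, timedelta

# Notification windows measured from discovery. GDPR's 72-hour window to the
# supervisory authority is statutory; the state-law entries are placeholders
# that depend on each affected state's breach notification statute.
NOTIFICATION_WINDOWS = {
    "gdpr_supervisory_authority": timedelta(hours=72),
    "state_breach_law_typical": timedelta(days=30),  # many states: 30-60 days
    "state_breach_law_strict": timedelta(hours=72),  # some states: 72 hours
}


def notification_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Compute the latest permissible notification time under each regime."""
    return {regime: discovered_at + window
            for regime, window in NOTIFICATION_WINDOWS.items()}
```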

Documentation: Building Your Defense During the Incident

Everything you do during incident response must be documented. This documentation serves three purposes: it demonstrates compliance with your professional obligations, it provides evidence for any resulting litigation or disciplinary proceeding, and it enables post-incident analysis to prevent recurrence.

Incident log: Start a contemporaneous log the moment the incident is reported. Record timestamps, actions taken, decisions made, and the rationale for each decision. Include who was notified, when, and what they were told.

Preserve AI evidence: Screenshot or export the AI interaction that caused the incident — prompts, outputs, model version, timestamps. AI systems may not retain this data indefinitely, and you need it for your investigation and any subsequent proceedings.

Communication records: Preserve all communications related to the incident — emails, messages, meeting notes. Apply attorney-client privilege and work product protections to your internal investigation materials.

Remediation documentation: Document every corrective step: amended filings, client notifications, policy changes, additional training deployed. This creates the narrative that the firm responded responsibly and took concrete action.

Root cause analysis: Within 48 hours of containment, begin documenting why the incident happened. Was it a tool failure? A process failure? A training gap? A policy violation? The root cause drives your remediation plan and policy updates.

Keep all incident documentation for at least seven years: statutes of limitations for legal malpractice claims vary by state, and disciplinary proceedings can have long lookback periods.
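A contemporaneous log is easiest to defend if it is append-only and timestamped from the start. The helper below is a minimal sketch, assuming a JSON-lines file as the log format: it records who did what, when, and why, which is exactly the record a court or disciplinary panel will ask for.

```python
import json
from datetime import datetime, timezone


def append_log_entry(log_path: str, actor: str, action: str, rationale: str) -> None:
    """Append a timestamped JSON-lines entry to the contemporaneous incident log.

    Appending (never rewriting) preserves the chronological record that
    demonstrates how the firm responded, decision by decision.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # who took the action
        "action": action,        # what was done
        "rationale": rationale,  # why it was done
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example usage during containment:
append_log_entry(
    "incident_2026_001.jsonl",
    actor="AI Incident Lead",
    action="Suspended firm-wide access to the compromised tool",
    rationale="Category 2 data exposure; containment per playbook",
)
```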

Post-Incident: Remediation and Prevention

The incident is contained. Notifications are sent. Now prevent it from happening again.

Immediate process changes: If a hallucinated citation made it into a filing, your verification workflow failed. Identify where it broke down and fix it. Was the responsible attorney skipping verification? Was the verification tool inadequate? Did time pressure cause corners to be cut?

Tool evaluation: If the AI tool itself was the problem, evaluate whether it should remain in the firm's approved tool suite. A tool with unacceptable hallucination rates for legal tasks should be restricted or replaced. Some firms have adopted citation verification platforms like RealityCheck (launched at Legalweek 2026) as a secondary check.

Policy updates: Update your AI acceptable use policy to address the specific failure mode. If client data was entered into an unapproved tool, strengthen your technical controls (network blocking) and clarify the policy language.

Targeted training: Deploy incident-specific training to the affected practice group, along with firm-wide awareness communications about the incident (anonymized). Real incidents are the most powerful training material.

Committee review: Present the full incident report to the AI committee at its next meeting. The committee should evaluate whether systemic changes are needed — to the approved tool list, the vendor evaluation framework, the training program, or the monitoring controls.

Pattern analysis: Track incidents over time. If you're seeing the same type of failure repeatedly — as Gordon Rees experienced with three incidents in six months — the problem is systemic and requires structural changes, not just individual remediation.
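Pattern analysis can be as simple as counting recent incidents per category. The sketch below flags any category that crosses a threshold within a rolling window; the three-in-six-months threshold mirrors the Gordon Rees example but is a policy assumption, not a rule.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Policy assumption for illustration: three or more incidents of the same
# category within roughly six months indicates a systemic failure.
SYSTEMIC_THRESHOLD = 3
WINDOW = timedelta(days=182)


def systemic_categories(incidents: list[tuple[datetime, str]]) -> list[str]:
    """Flag categories that hit the threshold inside the rolling window.

    `incidents` is a list of (occurred_at, category) pairs with
    timezone-aware timestamps.
    """
    cutoff = datetime.now(timezone.utc) - WINDOW
    recent = Counter(category for occurred_at, category in incidents
                     if occurred_at >= cutoff)
    return [category for category, count in recent.items()
            if count >= SYSTEMIC_THRESHOLD]
```

Any category this function returns should go straight to the AI committee as a structural problem, not back to the individual attorney as a one-off.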

The Bottom Line: An AI incident response plan isn't something you write during the incident. It's a documented, rehearsed playbook that covers accuracy failures, data exposure, and privilege breaches — each with specific protocols for containment, notification, documentation, and remediation. The firms that survive AI incidents with their reputations and client relationships intact are the ones that respond in hours, not days, and can prove they had a plan before anything went wrong.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.