If you can't prove what AI tool you used, what prompt you entered, what output you received, and what verification you performed, you can't defend an AI-related malpractice claim. Documentation is the difference between a defensible decision and an indefensible one. Most firms have zero AI documentation protocol. Every AI interaction with client work is an undocumented liability.

The documentation requirement isn't just about malpractice defense. ABA Opinion 512's supervisory obligations under Rules 5.1 and 5.3 effectively require firms to maintain records of AI-assisted work. Bar regulators, malpractice insurers, and courts all expect a verifiable audit trail showing that human judgment, not blind reliance, produced the final work product.


What to Log: The Minimum Documentation Standard

Every AI interaction involving client work needs six data points recorded. This isn't optional — it's the minimum that makes an AI-assisted work product defensible:

1. Tool identification: Which AI platform and version. "ChatGPT" isn't sufficient. Log the specific product (GPT-4o, Claude 3.5 Sonnet, Lexis+ AI, CoCounsel), the access tier (enterprise vs. consumer), and the date. Models update; knowing which version produced the output matters.

2. Prompt content: What you asked. Save the actual prompt text. If the prompt contained client information, note that fact and confirm the tool has appropriate data handling agreements.

3. Output received: Save the AI's response, either as a screenshot or copied text. This is your baseline — what the AI actually said before human editing.

4. Verification performed: What you checked and how. "Verified citations" isn't sufficient. Log which citations you checked, which database you checked them in, and whether each was accurate.

5. Modifications made: What you changed from the AI output. Track the delta between AI draft and final work product. The more you modified, the stronger your professional judgment defense.

6. Reviewer identity: Who verified the AI output. Name, date, and sign-off. If a supervising attorney reviewed AI-assisted work, their review is part of the audit trail.
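The six data points above map naturally onto a single structured record. A minimal sketch in Python, with illustrative field names and an invented example entry (nothing here reflects a real matter or a specific platform's schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUsageRecord:
    """One logged AI interaction on client work. Field names are illustrative."""
    tool: str          # 1. specific product, version, access tier, and date
    prompt: str        # 2. what was asked (substance, not client identifiers)
    output_ref: str    # 3. where the saved AI response lives in the matter file
    verification: str  # 4. what was checked, where, and the result
    modifications: str # 5. delta between AI draft and final work product
    reviewer: str      # 6. who verified, with date and sign-off
    logged_on: date = field(default_factory=date.today)

# Hypothetical example entry
record = AIUsageRecord(
    tool="GPT-4o (enterprise tier), accessed 2025-03-01",
    prompt="Researched enforceability of non-compete clause under Texas law",
    output_ref="matter-1042/ai-outputs/2025-03-01-noncompete.txt",
    verification="Checked all four cited cases in Westlaw; all accurate",
    modifications="Rewrote analysis section; dropped two weak citations",
    reviewer="Supervising partner, signed off 2025-03-02",
)
```

Whether this lives in a spreadsheet row, a practice-management custom field, or a database table matters less than that every interaction produces one complete record.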

How to Log It: Practical Systems That Work

Documentation only works if it's easy enough that attorneys actually do it. Three approaches, ranked by firm size:

Small firms (1-10 attorneys): Use a shared spreadsheet or Airtable base. Columns for the six data points above. Each row is one AI interaction. Associates log entries as they work. Partner reviews weekly. Low-tech but functional. Store AI output files in a dedicated folder within the matter file.

Mid-size firms (10-100 attorneys): Integrate AI logging into your practice management system. Most modern platforms (Clio, PracticePanther, MyCase) support custom fields. Add AI-specific fields to matter records: AI tool used, verification status, reviewer sign-off. Some platforms now offer AI usage tracking modules.

Large firms (100+ attorneys): Enterprise AI platforms like CoCounsel and Harvey include built-in audit logging. Ensure those logs are retained according to your document retention policy and are accessible for malpractice defense. Supplement platform logs with matter-level documentation that connects AI outputs to specific work products.

Regardless of size: prompts and outputs should be stored as part of the matter file, not in personal browser history. If the AI interaction informed a client deliverable, it's a work paper.

AI Audit Trail Requirements: What Regulators and Insurers Want

Three audiences will evaluate your AI documentation, and each wants slightly different things:

Bar regulators want to see competence and supervision. They'll ask: Did the attorney understand the AI tool's limitations? Was AI output verified before being used? Were supervising attorneys aware of and overseeing AI use? Your documentation should demonstrate training (attorneys knew the risks), process (verification was systematic), and oversight (supervisors reviewed).

Malpractice insurers want to see reasonable care. They'll ask: Did the firm have an AI governance policy? Was the policy followed? Can you demonstrate that the specific AI output at issue was verified? Your documentation is your evidence of reasonable care — the standard for both coverage eligibility and malpractice defense. No documentation means no evidence of reasonable care.

Courts want to see candor and diligence. If a Rule 11 issue arises, the court will ask: What inquiry did you conduct before certifying this filing? Your verification log — showing which citations you checked, when, and in what database — is your Rule 11 defense. Courts have explicitly stated that documented verification processes are mitigating factors in sanctions decisions.

Prompt Logging: The Sensitive Part

Logging prompts creates a secondary risk: prompt logs containing client information become discoverable documents. This is the tension every firm needs to manage.

Prompt logs that contain client facts, case details, or privileged communications are themselves potentially privileged work product — but only if they were created in anticipation of litigation or for legal analysis purposes. General research prompts may not qualify for work product protection.

Best practices for prompt logging:

- Log the substance of the prompt, not necessarily verbatim text containing client identifiers. "Researched enforceability of non-compete clause under Texas law" is better than logging a prompt containing the client's name, the opposing party, and specific contract terms.
- If you must log verbatim prompts (for thoroughness), maintain prompt logs under the same confidentiality and access controls as other work papers.
- Treat prompt logs as part of the attorney work product file. Mark them as privileged and confidential.
- For prompts entered into consumer-grade AI tools, note whether the tool's terms allow data retention or training use. This documentation is critical if a confidentiality issue arises later.
- Never log prompts in tools or platforms that are accessible to non-privileged personnel. Your documentation system must maintain the same access controls as your case files.
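Stripping client identifiers before a prompt reaches the log can be partly automated. A minimal sketch, assuming the firm maintains a per-matter list of known identifiers; real redaction would need to be more robust than simple string matching:

```python
import re

def sanitize_prompt(prompt: str, client_identifiers: list[str]) -> str:
    """Replace known client identifiers with a placeholder before logging.
    A simple illustration; names missed by the list will still leak through."""
    sanitized = prompt
    for name in client_identifiers:
        # Case-insensitive literal match; re.escape guards special characters
        sanitized = re.sub(re.escape(name), "[REDACTED]", sanitized, flags=re.IGNORECASE)
    return sanitized

# Hypothetical example
log_entry = sanitize_prompt(
    "Is Acme Corp's non-compete with Jane Doe enforceable under Texas law?",
    client_identifiers=["Acme Corp", "Jane Doe"],
)
# log_entry: "Is [REDACTED]'s non-compete with [REDACTED] enforceable under Texas law?"
```

This preserves the substance ("non-compete, Texas law") that regulators and insurers care about while keeping identifying facts out of the log itself.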

Building Documentation Into Your Workflow, Not On Top of It

The biggest risk to any documentation protocol is attorney non-compliance. If logging feels like extra work, people won't do it. The solution is embedding documentation into the AI workflow itself, not adding it as an afterthought.

Practical integration methods:

- Template prompts: Create prompt templates that include a documentation header. The attorney fills in the template, which automatically captures the tool, date, matter number, and prompt content in a structured format.
- Post-output checklist: After receiving AI output, attorneys complete a 60-second checklist: citations verified (Y/N), holdings confirmed (Y/N), quotes checked (Y/N), modifications noted. Checkbox format, not narrative.
- Auto-capture tools: Some AI platforms offer API access that allows firms to automatically log prompts and outputs. If your platform supports it, build automatic logging into your infrastructure.
- Weekly audit: A designated person (paralegal, compliance coordinator) reviews AI logs weekly for completeness. Missing entries get flagged before they become a pattern.
- Tie documentation to billing: If AI time appears on a client bill, documentation of that AI use must exist. No documentation, no billing. This creates a natural enforcement mechanism — attorneys who want to bill for AI-assisted work must document the AI assistance.
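The auto-capture idea can be sketched as a thin wrapper around whatever API client your platform provides, so logging happens as a side effect of the call rather than a separate step. A hedged sketch: `ai_call` stands in for a real platform client, and the JSONL log format is illustrative, not any vendor's actual schema:

```python
import json
from datetime import datetime, timezone

def with_audit_log(ai_call, log_path: str):
    """Wrap any prompt -> output function so every call is appended to a JSONL log.
    `ai_call` is a placeholder for your platform's actual API client."""
    def logged_call(prompt: str, matter_id: str) -> str:
        output = ai_call(prompt)
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "matter_id": matter_id,
            "prompt": prompt,
            "output": output,
            "verified": False,  # flipped to True by the post-output checklist
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return output
    return logged_call
```

Because the wrapper returns the output unchanged, attorneys use the tool exactly as before; the audit trail accumulates automatically, and the weekly audit only has to check the `verified` flags.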

The Bottom Line: AI documentation isn't bureaucratic overhead — it's your malpractice defense, your Rule 11 protection, your insurance coverage evidence, and your bar compliance proof. Six data points per AI interaction. Build it into the workflow. If you can't prove you verified, you can't prove you were competent.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.