The most common AI mistake lawyers make isn't using AI — it's trusting AI output without verification. The result is hallucinated citations, fabricated case holdings, and AI-generated legal arguments that sound authoritative but are completely wrong. Every sanctioned lawyer who used AI had the same problem: they treated AI output as legal research instead of treating it as a first draft that requires human verification.
The second most common mistake? Not using AI at all — and falling behind competitors who do. The path between those two mistakes is narrow but well-defined. Here are the specific errors to avoid.
Mistake #1: Not Verifying AI-Generated Citations
This is the mistake that ended careers and triggered 300+ judicial standing orders. In Mata v. Avianca, Steven Schwartz asked ChatGPT for cases supporting his client's position. ChatGPT generated six case citations that didn't exist. Schwartz didn't check them in Westlaw or Lexis. He filed them with the court. He got sanctioned, publicly humiliated, and became the poster child for AI malpractice.
The fix is embarrassingly simple: verify every citation. Every case name, every volume and page number, every holding. Not on Google Scholar — in Westlaw, Lexis, or another verified legal database. AI hallucinations create citations that look real — correct reporter format, plausible party names, legitimate-sounding holdings. You can't spot a hallucinated citation by reading it. You can only spot it by looking it up.
Time required: 2-3 minutes per citation. For a brief with 30 citations, that's 60-90 minutes of verification. Compare that to the weeks of sanctions hearings, bar disciplinary proceedings, and reputation damage from filing a brief with fake cases.
Mistake #2: Using Free AI Tiers for Client Work
Free versions of ChatGPT, Claude, and Gemini come with terms of service that should terrify any lawyer handling confidential information. Free tiers may use your inputs for model training, which means your client's confidential information could influence AI outputs for other users. That's not a theoretical risk — it's a confidentiality breach.
The specific risks:
- Privilege waiver: Inputting privileged communications into a free AI tool may waive attorney-client privilege. The information leaves the confidential attorney-client relationship and enters a third-party system without privilege protections.
- Duty of confidentiality: Rule 1.6 of the Model Rules of Professional Conduct prohibits revealing information relating to the representation without informed consent. Submitting client facts to an AI tool whose terms allow training on user inputs arguably violates this duty.
- Data retention: Free tiers may retain your inputs in ways that paid enterprise tiers don't. Your client's deal terms, medical records, or litigation strategy could persist in the AI provider's systems.
The fix: Use paid tiers with enterprise data protection. Enterprise versions of Claude, ChatGPT, and legal AI tools include terms that exclude your inputs from model training; on paid consumer plans like Claude Pro and ChatGPT Plus, confirm the data-training settings before submitting anything client-related. The roughly $20/month cost of a paid AI subscription is the cheapest malpractice prevention investment you'll ever make.
Mistake #3: Outsourcing Legal Judgment to AI
AI is a tool, not a lawyer. The third most common mistake is treating AI-generated analysis as the final word rather than a starting point for human judgment. AI can identify relevant cases, summarize holdings, and draft arguments. It can't evaluate which arguments are strategically best for your specific case, your specific judge, or your specific client's goals.
Real examples of judgment failures:
- AI recommending aggressive litigation strategy when the client's business relationship with the opposing party makes settlement the obvious play
- AI generating technically correct arguments that are politically or practically unwise in the specific jurisdiction
- AI missing the practical implications of legal positions — winning the motion but losing the case
- AI treating all legal questions as having clear answers, when the most valuable legal advice acknowledges uncertainty
The fix: Use AI for volume work (research, first drafts, document review) and reserve judgment calls for the experienced human lawyer. The framework: AI generates options. Lawyers choose between them. Never let AI make the final call on strategy, risk assessment, or client advisory.
Mistake #4: Ignoring AI Disclosure Requirements
Over 300 federal judges now require attorneys to disclose AI use in court filings. Ignoring these requirements — whether through ignorance or deliberate omission — is a fast track to sanctions and disciplinary referrals.
The common failure modes:
- Not checking whether the assigned judge has an AI disclosure standing order
- Assuming that using AI for research (as opposed to drafting) doesn't trigger disclosure requirements
- Disclosing AI use vaguely ("AI tools may have been used") instead of specifically
- Failing to update disclosure practices as judges issue new or revised orders
The fix: Build AI disclosure into your pre-filing checklist. Check the judge's standing orders before every filing. When in doubt, disclose. Over-disclosure has no downside. Under-disclosure can result in sanctions, referral to the state bar, and permanent damage to your credibility with the bench.
Mistake #5: Not Using AI at All
The lawyers most at risk in 2026 aren't the ones using AI badly — they're the ones not using it at all. While they spend 8 hours on research, their competitors finish the same task in 2 hours with AI assistance. While they review contracts manually at 10 pages per hour, AI-equipped firms review at 100 pages per hour. The productivity gap is real and widening.
The competitive risk is concrete:
- Clients are starting to ask whether their law firms use AI — and choosing firms that do
- Firms using AI can bid lower on fixed-fee work because their costs are lower
- AI-equipped lawyers produce more thorough work product because AI catches issues human review misses
- Associates at AI-forward firms develop skills faster because AI handles the rote work and they focus on analysis
The fix: Start with Claude Pro at $20/month. Use it for three tasks: research questions, first drafts of memos, and document summarization. Spend one week building the habit. Measure how much time you save. Then expand from there. The barrier to entry has never been lower.
The Bottom Line
Verify everything. Use paid tiers. Keep judgment calls human. Disclose when required. And for the love of your practice, start using AI. The sweet spot is narrow but clear: use AI aggressively for volume work, verify every output, maintain human judgment on strategy, and comply with every disclosure requirement. That's the entire playbook.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
