Every AI-generated legal citation must be verified against primary sources before it goes into a filing -- no exceptions, no shortcuts. Stanford's 2025 study found that even the best legal AI tools hallucinate in 17-33% of queries, which means that if you use AI regularly, you are all but guaranteed to encounter fabricated or inaccurate citations. The verification workflow below is the minimum standard courts expect, and it's what separates attorneys who use AI competently from attorneys who end up in sanctions opinions.

The good news: verification doesn't have to be slow. With the right workflow, you can verify a 15-citation brief in a few focused hours. The attorneys getting sanctioned aren't getting sanctioned because verification is hard. They're getting sanctioned because they skipped it entirely.


Step 1: Confirm the Case Exists

This is where you catch the most dangerous hallucinations -- cases that don't exist at all. AI can generate completely fabricated case names with realistic-sounding party names, docket numbers, and citations that point to nothing.

Pull every case on Westlaw or Lexis. Enter the exact citation the AI provided. If the case doesn't come up, it's fabricated. This sounds obvious, but Mata v. Avianca happened because an attorney skipped this step entirely.

Check the party names. AI sometimes generates real party names attached to the wrong case or combines elements of multiple real cases into a fake one. Confirm that the case at the citation matches the parties the AI identified.

Verify the court and date. AI hallucinations often get jurisdictional details wrong -- attributing a case to the wrong circuit, wrong district, or wrong year. Confirm the court and date match what the AI provided.

Time required: 1-2 minutes per citation. For a brief with 15 citations, that's 15-30 minutes. There's no justification for skipping this.

Step 2: Read the Actual Case and Verify the Holding

A case can be real but mischaracterized. AI frequently states holdings with misleading emphasis, omits critical qualifications, or attributes the wrong legal principle to a case. This is harder to catch than outright fabrication because the case exists -- the AI just got the substance wrong.

Read the relevant sections. You don't need to read every opinion cover to cover, but you must read the sections relevant to the proposition the AI cited the case for. Focus on the holding, the court's reasoning, and any qualifications or limitations the court stated.

Check direct quotations. If the AI provided a quote, search for that exact language in the opinion. AI-generated quotes often sound plausible but don't appear in the actual text. Misquoting a court is a credibility-destroying error.

Verify the proposition. The AI said this case stands for X. Does it? Read the holding carefully. Courts distinguish between holdings and dicta, and AI doesn't always get that distinction right. An argument built on dicta presented as holding is a weak argument that will be challenged.

Time required: 5-10 minutes per citation for targeted reading. This is the most time-intensive step, but it's also where you catch the errors that lead to sanctions.

Step 3: Shepardize and Confirm Good Law

AI training data has cutoff dates. A case that was good law when the AI was trained may have been overruled, distinguished, or superseded since then. Citing bad law is negligence regardless of whether AI suggested it.

Run Shepard's (Lexis) or KeyCite (Westlaw) on every cited case. Look for negative treatment -- overruled, reversed, criticized, distinguished. A red signal on Shepard's or a red flag on KeyCite means the case needs closer examination before you cite it.

Check for subsequent legislative action. AI won't know about statutes that were amended after its training cutoff. If your case involves statutory interpretation, verify the statute is still in its cited form.

Verify parallel citations. If you're citing state court cases, confirm the parallel citations are correct. AI sometimes generates the wrong reporter volume or page number for parallel citations.

Time required: 2-3 minutes per citation using Shepard's or KeyCite. This is fast, mechanical, and catches a category of error that AI cannot self-correct.

The Clearbrief Alternative: Citation Verification by Architecture

Clearbrief deserves special mention because it addresses the hallucination problem through architecture rather than process. Unlike generative AI tools that produce text and hope it's accurate, Clearbrief is designed so it can't hallucinate citations: it only references documents you provide or that exist in verified databases.

Here's how it works: you feed Clearbrief your draft, and it checks every citation against its verified database. It identifies unsupported assertions, flags citations that don't match the proposition they're cited for, and verifies quotation accuracy. It doesn't generate legal text -- it verifies legal text against primary sources.

This doesn't eliminate the need for attorney review. You still need to read the cases and exercise professional judgment. But Clearbrief catches the mechanical errors -- fabricated citations, misquoted text, bad law -- that human review sometimes misses and that AI-generated content frequently contains.

For firms that use generative AI for drafting, running the output through Clearbrief before filing is an efficient verification layer. It catches what the generative AI got wrong before you have to defend it in court.

The Complete Verification Workflow

Here's the end-to-end workflow that meets the standard courts expect:

1. AI generates initial research or draft. Use whatever tool fits your practice -- Lexis+ AI, CoCounsel, Claude, ChatGPT Enterprise. The tool matters less than what you do next.

2. Extract all citations. Create a list of every case, statute, regulation, and secondary source the AI cited. Don't verify inline -- work from a master list so nothing gets missed.

3. Confirm existence (Westlaw/Lexis). Pull every case. Confirm it's real, correctly cited, and from the right court and year. Flag anything that doesn't check out. Estimated time: 1-2 min per citation.

4. Read and verify holdings. For every case that exists, read the relevant sections and confirm the AI's characterization is accurate. Check quotations. Estimated time: 5-10 min per citation.

5. Shepardize/KeyCite. Run every case through Shepard's or KeyCite. Confirm it's still good law. Check for relevant negative treatment. Estimated time: 2-3 min per citation.

6. Run through Clearbrief (optional but recommended). If available, run your final draft through Clearbrief for an automated verification layer. Catches mechanical errors human review may miss.

7. Document the verification. Log what was checked, when, and by whom. This is your defense if the work product is ever challenged.
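Step 2's master list can be seeded automatically before the manual pass. The Python sketch below is a hypothetical illustration only -- the regex covers a handful of common federal reporter formats and is nowhere near a complete Bluebook parser, so it supplements, never replaces, reading the draft yourself.

```python
import re

# Simplified pattern for a few common federal reporter citations
# (e.g. "410 U.S. 113", "925 F.3d 1291", "678 F. Supp. 2d 456").
# Real citation formats vary far more widely than this; treat it as a sketch.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                        # volume
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"  # reporter
    r"\s+\d{1,4}\b"                                        # first page
)

def extract_citations(draft_text: str) -> list[str]:
    """Return a deduplicated master list of citations found in the draft,
    in order of first appearance."""
    seen: list[str] = []
    for match in CITATION_RE.findall(draft_text):
        if match not in seen:
            seen.append(match)
    return seen
```

A list like this is a starting checklist for steps 3-5; anything the regex misses still has to be caught by eye, which is why the master list should be reviewed against the draft before verification begins.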

Total time for a 15-citation brief: 2-4 hours. That's a fraction of the time AI saved you in research and drafting. There's no ethical shortcut around this process.
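Step 7's verification log doesn't require specialized software -- a simple structured record is enough. The sketch below shows one hypothetical shape for such a log as a CSV file; the field names and values are illustrative, not any bar-mandated format.

```python
import csv
from datetime import date

# Illustrative per-citation verification record (step 7): what was
# checked, when, and by whom. Field names are hypothetical.
FIELDS = ["citation", "exists", "holding_verified", "keycite_clean",
          "checked_by", "checked_on", "notes"]

def write_verification_log(path: str, entries: list[dict]) -> None:
    """Write one row per citation so the log documents the verification."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(entries)

entry = {
    "citation": "410 U.S. 113",
    "exists": "yes",
    "holding_verified": "yes",
    "keycite_clean": "yes",
    "checked_by": "J. Associate",
    "checked_on": date.today().isoformat(),
    "notes": "Quoted language confirmed verbatim in the opinion.",
}
write_verification_log("verification_log.csv", [entry])
```

One row per citation, filled in as each step completes, is exactly the record you want to be able to produce if the work product is ever challenged.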

The Bottom Line: Verify every AI citation in three steps -- confirm it exists, read the case and check the holding, and Shepardize it -- plus run your draft through Clearbrief if available; this takes 2-4 hours for a typical brief and is the minimum standard courts expect.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.