Noland v. Land of the Free is the case that buried the multi-AI verification strategy for good. In September 2025, a California appellate court fined attorney Amir Mostafavi $10,000 after he ran his brief through four different AI platforms (ChatGPT, Claude, Gemini, and Grok) and they amplified each other's fabrications instead of catching them. Twenty-one of twenty-three quotations from cited cases were fake.
Background
The case reached the California Court of Appeal, Second District, on appeal from a trial court judgment. Attorney Amir Mostafavi represented the appellant and drafted the opening appellate brief.
Mostafavi used ChatGPT to enhance his appellate brief. But he didn't stop there. Aware of the risks of AI hallucination, and likely aware of cases like Mata v. Avianca, he adopted what he apparently believed was a responsible approach: cross-checking the output against multiple AI platforms.
He ran the ChatGPT output through Claude, then Gemini, then Grok. Four AI platforms in total, each meant to catch the errors of the others. The theory was that if one AI hallucinated, surely one of the other three would flag the problem. The theory was wrong.
What Happened
Instead of catching fabrications, the AI platforms amplified them. Each successive model treated the previous model's output as plausible input and either confirmed or built upon the fabricated content. The result was a brief in which twenty-one of twenty-three quotations attributed to cited cases were fabricated. The cases themselves were real; the language quoted from them was invented.
The three-judge appellate panel identified the fabricated quotations during its review. The court found that the brief cited real cases but attributed fake language to them, a particularly insidious form of AI hallucination because the case names would check out in a database even though the quoted text was invented.
Mostafavi's four-platform approach created a false sense of security. He believed he had implemented quality controls. In reality, he had created an echo chamber where AI models confirmed each other's fabrications. No human ever checked the actual opinions to see whether the quoted language appeared in them.
The Ruling
The court imposed a $10,000 sanction on Mostafavi for filing a frivolous appeal, violating court rules, citing fake quotations, and wasting the court's and taxpayers' time. This was the largest AI-related sanction from a California court at the time.
The court ordered the opinion published, which in California makes it citable precedent binding on every trial court in the state. The publication order was itself a form of sanction, ensuring the ruling would be widely cited and that Mostafavi's conduct would be permanently on the record.
The court's holding was direct: using multiple AI platforms to cross-check each other does not constitute adequate verification. AI tools can amplify rather than correct each other's fabrications. The only adequate verification is human review against the original source material.
Why This Case Matters
Noland v. Land of the Free closed the last theoretical escape hatch for attorneys hoping to use AI without manual verification. After Gauthier v. Goodyear rejected the two-AI approach, some attorneys argued that more AI layers would solve the problem. Noland proved the opposite. Four AI platforms, four failures, twenty-one fabricated quotations.
The published opinion creates binding precedent in California, the largest legal market in the country. Every trial court in the state must now follow the principle that multi-AI cross-checking is not verification, and every California attorney is on notice.
The case also revealed a more sophisticated form of AI hallucination: fabricated quotations from real cases. Unlike earlier cases where entire citations were invented, Mostafavi's brief cited cases that existed but attributed fake language to them. This is harder to catch with a simple database check because the case name and citation will verify. You have to read the actual opinion to confirm the quoted text appears in it.
Lessons for Attorneys
More AI doesn't mean more accuracy. The instinct to add another AI layer for quality control is understandable but fundamentally misguided. Language models don't fact-check. They generate plausible text. Running plausible fabrications through another plausibility generator produces more plausible fabrications, not corrections. There is no number of AI tools that substitutes for reading the source.
Watch for the sophisticated hallucination: real cases with fake quotations. This is harder to catch than completely invented citations. A case name and reporter citation will check out in Westlaw or Lexis. But the quoted language won't appear in the actual opinion. The only way to verify is to pull the full text of every cited case and confirm that every quoted passage actually exists in the opinion. Word-for-word. Page by page.
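For readers who want to automate the mechanical half of that check, here is a minimal Python sketch, assuming you have already pulled each opinion's full text into a local file. The opinion.txt file name, the sample quote, and the verify_quotes helper are all hypothetical, not features of any real research platform.

```python
# Minimal sketch of the mechanical half of quote verification, assuming the
# opinion's full text has already been saved locally (e.g., exported from a
# legal research database). All names here are illustrative.
import re

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so formatting
    differences don't mask a genuine word-for-word match."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(opinion_text: str, quotes: list[str]) -> list[str]:
    """Return the quoted passages that do NOT appear verbatim in the opinion."""
    haystack = normalize(opinion_text)
    return [q for q in quotes if normalize(q) not in haystack]

# Illustrative usage with a hypothetical opinion file and quotation.
with open("opinion.txt", encoding="utf-8") as f:
    opinion = f.read()

for quote in verify_quotes(opinion, [
    "the duty of care extends to foreseeable third parties",
]):
    print(f"NOT FOUND in opinion: {quote!r}")
```

A failed match is a flag for human review, not proof of fabrication: a quotation with ellipses or bracketed alterations will fail naive matching even when it is accurate, and only reading the opinion confirms that an exact match is quoted in context. The script narrows the pile; the attorney still reads the cases.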
For appellate practitioners specifically, the stakes are higher. Appellate courts scrutinize briefs more carefully than trial courts. The judges and their clerks are reading the cited authorities. Published opinions carry precedential weight and become permanent legal records. An AI hallucination in an appellate brief doesn't just risk sanctions. It risks creating a published opinion that documents your incompetence for every future attorney to find.
The Bottom Line
Four AI platforms confirmed each other's fabrications instead of catching them. Noland v. Land of the Free established that no amount of AI cross-checking substitutes for human verification, and California now has binding precedent to enforce it.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.