Wadsworth v. Walmart is the case that proved even major law firms with proprietary AI platforms aren't immune to hallucination problems. In February 2025, a federal judge in Wyoming revoked an attorney's pro hac vice admission and fined three Morgan & Morgan attorneys a total of $5,000 after eight of nine cited cases in their motions in limine turned out to be fabricated by the firm's own AI tool.


Background

The underlying case was a products liability action against Walmart and Jetson Electric Bikes, filed in the District of Wyoming. The plaintiff was represented by attorneys from Morgan & Morgan, one of the largest personal injury firms in the country.

Attorney Rudwin Ayala used the firm's in-house AI platform, MX2.law, to generate case law for motions in limine. Ayala prompted the AI to "add Federal Case law from Wyoming" to the motions. He didn't verify any of the output. The AI generated nine citations, and eight of them were completely fabricated.

Two other attorneys were involved in the filings. T. Michael Morgan, a supervising attorney, and Taly Goody, local counsel, both signed the motions without independently reviewing the citations. The filings went to Judge Kelly H. Rankin with fabricated authorities attached to three different motions.

Wadsworth v. Walmart Inc., 348 F.R.D. 489 (D. Wyo. 2025)
Court: U.S. District Court, District of Wyoming
Date: February 21, 2025
Category: AI Hallucination / Sanctions
Sanctions: $5,000 total ($3,000 + $1,000 + $1,000)
AI Case Law — Updated April 2026

What Happened

When the fabricated citations were identified, the court ordered an explanation. Ayala admitted he had used MX2.law, Morgan & Morgan's proprietary AI platform, to generate the case law. He acknowledged that he didn't verify any of the citations through traditional legal databases.

The fact that a major national firm's own AI tool produced the hallucinations was significant. This wasn't an attorney using consumer ChatGPT on the side. This was a firm-sanctioned AI platform, presumably built and maintained with legal research capabilities in mind. Eight out of nine citations were still fake.

Judge Rankin also examined the roles of the two co-signing attorneys. Morgan and Goody both signed the filings but neither independently verified the case citations. The court found that all three attorneys bore responsibility, with escalating culpability based on their respective roles in the process.


The Ruling

Judge Rankin revoked Ayala's pro hac vice admission, effectively removing him from the case. Ayala was also fined $3,000. Supervising attorney T. Michael Morgan and local counsel Taly Goody were each fined $1,000 for their roles in signing the filings without verification. Total sanctions across all three attorneys: $5,000.

The court held that all signing attorneys share responsibility for verifying the accuracy of filings, even when one attorney was the primary drafter. The pro hac vice revocation was particularly notable. It's a sanction that goes beyond money. It tells the attorney: you don't get to practice in this court anymore.

The ruling made clear that proprietary or firm-developed AI tools don't get a pass on verification requirements. The duty to verify citations applies regardless of whether the AI tool is a consumer product, a legal tech startup, or a platform built by the attorney's own firm.

Outcome: Pro hac vice admission revoked for Ayala, plus $5,000 in total fines across the three signing attorneys ($3,000 for Ayala; $1,000 each for Morgan and Goody).

Why This Case Matters

Wadsworth v. Walmart was the first case to revoke pro hac vice admission as a sanction for AI-generated fake citations. That sanction has real teeth for attorneys who practice across state lines, which describes most attorneys at national firms like Morgan & Morgan.

The case shattered the assumption that firm-built AI tools are safer than consumer AI. Morgan & Morgan invested in building MX2.law specifically for legal work, and in these filings it still fabricated eight of nine citations, an 89% error rate. This tells every firm considering an in-house AI platform that the tool's pedigree doesn't eliminate the hallucination problem.

The shared liability finding also changed the calculus for supervising attorneys and local counsel. Before Wadsworth, some attorneys treated the co-signing role as a formality. This ruling made clear that every attorney who signs a filing has an independent duty to verify every citation, regardless of who drafted it or what tool was used.


Lessons for Attorneys

Don't trust proprietary AI tools more than consumer ones. MX2.law is Morgan & Morgan's own platform, and it fabricated eight out of nine citations. Firm-developed, legal-specific, or enterprise AI tools all use language models that can hallucinate. The branding on the tool doesn't change the underlying technology. Verify everything, regardless of the source.

If you're a supervising attorney or local counsel signing someone else's filing, you own every word in it. Wadsworth imposed $1,000 fines on both the supervising attorney and local counsel, even though neither drafted the problematic content. Read the brief. Check the citations. If you don't have time to verify, don't sign. Your signature is your warranty to the court that the filing is accurate.

For firms building or licensing AI platforms, build mandatory verification workflows into the system. The AI output should be flagged as unverified by default, and the system should require affirmative confirmation that each citation has been checked against an authoritative database before it clears for filing. A prompt that says "add Federal Case law from Wyoming" with no verification gate is a sanctions risk waiting to happen.


The Bottom Line

Wadsworth v. Walmart proved that firm-built AI tools hallucinate just like consumer AI. The court revoked an attorney's pro hac vice admission and sanctioned all three signing attorneys, establishing that every attorney who signs a filing owns every citation in it.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.