Mata v. Avianca is the case that started it all. In June 2023, a federal judge in Manhattan sanctioned two attorneys $5,000 for filing a brief stuffed with six completely fabricated case citations generated by ChatGPT. Every attorney using AI for legal research needs to know this case because it's cited in virtually every AI sanctions ruling that followed.
Background
Roberto Mata sued Avianca Airlines for personal injuries sustained on a flight. His attorneys, Peter LoDuca and Steven Schwartz of the firm Levidow, Levidow & Oberman, were tasked with opposing Avianca's motion to dismiss the case. Schwartz turned to ChatGPT to research case law supporting their opposition brief.
The problem wasn't that Schwartz used AI. The problem was what happened next. ChatGPT generated six case citations that looked legitimate on the surface, complete with reporter citations, page numbers, and quotations from the supposed opinions. Schwartz didn't check whether any of the cases existed. He dropped them into the brief, LoDuca signed and filed it, and neither attorney verified a single citation.
When Avianca's counsel couldn't locate the cited authorities, they flagged the issue with the court. Rather than coming clean, Schwartz and LoDuca doubled down. Schwartz even asked ChatGPT whether the cases were real, and the chatbot confirmed they were. It wasn't until Judge Kevin Castel demanded the full text of the cited opinions that the truth came out.
What Happened
Schwartz submitted an affidavit to the court in which he described his ChatGPT usage and expressed regret. He said he was "unaware that its content could be false." He included screenshots showing his ChatGPT conversation, including the moment he asked the tool to confirm the cases were real and it assured him they were.
Judge Castel held an evidentiary hearing on June 8, 2023. Both attorneys appeared and testified. The court found that six of the cited cases were entirely fabricated. None existed in any legal database. The fake opinions contained invented judicial reasoning, fabricated quotations, and citations to other nonexistent decisions. Some even attributed rulings to real judges who never wrote them.
The court also noted that the attorneys had been given multiple opportunities to correct the record and failed to act promptly. The doubling-down period between the initial flag and the eventual admission made things significantly worse.
The Ruling
On June 22, 2023, Judge Castel sanctioned LoDuca, Schwartz, and their firm $5,000, jointly and severally, payable to the court registry. The court found the attorneys acted with "subjective bad faith" sufficient for sanctions under Federal Rule of Civil Procedure 11.
The court held that attorneys have an affirmative duty to verify the accuracy of all legal citations before filing. Using an AI tool doesn't change this obligation. Schwartz's claim that he didn't know ChatGPT could fabricate information wasn't a defense. The court noted that "technological incompetence is not a defense to a Rule 11 violation."
Beyond the fine, the court ordered both attorneys to send notification letters to the plaintiff (their own client) and to each judge who was falsely identified as the author of the fabricated opinions. This public accountability component carried reputational consequences that far exceeded the monetary penalty.
Why This Case Matters
Mata v. Avianca became the most cited AI case in legal history almost overnight. It put every attorney in the country on notice that AI-generated legal research can include completely invented authorities, and that filing those fabrications carries real consequences.
The case triggered a nationwide wave of judicial standing orders requiring AI disclosure and verification. Within months, hundreds of federal and state judges issued rules addressing AI use in court filings. Judge Brantley Starr in the Northern District of Texas issued the first such order within days of the fake citations making headlines, weeks before the Mata sanctions came down.
The ruling also exposed a critical gap in legal education and bar requirements. Most attorneys in 2023 had no training on AI tools, no understanding of hallucination risks, and no firm-level policies governing AI use. Mata v. Avianca forced the profession to confront those gaps head-on.
Lessons for Attorneys
The first lesson is obvious but bears repeating: never file a citation you haven't personally verified in an authoritative legal source, whether Westlaw, Lexis, Fastcase, Google Scholar, or the court record itself on PACER. If a case doesn't appear in any of these sources, it doesn't exist. ChatGPT and similar tools routinely generate plausible-looking but fictional citations.
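If your firm wants an automated first pass on top of manual checking, CourtListener's free citation lookup API was built partly in response to cases like this one. Below is a minimal Python sketch; the endpoint URL and the response fields it reads ("citation", "status", "clusters") are assumptions to confirm against CourtListener's current API documentation before relying on it.

```python
# First-pass screen of a draft brief's citations against CourtListener's
# citation-lookup endpoint. ASSUMPTIONS: the endpoint URL and the response
# fields read below ("citation", "status", "clusters") -- confirm both
# against the current CourtListener API docs.
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def unresolved_citations(brief_text: str) -> list[dict]:
    """POST the brief text; the service extracts each citation it finds
    and reports whether it matches a real opinion in its database."""
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    # Keep only citations the service could not match to any opinion --
    # the exact failure mode behind the Mata v. Avianca brief.
    return [
        hit for hit in resp.json()
        if hit.get("status") != 200 or not hit.get("clusters")
    ]

if __name__ == "__main__":
    with open("draft_brief.txt") as f:
        brief = f.read()
    for hit in unresolved_citations(brief):
        print("UNRESOLVED:", hit.get("citation"))
```

A clean pass only tells you each cited case exists somewhere. It does not tell you the case says what your brief claims it says. You still have to read the opinion.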
The second lesson is about the cover-up. When opposing counsel flagged the nonexistent cases, the right move was immediate disclosure and a motion to withdraw the brief. Instead, the attorneys asked ChatGPT to verify its own output (which it confirmed) and delayed disclosure. This turned a fixable mistake into a sanctions-worthy offense. Courts treat the failure to promptly correct the record as independent misconduct.
The third lesson is about firm-level responsibility. LoDuca signed the brief but relied entirely on Schwartz's research. Rule 11 makes every signing attorney personally responsible for the accuracy of what they file. If you're signing a brief someone else drafted, you own every citation in it.
The Bottom Line
Mata v. Avianca established the baseline rule for AI in legal practice: the attorney, not the algorithm, is responsible for every word filed with the court. If you use AI for research, verify every citation in a real legal database before it goes into a brief.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.