Park v. Kim is the first federal appellate case to address AI-generated fake citations. In January 2024, the Second Circuit referred attorney Jae Lee to its Grievance Panel after she cited a nonexistent case in her reply brief and admitted she had found it using ChatGPT. The ruling confirmed that Rule 11's verification obligations follow attorneys to every level of the federal court system, including appeals.
Background
The underlying case was a discrimination lawsuit filed in the Eastern District of New York. After the district court dismissed the case, attorney Jae Lee filed an appeal to the Second Circuit on behalf of her client.
During the briefing process, Lee needed to support a legal argument in her reply brief but couldn't find an on-point case through traditional legal research. Rather than adjusting her argument or acknowledging the gap, she turned to ChatGPT. The tool produced a citation that appeared to support her position.
Lee inserted the citation into her reply brief without verifying that the case existed in any legal database. She filed the brief with the Second Circuit, and the fabricated authority went before a three-judge panel of Judges Parker, Nathan, and Merriam.
What Happened
The Second Circuit panel identified the problem during its review of the briefs. The cited case didn't exist. When confronted, attorney Lee was forthright. She admitted she had used ChatGPT to locate the citation after her own research came up empty. She didn't try to cover it up or blame someone else.
The court issued a per curiam opinion addressing the fabricated citation directly. The panel noted that Lee's conduct fell below the basic obligations of counsel practicing before the court. Specifically, the court pointed to Rule 11's requirement that attorneys "read, and thereby confirm the existence and validity of, the legal authorities on which they rely."
Unlike the attorneys in Mata v. Avianca, who initially tried to defend their fabricated citations, Lee admitted the problem immediately, which likely spared her harsher consequences. But the court made clear that honesty after the fact doesn't excuse the original failure to verify.
The Ruling
The Second Circuit referred attorney Lee to the Court's Grievance Panel for further investigation and potential referral to the admissions committee. It imposed no monetary sanction, opting for the disciplinary referral route instead.
The per curiam opinion stated that citing a nonexistent case "suggests conduct that falls below the basic obligations of counsel." The court emphasized that Rule 11 requires attorneys to confirm the existence and validity of their cited authorities, full stop. The source of the citation, whether AI, a colleague's suggestion, or faulty memory, doesn't matter.
This was a deliberate escalation from the district court level. Where Judge Castel in Mata imposed a fine, the Second Circuit chose a path that could affect Lee's ability to practice before the court. A grievance referral carries the possibility of suspension or disbarment from the circuit.
Outcome: referral to the Court's Grievance Panel for investigation, with potential escalation to the admissions committee. No monetary sanction was imposed.
Why This Case Matters
Park v. Kim was the first time a federal appellate court addressed AI-generated fake citations. That distinction matters because appellate courts set binding precedent for district courts within their circuits. The Second Circuit covers New York, Connecticut, and Vermont, meaning every district court in those states now operates under this ruling.
The case also demonstrated that the AI citation problem wasn't limited to one outlier incident. Coming just months after Mata v. Avianca, Park v. Kim showed a pattern: attorneys across the country were plugging research gaps with unverified ChatGPT output.
The grievance referral rather than a monetary sanction sent a message that appellate courts view this conduct as a professional fitness issue, not just a procedural violation. Monetary fines can be absorbed. A grievance investigation threatens the attorney's license.
Lessons for Attorneys
Don't use AI as a last resort when legitimate research fails. If you can't find a case supporting your argument through Westlaw, Lexis, or other verified databases, the case probably doesn't exist. ChatGPT doesn't search legal databases. It generates text that looks like citations based on patterns in its training data. A plausible-sounding citation from ChatGPT is not a lead to follow up on. It's a fabrication.
If you do use AI in your research workflow, build verification into the process as a hard requirement, not an optional step. Before any AI-sourced citation goes into a brief, pull up the full text of the case in a legal database. Read the actual opinion. Confirm it says what the AI claims it says. This takes minutes per citation and prevents career-ending mistakes.
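For attorneys who script parts of their workflow, here is a minimal sketch of what that hard verification gate could look like, written in Python. Everything in it is illustrative: the regex only approximates federal reporter citations (a real tool would use a dedicated parser such as the open-source eyecite library), and exists_in_verified_database is a stub you would wire to Westlaw, Lexis, or another verified source.

```python
import re

# Approximates common federal reporter citations such as "915 F.3d 1076"
# or "598 F. Supp. 3d 222". Illustrative only; a production tool would
# use a dedicated citation parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\.\s?(?:2d|3d|4th)|F\.(?: Supp\.(?: 2d| 3d)?)?)"
    r"\s+\d{1,4}\b"
)

def exists_in_verified_database(citation: str) -> bool:
    # Stub: wire this to Westlaw, Lexis, or another verified source.
    # Returning False is the safe default: every citation is treated
    # as fabricated until a human confirms it in a real database.
    return False

def unverified_citations(brief_text: str) -> list[str]:
    """Return every citation in the brief that could not be confirmed."""
    return [
        cite
        for cite in sorted(set(CITATION_RE.findall(brief_text)))
        if not exists_in_verified_database(cite)
    ]

if __name__ == "__main__":
    # Made-up citation for demonstration purposes only.
    brief = "Plaintiff relies on Smith v. Jones, 123 F.4th 456 (2d Cir. 2024)."
    problems = unverified_citations(brief)
    # Hard gate: the brief does not get filed while this list is non-empty.
    if problems:
        raise SystemExit(f"Unverified citations, do not file: {problems}")
```

Defaulting to unverified inverts the trust model: the script can never wave through a citation a human hasn't positively confirmed, which is exactly the discipline the Second Circuit demanded.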
Appellate practice carries heightened stakes. District court sanctions are bad. Appellate grievance referrals are worse. The Second Circuit made clear it views AI citation fabrication as a question of attorney competence, not just carelessness. Attorneys practicing before any federal appellate court should treat AI verification with the same rigor they'd apply to any other aspect of appellate briefing.
The Bottom Line
Park v. Kim proved that the AI citation problem extends to the federal appellate level. The Second Circuit's grievance referral, rather than a fine, signaled that appellate courts treat AI fabrication as a fitness-to-practice issue, not just a Rule 11 technicality.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.