You get sanctioned, and the amounts aren't trivial -- documented penalties range from $5,000 to $109,700 per incident. Courts have also dismissed cases, referred attorneys to state bar disciplinary boards, and required lawyers to personally notify clients of the misconduct. The era of judges looking the other way on undisclosed AI use ended in 2023 with Mata v. Avianca, and subsequent rulings have steadily escalated the consequences.

The financial hit is only the beginning. Non-disclosure creates a cascade: the sanction becomes a reportable event, which triggers malpractice insurance implications, which damages your reputation with every judge in the jurisdiction. Attorneys who've been sanctioned for AI misconduct report that opposing counsel now cites their sanctions in future cases to attack credibility. One filing can follow you for a decade.


The Sanctions Record: Real Numbers from Real Cases

Mata v. Avianca (S.D.N.Y. 2023): Attorney Steven Schwartz submitted a brief containing six entirely fabricated case citations generated by ChatGPT. Judge Castel imposed $5,000 sanctions on each attorney involved and required them to send copies of the court's opinion to every judge falsely identified as authoring a fake case. This was the case that started it all.

Whiting v. City of Athens (N.D. Ga. 2024): A Georgia attorney submitted multiple AI-generated filings with fabricated citations. Sanctions hit $15,000, and the court ordered the attorney to complete a CLE course on AI ethics.

Couvrette v. Rexel USA (D. Ariz. 2025): The court's sanctions calculation produced a $109,700 penalty -- the largest AI-related sanction documented in federal court. The court found that the attorney's reliance on AI without verification constituted a fundamental failure of professional duty.

Park v. Kim (E.D.N.Y. 2024): The court dismissed the case entirely after discovering AI-fabricated citations in the complaint. Not sanctions on top of a pending case -- the case was over.

Beyond Money: The Full Consequence Chain

Financial sanctions are the headline, but they're often the least damaging consequence.

Bar Referral. Courts that find AI misconduct increasingly refer the matter to state bar disciplinary authorities. A bar complaint is a separate proceeding with its own potential outcomes -- reprimand, suspension, or disbarment. The bar doesn't need to find that you fabricated citations intentionally. Negligent failure to verify is enough for a competence violation under Rule 1.1.

Case Dismissal. Park v. Kim showed that courts will terminate litigation over AI misconduct. If your case is dismissed because you submitted fabricated AI content, you haven't just harmed yourself -- you've harmed your client's legal rights. That's a malpractice claim waiting to happen.

Client Notification Orders. In Mata, the court required attorneys to notify every party and judge affected by the fabricated citations. Several subsequent courts have adopted this requirement. Imagine explaining to a client that you have to inform them -- and potentially opposing counsel -- that your filing contained AI-generated fabrications.

Malpractice Insurance. A sanction for AI misconduct is a reportable event to your malpractice carrier. Expect your premiums to increase. If the misconduct leads to a client claim, expect your carrier to scrutinize whether your AI use falls within covered activities.

The Risk Calculation: What You're Actually Betting

Attorneys who skip AI disclosure are making an implicit bet: that no one will find out, that the judge won't care, and that the AI output is accurate. Here's why all three bets lose.

Detection is getting easier. Opposing counsel now routinely checks citations in AI-era filings. AI detection tools are improving. Judicial clerks are trained to spot hallucination patterns. The days when AI-generated content could pass without scrutiny are over.

Judges care more every quarter. 58.3% of federal courts have adopted AI governance as of March 2026. That number was near zero in 2023. Every new standing order means another jurisdiction where non-disclosure is an explicit violation, not just a bad look.

AI still hallucinates. Stanford's 2025 study found that even purpose-built legal AI tools hallucinate at rates between 17% and 33%. Consumer tools are worse. If you're using AI without disclosing and without verifying, each output carries roughly a one-in-six to one-in-three chance of containing fabricated information. That's not a rounding error -- over a career of filings, it's a certainty.
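To make those odds concrete, here is a back-of-envelope sketch -- a hypothetical model of my own, not part of the Stanford study -- that treats each AI-drafted citation as an independent draw at the study's reported per-output hallucination rates. The function name and the independence assumption are illustrative only.

```python
def p_any_fabrication(per_item_rate: float, n_items: int) -> float:
    """P(at least one fabricated item) = 1 - (1 - r)^n,
    assuming each item hallucinates independently (a simplification)."""
    return 1.0 - (1.0 - per_item_rate) ** n_items

# Stanford's reported range for purpose-built legal tools: 17%-33%.
for rate in (0.17, 0.33):
    for n_citations in (1, 5, 15):
        p = p_any_fabrication(rate, n_citations)
        print(f"rate={rate:.0%}  citations={n_citations:2d}  "
              f"P(>=1 fabrication)={p:.0%}")
```

Under these assumptions, even at the optimistic 17% rate, a brief with 15 unverified AI-drafted citations is almost certain (roughly 94%) to contain at least one fabrication -- which is why courts expect per-citation verification, not spot-checking.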

What Courts Actually Want to See

Courts imposing sanctions for AI non-disclosure are sending a clear message: they don't want to ban AI, they want transparency. Every major sanctions opinion includes language acknowledging that AI can be a useful tool when used responsibly.

What "responsibly" means in practice:

Disclose the tool and its role. Tell the court what AI you used and how you used it. A one-paragraph certification satisfies this in most jurisdictions.

Verify every output. The verification obligation is absolute. Every citation must be checked against primary sources. Every factual assertion must be confirmed. "I trusted the AI" has never been accepted as a defense.

Take responsibility. The attorney's signature on a filing is a personal guarantee. AI doesn't change that. If anything, courts have been explicit that AI use increases the attorney's duty of care, not decreases it.

The Trajectory: Penalties Are Getting Worse

Track the trend line. In 2023, Mata v. Avianca produced $5,000 per attorney. By 2024, Whiting v. Athens hit $15,000. By 2025, Couvrette reached $109,700. Courts are not becoming more lenient -- they're escalating because attorneys keep making the same mistake after being put on notice.

The next wave of sanctions will be harsher for a simple reason: courts can no longer accept the "I didn't know" defense. ABA Formal Opinion 512 put every licensed attorney on notice in 2024. The proliferation of standing orders across 58.3% of federal courts eliminated jurisdictional ignorance. An attorney sanctioned for AI non-disclosure in 2026 is an attorney who ignored two years of explicit warnings from every authority in the profession.

The Bottom Line: Non-disclosure of AI use has produced sanctions from $5,000 to $109,700, case dismissals, bar referrals, and career-damaging consequences -- and the penalties are escalating, not stabilizing.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.