Over 300 judges now require AI disclosure in court filings, and no two rules are alike. Some require certification at filing. Some require separate notices. Some exempt Westlaw and Lexis. Some don't. Some apply to all generative AI. Some only apply to AI used for legal research. Miss a disclosure requirement and you're facing sanctions in a court where the judge specifically told you to disclose.
This is the one-pager every litigator needs. Federal vs. state requirements, judge-level standing orders, what 'disclosure' actually means in each jurisdiction, and the certification language that satisfies the strictest courts. Print it. Pin it. Check it before every filing.
The Federal Landscape: No Uniform Rule, 300+ Individual Orders
There's no Federal Rule of Civil Procedure requiring AI disclosure. Instead, individual judges have issued standing orders — and they vary wildly.

Judge Brantley Starr (N.D. Tex.) was one of the first: attorneys must certify at the time of appearance whether generative AI was used in preparing filings. If AI was used, they must confirm a human verified all statements and citations. Judge Evelyn Padin (D.N.J.) requires disclosure whenever AI is used in connection with court submissions; attorneys must identify the specific tool used, describe which parts of the filing were affected, and certify human review. The Charlotte Division (W.D.N.C.) requires certification that either no generative AI was used (with exceptions for standard legal research platforms like Westlaw and Lexis) or that every statement and citation was verified by a human.

As of April 2026, Bloomberg Law tracks these orders in a federal comparison table. The number has grown from a handful in 2023 to over 300, with the first prosecutor sanction and first circuit court sanction both arriving in 2026. Hintyr tracks 300+ rules across federal and state courts. The variance between orders means you can't create one disclosure template and use it everywhere — you have to check the specific judge's order for every case.
State Court Requirements: The Patchwork Gets Thicker
State courts are adding AI disclosure requirements at an accelerating pace. New York has emerged as a leader, and navigating its courts requires attention to multiple levels: state-level guidance, local rules, and individual judge orders can all apply. Some New York judges require disclosure of any AI use; others limit it to generative AI specifically. Texas state courts have followed Judge Starr's lead from the federal side, with multiple state judges adopting similar certification requirements. Colorado, with its AI Act taking effect June 2026, is creating a regulatory environment that will likely push state courts toward formal disclosure requirements tied to the broader compliance framework.

The pattern is clear: state courts are moving faster than formal rulemaking processes. Individual judges aren't waiting for their state supreme court to adopt a uniform rule — they're issuing standing orders based on what they see happening in their courtrooms. For litigators practicing in multiple states, checking for AI disclosure requirements has to become as routine as checking local rules for formatting and filing requirements.
What 'Disclosure' Actually Means: The Three Models
Not all disclosure requirements are equal. They fall into three models.

Model 1: Certification only. The attorney certifies that either no AI was used or that all AI-generated content was verified by a human. This is the lightest requirement — a single sentence in the filing or a separate certification. Most federal standing orders follow this model.

Model 2: Specific disclosure. The attorney must identify which AI tool was used, which portions of the filing were AI-assisted, and certify human review. Judge Padin's order follows this model. It requires more transparency and creates a documented record of exactly how AI contributed to the work product.

Model 3: Prior approval. Some courts require advance permission before AI tools can be used in case preparation. This is the strictest model and the least common, but it exists. Under this model, using AI without prior court approval is a sanctionable offense regardless of whether the output was accurate.

The common thread across all three models: failing to check an AI-generated citation is universally treated as a failure of reasonable inquiry in 2026. The days of 'I didn't know the AI hallucinated that case' as a defense are over. Every model requires human verification of every citation.
The Westlaw/Lexis Exception: Where It Applies and Where It Doesn't
Several standing orders explicitly exempt standard legal research platforms — Westlaw, Lexis, Bloomberg Law — from disclosure requirements. The Charlotte Division's order is a clear example: it requires certification that no generative AI was used, 'with exceptions for standard legal research platforms.' But this exception is narrower than many attorneys assume.

What's exempted: using Westlaw or Lexis for traditional legal research — case search, statute lookup, Shepard's/KeyCite verification. These are established research tools that attorneys have used for decades.

What's NOT exempted: the AI-powered features within these platforms. Westlaw's CoCounsel, Lexis's Protege with Harvey integration, and Bloomberg Law's AI analysis tools are generative AI features built into traditional platforms. If a standing order requires disclosure of generative AI use, using CoCounsel to draft a research memo likely triggers the disclosure requirement even though it runs on the Westlaw platform.

The safe practice: when in doubt, disclose. Over-disclosure has no penalty; under-disclosure risks sanctions. If you used any AI feature — even within an exempted platform — that generated, summarized, or drafted content (as opposed to retrieving existing content), disclose it.
The Certification Language That Works Everywhere
Until there's a uniform rule, you need certification language that satisfies the strictest courts. Here's what covers you in virtually any jurisdiction.

If AI was NOT used: 'No generative artificial intelligence tool was used in the preparation, drafting, or research of this filing, its arguments, or its citations. All legal research was conducted using [Westlaw/Lexis/Bloomberg Law] traditional search functions.'

If AI WAS used: 'Generative artificial intelligence was used in the preparation of this filing. Specifically, [Tool Name] was used for [describe specific use — e.g., initial research on [topic], draft summarization of [documents], review of [contract/filing type]]. The undersigned attorney certifies that: (1) a human attorney reviewed all content generated or assisted by AI; (2) all citations, quotations, and legal authorities were independently verified as accurate and current using [Westlaw/Lexis/Bloomberg Law]; (3) the undersigned attorney takes full responsibility for the content of this filing.'

Pro tip: maintain a disclosure log for every matter where AI is used. Record the tool, the task, the date, and who performed the human verification. If a court ever asks for your AI usage records, you want a complete trail, not a scramble to reconstruct what happened.
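One simple way to keep that log is a shared spreadsheet or CSV with one row per AI-assisted task. The columns below are one illustrative layout, not drawn from any court's order — adapt them to whatever your judge's standing order asks you to certify:

```
date,matter,tool,task,human_verifier,verification_method
2026-04-02,[Matter name],CoCounsel,First-draft research memo,[Attorney name],All citations independently checked on Westlaw
```

The verification_method column matters most: it is the entry that shows, task by task, that a human actually confirmed every citation rather than signing a blanket certification.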
The Bottom Line: 300+ judges require AI disclosure with no uniform standard. Check every judge's standing order before every filing. Three disclosure models exist: certification only, specific disclosure, and prior approval. The Westlaw/Lexis exception doesn't cover generative AI features within those platforms. Use the strictest-standard certification language for every filing — over-disclosure has no penalty, under-disclosure risks sanctions. Keep a disclosure log on every matter. In 2026, failing to verify an AI-generated citation is universally treated as a failure of reasonable inquiry.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
