Judge Brantley Starr's May 2023 standing order was the first judicial requirement in U.S. courts that attorneys certify their use of generative AI in filings. Issued days after the Mata v. Avianca scandal broke, it launched a nationwide movement. By 2025, hundreds of federal and state judges had followed with their own AI disclosure requirements.
Background
In late May 2023, the legal world was reeling from the Mata v. Avianca revelations. Attorneys Peter LoDuca and Steven Schwartz had filed a brief full of fabricated case citations generated by ChatGPT, and the story dominated legal news. Judges across the country were asking the same question: is this happening in my courtroom?
Judge Brantley Starr of the Northern District of Texas didn't wait to find out. On May 30, 2023, he issued a standing order requiring all attorneys appearing before him to file a certificate regarding generative AI use. The order was direct: if you used AI, say so and confirm you verified everything. If you didn't use AI, say that too.
The order named specific tools: ChatGPT, Harvey.ai, and Google Bard. This specificity was deliberate. Judge Starr wanted attorneys to understand that the requirement wasn't about some theoretical future technology. It was about the tools they were already using. The order acknowledged that AI has legitimate uses but stated that 'legal briefing is not one of them' without human verification.
What Happened
The standing order created a simple certification requirement. Every attorney appearing before Judge Starr must file a certificate attesting either (a) that no portion of any filing was drafted by generative AI, or (b) that any AI-generated content was checked for accuracy against traditional legal databases such as Westlaw or LexisNexis, or other authoritative sources.
The order didn't ban AI use. It required disclosure and verification. An attorney who used ChatGPT to draft a brief could still file it, but had to certify they'd checked every citation, every quotation, and every legal proposition against real legal sources. The certification carries the weight of a representation to the court, meaning a false certification is itself sanctionable.
The response was immediate and widespread. Within weeks, other federal judges began issuing similar orders. By the end of 2023, dozens of courts had AI disclosure requirements. By 2025, the number reached into the hundreds, spanning federal district courts, bankruptcy courts, state courts, and administrative tribunals. Judge Starr's order became the template.
The Ruling
The standing order isn't a 'ruling' in the traditional sense. It's a court administrative order that functions as a local rule for all cases before Judge Starr. Its central premise: generative AI is not a reliable source for legal research and analysis. While it has 'many uses,' legal briefing 'is not one of them' without independent verification.
The order established a principle that became the foundation of judicial AI policy: attorneys retain full responsibility for every word in their filings, regardless of whether AI helped produce those words. AI is a tool, not a shield. Using it doesn't reduce the attorney's obligations. It increases them by adding a verification step.
The certification requirement serves two functions. It creates accountability (attorneys must affirmatively represent their AI use or non-use) and it deters reckless AI reliance (knowing you'll have to certify verification makes you more likely to actually verify). Together, these functions address the core risk: attorneys treating AI output as trustworthy without checking it.
Outcome: The standing order remains in effect. It sparked a nationwide wave: by 2025, hundreds of federal and state judges had issued similar orders, local rules, or guidance addressing AI use in court filings.
Why This Case Matters
Judge Starr's order launched a structural shift in how courts handle AI. Before May 2023, there was no formal framework for AI use in court filings. After it, the default position in hundreds of courts is that AI use must be disclosed and AI output must be verified. That's a fundamental change in the relationship between attorneys, AI tools, and the courts.
The order's influence extended beyond individual courtrooms. It prompted bar associations, law schools, and firms to develop AI policies. The American Bar Association issued guidance. State bar ethics committees published opinions. Law firms created internal AI use policies. A single standing order from a Texas judge catalyzed an industry-wide reckoning with AI governance.
The order also proved that existing judicial authority was sufficient to address AI risks. Courts didn't need new legislation or formal rule amendments. Standing orders, local rules, and inherent judicial power provided adequate tools. This pragmatic approach let the judicial system respond quickly to a rapidly evolving technology.
Lessons for Attorneys
Every attorney needs to know their jurisdiction's AI disclosure requirements before filing anything. The landscape changes constantly as more courts adopt rules. Check not just the judge's standing orders but also local court rules, which increasingly include AI-specific provisions. Filing without required AI certifications is itself a compliance failure.
For managing partners: establish firm-wide AI disclosure policies that comply with the strictest requirements your attorneys face. If any of your attorneys appear in courts with certification requirements, the firm needs a system to track AI use in filings, verify AI-generated content, and maintain records of the verification process. Retroactive compliance is much harder than building the system upfront.
For attorneys who do use AI: Judge Starr's framework isn't punishment. It's a reasonable expectation. Use AI tools, but verify everything against authoritative sources. Document your verification process. File your certifications honestly. The attorneys who got sanctioned in Mata v. Avianca, People v. Crabill, and Johnson v. Dunn didn't get in trouble for using AI. They got in trouble for not verifying and not being honest about it.
The Bottom Line
Judge Starr's standing order created the template for judicial AI governance that hundreds of courts now follow. The core principle is simple: attorneys must disclose AI use and verify all AI-generated content. It's not a ban on AI. It's a requirement for honesty and competence.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.