The Supreme Court of the United States has no explicit AI disclosure rule. For the highest court in the land — the one that every other court looks to for guidance — the silence is itself a statement. It doesn't mean AI use is prohibited, and it doesn't mean it's encouraged. It means practitioners filing at the Supreme Court level need to navigate without a map.
That silence creates real anxiety for the small universe of attorneys who file Supreme Court briefs. When your filing goes before nine justices whose opinions reshape American law, the risk calculus for AI-assisted work is fundamentally different from any other court. Here's what the silence means, what practitioners are actually doing, and how to handle AI in SCOTUS filings.
The Supreme Court's Silence on AI Disclosure
As of April 2026, the Supreme Court has not adopted any rule, order, or guidance addressing AI use in filings. The Supreme Court Rules don't mention AI, the Clerk's Office hasn't issued guidance, and no justice has publicly addressed the topic in a written opinion. This contrasts with the flurry of activity at the district and circuit court levels. The silence likely reflects the Supreme Court's institutional conservatism — the Court moves slowly on procedural changes and tends to let lower courts experiment before acting. But it also means there's no safe harbor. If an AI error surfaces in a Supreme Court filing, there's no established framework for how the Court would respond.
What the Silence Actually Means for Practitioners
The absence of an AI rule doesn't create a permissive environment; if anything, it creates a more cautious one. The Supreme Court bar is small and elite, reputation is everything, and its informal norms are more demanding than most courts' formal rules. Filing a brief with hallucinated citations before the Supreme Court wouldn't just risk sanctions; it would end a career. The justices' clerks are among the most meticulous readers in the legal profession, and they will catch errors that lower court clerks might miss. And unlike district court, where a sanctions motion is the typical remedy, a credibility failure at the Supreme Court affects your ability to ever be taken seriously before the Court again.
How Supreme Court Practitioners Are Handling AI
Conversations with practitioners who regularly file at the Supreme Court reveal a conservative approach. Most are using AI tools for initial research and brainstorming but not for final brief drafting. Citation verification at the Supreme Court level is already multi-layered — typically involving multiple rounds of Westlaw/Lexis checking, cite-checking services, and senior attorney review. AI adds another research tool to the front end of this process but doesn't change the verification standard. Some practitioners report using AI to identify potential arguments or counterarguments during cert petition preparation, then building those arguments through traditional research. The consensus: AI as a starting point is acceptable; AI as a shortcut to final work product is not.
Amicus Briefs and Third-Party Filings
The AI disclosure question is particularly acute for amicus briefs filed at the Supreme Court. Amicus briefs often come from organizations, academics, and advocacy groups that may have less rigorous quality control than experienced Supreme Court practitioners. The Court receives hundreds of amicus filings per term, and the potential for AI-generated content in these briefs is significant. If a poorly verified amicus brief contains AI-hallucinated citations, it could undermine the credibility of the party it purports to support. Organizations considering amicus filings should implement the same verification standards as lead counsel, regardless of whether the Court requires disclosure.
Best Practices for AI Use in Supreme Court Filings
For the rare occasion when you're filing before the Supreme Court, here's the standard:

1. Do not rely on AI for any citation in a Supreme Court filing. Independently verify every case, statute, and secondary source through traditional legal databases.

2. If AI tools contributed to research or analysis, treat that contribution as a rough draft requiring complete human reconstruction, not an output requiring mere editing.

3. Disclose AI use voluntarily even though the Court doesn't require it. Transparency builds credibility with clerks and justices.

4. Implement a minimum of three independent verification rounds for any brief: the drafting attorney, a senior reviewer, and a cite-checking service.

5. For amicus briefs, apply the same verification standards regardless of the filing organization's size or resources. A single bad citation in an amicus brief can damage the entire party's credibility.
The Bottom Line: The Supreme Court's silence on AI disclosure doesn't mean AI use is safe; it means there's no established framework for when things go wrong. In a court where reputation is career capital and clerks catch everything, the practical standard for AI diligence is the highest in the legal system. Use AI for early-stage research if you want, but your Supreme Court filing should be verified as if AI didn't exist.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
