Judge Timothy Kelly sits on the U.S. District Court for the District of Columbia — a court that handles more government litigation than any other federal district. Appointed by President Trump in 2017, Kelly presides over a docket heavy with administrative law, national security cases, and challenges to federal agency actions. His courtroom is where AI policy meets federal practice.

For practitioners filing before Judge Kelly, the AI disclosure landscape in D.C. is evolving fast. The DC Courts created an AI Taskforce in March 2024, published an AI Roadmap in June 2025, and issued an Internal AI Use Policy for court staff in July 2025. While there's no district-wide standing order mandating AI certification for attorneys in the D.D.C., individual judges are increasingly addressing the issue — and government cases bring unique AI considerations that don't exist in other districts.

The D.D.C.'s AI Framework and Judge Kelly's Position

The District of Columbia federal court hasn't adopted a blanket AI disclosure requirement, as some other districts have. Instead, the DC Courts AI Taskforce has taken a phased approach: studying AI applications, developing internal policies, and building toward formal guidance. The June 2025 AI Roadmap outlines a plan for evaluating AI use cases, creating training programs for court staff, and establishing rules for how AI should be used in court operations. Judge Kelly's individual practices don't include a specific AI standing order, but attorneys should expect heightened scrutiny in a courthouse that's actively thinking about these issues at the institutional level.

Government Cases and Unique AI Considerations

Judge Kelly's docket is dominated by government litigation — challenges to agency rulemaking, FOIA disputes, federal employment cases, and administrative actions. This creates AI issues that most district courts don't face. Government attorneys may be constrained by agency-specific AI policies. DOJ has its own internal guidance on AI use in litigation. Federal agencies are implementing AI governance frameworks under executive orders. When both sides of a case involve government entities or government-regulated activities, the AI disclosure calculus becomes more complex. Practitioners filing government cases before Judge Kelly should understand not just court rules but their own agency's AI policies.

Judge Kelly's Background and Judicial Approach

Before his appointment, Kelly served as the Chief Counsel for National Security on the Senate Judiciary Committee under Chairman Chuck Grassley — the same senator who later investigated federal judges' use of AI in October 2025. Kelly also served as a Special Assistant U.S. Attorney and in private practice at Baker Botts. His national security background means he's comfortable with classified material, FISA-adjacent issues, and the intersection of technology and government power. He runs a tight courtroom with clear expectations for briefing and argument. Sloppy filings — AI-generated or otherwise — won't be tolerated.

Practical Compliance Steps for Filing Before Judge Kelly

Step 1: Check Judge Kelly's current individual practices on the D.D.C. website. Even without a formal AI standing order, his rules may be updated at any time.

Step 2: If you represent a government agency, confirm your agency's internal AI use policy before using generative AI tools for case preparation. DOJ and other agencies have their own requirements.

Step 3: Verify every citation independently. D.D.C. judges are especially attuned to accuracy in government cases where statutory interpretation is central.

Step 4: If your case involves classified or sensitive material, do not input any case-related information into public AI tools. The Heppner privilege ruling from SDNY makes clear that AI platform privacy policies may destroy confidentiality.

Step 5: Maintain documentation of your AI use and verification process in case opposing counsel or the court raises questions.

The D.D.C. in Context: How This District Is Approaching AI

The D.D.C. is taking a methodical, institutional approach to AI — fitting for a court that handles the most complex government litigation in the country. The AI Taskforce, the AI Roadmap, and the Internal AI Use Policy for staff all signal that formal attorney-facing requirements are coming. Senator Grassley's October 2025 investigation into judges' AI use — which revealed that Judge Henry Wingate's clerk used Perplexity to draft an error-filled order — has accelerated the conversation. The D.C. Circuit's affirmance of the *Thaler v. Perlmutter* copyright ruling (holding that works generated solely by AI, without human authorship, aren't copyrightable) shows this circuit is already shaping national AI law. Expect D.D.C. judges, including Kelly, to formalize AI disclosure requirements in the near term.

The Bottom Line: The D.D.C. hasn't mandated AI disclosure yet, but it's actively building toward formal requirements. Before appearing before Judge Kelly, check his current individual practices, comply with any agency-specific AI policies, and verify every citation independently. Government cases bring unique AI risks — especially around classified material and public AI tools.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.