Judge Eumi Lee of the Northern District of California presides over Concord Music Group v. Anthropic, a copyright case that puts AI safety company Anthropic's practices under judicial scrutiny. Her courtroom is another node in the N.D. California's extraordinary concentration of AI litigation, and her AI disclosure expectations reflect the sophistication that comes with handling these cases daily.

For attorneys filing before Judge Lee, the message is straightforward: she is deeply familiar with AI technology through her caseload, she expects transparency about AI use in filings, and she sits in a district where AI disclosure has become standard practice rather than the exception.


Judge Lee's AI Disclosure Requirements

Judge Lee's AI disclosure expectations follow the N.D. California's emerging consensus: attorneys must disclose the use of generative AI in preparing filings and certify that all AI-generated content has been verified for accuracy. Her approach is practical and compliance-focused. She expects attorneys to identify the AI tools used, describe the nature of the assistance, and confirm that traditional legal research was used to verify citations and factual assertions. The requirements apply to all filings, including motions, briefs, and discovery-related documents.

Concord v. Anthropic and the AI Safety Context

Concord Music Group v. Anthropic is a copyright suit in which music publishers allege that Anthropic, the maker of Claude, reproduced protected song lyrics through its AI model. The case raises questions about AI outputs, content generation, and corporate responsibility. For attorneys in the case, using AI tools to prepare filings, potentially including Anthropic's own Claude, creates a unique recursive dynamic. Judge Lee has navigated this thoughtfully, focusing on ensuring that whatever tools attorneys use, the work product meets professional standards of accuracy and candor. The case has also given her detailed exposure to how AI companies build and market their products, informing her understanding of AI capabilities and limitations.

What Triggers AI Disclosure Before Judge Lee

The disclosure obligation is triggered by any generative AI use in filing preparation. This includes drafting, research, analysis, and summarization performed by tools like ChatGPT, Claude, Gemini, or similar models. Traditional research platforms aren't covered unless their AI-enhanced features were used to generate new content. The obligation extends to the entire legal team contributing to the filing—partners, associates, contract attorneys, and support staff. If anyone on the team used generative AI, disclosure is required regardless of how heavily the output was edited.

Practical Steps for Filing Before Judge Lee

Step 1: Check Judge Lee's current standing orders for specific AI disclosure formatting requirements.

Step 2: Track AI use throughout your drafting process—don't try to reconstruct it after the fact.

Step 3: Verify every citation and factual assertion through traditional research.

Step 4: Prepare a clear, specific disclosure statement.

Step 5: If your case involves an AI company as a party, consider whether your choice of AI tools creates any appearance issues and address them in your disclosure if appropriate.

Step 6: Maintain internal records of your AI use and verification process.

Judge Lee in the N.D. California AI Context

Judge Lee is part of the N.D. California's remarkable roster of judges handling AI cases, alongside Judge Alsup (Bartz v. Anthropic), Judge Gonzalez Rogers (Musk v. OpenAI), Judge Lin (xAI v. OpenAI), and Judge Chhabria (Silverman v. Meta). Each judge brings different experience and emphasis, but collectively they are creating the most AI-literate bench in the federal judiciary. Judge Lee's contribution through Concord Music Group v. Anthropic adds the AI-outputs dimension to the district's expertise, focusing on what generative models produce and who is responsible for it.

The Bottom Line: Before filing in Judge Lee's courtroom, prepare clear AI disclosures, verify all citations, and recognize that she understands AI technology through her Concord Music Group v. Anthropic caseload. If you're using an AI tool made by a party in the case, address the issue proactively rather than hoping no one notices.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.