Judge Sidney Stein of the Southern District of New York is presiding over NYT v. OpenAI, the highest-profile AI copyright case in the country. When the New York Times sued OpenAI and Microsoft for training AI on its journalism, the case landed in Judge Stein's courtroom, making him the judge who will shape how copyright law applies to large language model training.

For attorneys filing before Judge Stein, his AI disclosure expectations are informed by daily immersion in the mechanics of AI systems. He has reviewed technical evidence about how GPT models ingest, process, and reproduce copyrighted content. He knows what these tools can do, and where they are prone to fabrication.

Judge Stein's AI Disclosure Requirements

Judge Stein's courtroom requires disclosure of generative AI use in preparing filings, consistent with the S.D.N.Y.'s post-Mata v. Avianca standards. Attorneys must certify that AI-generated content has been verified for accuracy and that all citations have been checked through traditional legal research. Given his role in NYT v. OpenAI, Judge Stein's understanding of AI capabilities and limitations is exceptionally detailed. His disclosure expectations reflect a judge who can distinguish between responsible AI assistance and lazy over-reliance, and who expects attorneys to demonstrate the former.

New York Times v. OpenAI is arguably the most consequential AI case in the federal courts. The Times alleges that OpenAI trained GPT models on millions of its articles without permission, and that the models can reproduce Times content nearly verbatim. The case will determine whether AI training on copyrighted content constitutes fair use, a decision that could reshape the AI industry. Judge Stein is working through complex technical and legal questions about training data, model architecture, output reproduction, and transformative use. This depth of engagement makes him one of the most AI-knowledgeable judges in the country.

What Triggers Disclosure Before Judge Stein

The disclosure obligation covers any generative AI use in filing preparation. In the NYT v. OpenAI context, this carries additional weight because the case itself concerns the outputs of AI systems. Using AI to draft filings about AI reproduction of copyrighted content—potentially using a tool that was trained on the very content at issue—creates layers of sensitivity that Judge Stein is acutely aware of. Even in non-AI cases before Judge Stein, the disclosure standard is robust. He's part of a district that was fundamentally changed by the Mata v. Avianca scandal.

Compliance Steps for Judge Stein's Courtroom

Step 1: Review Judge Stein's current standing orders for specific AI disclosure formatting and timing requirements.

Step 2: If you are in the NYT v. OpenAI case, consider carefully which AI tools you use and whether they create any conflict or appearance issues.

Step 3: Verify every citation through Westlaw, Lexis, or primary sources, with no exceptions.

Step 4: Prepare a disclosure statement that identifies the tool, its role, and your verification process.

Step 5: Be prepared for AI-related questions at hearings.

Step 6: Maintain comprehensive records of your AI use and verification process throughout the litigation.

Judge Stein in the S.D.N.Y. AI Context

The S.D.N.Y. has more judges engaged with AI issues than almost any other district, and Judge Stein's NYT v. OpenAI assignment puts him at the top of that list alongside Judge Rakoff (Heppner privilege ruling) and Judge Castel (post-Mata standing order). Each judge brings different emphasis: Castel focuses on citation verification, Rakoff on privilege and jurisprudential questions, and Stein on the intersection of AI and intellectual property. Together, they've made the S.D.N.Y. the most influential district for AI law development. Judge Stein's ruling in NYT v. OpenAI will likely be one of the most consequential AI decisions in legal history.

The Bottom Line: Before filing in Judge Stein's courtroom, verify every citation, prepare detailed AI disclosures, and understand that, through NYT v. OpenAI, he has extraordinary knowledge of how AI systems work. If you're in an AI-related case, think carefully about which AI tools you use and whether they create appearance or conflict issues.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.