Judge Valerie Caproni came to the SDNY bench from the FBI, where she served as General Counsel—the top legal position in America's premier federal law enforcement agency. That background gives her a unique lens on AI-assisted filings that most judges don't have: she's spent her career evaluating the reliability of information and evidence.
When you file before Judge Caproni, you're filing before someone trained to question sources, verify claims, and identify unreliable information. Generative AI's tendency to hallucinate citations and fabricate authorities triggers exactly the kind of scrutiny her career prepared her for. The SDNY's high standards combine with her investigative instincts to create a courtroom where AI shortcuts carry outsized risk.
Judge Caproni's Background and AI Scrutiny
Judge Caproni served as FBI General Counsel from 2003 to 2011 before her appointment to the SDNY bench in 2013. In that role, she oversaw legal review of intelligence operations, FISA applications, and counterterrorism investigations—work that demands absolute accuracy in legal citations and factual representations. That training translates directly to how she evaluates court filings. Attorneys who submit AI-generated content with unverified citations are presenting exactly the kind of unreliable information her career taught her to reject. She understands that the appearance of accuracy isn't the same as actual accuracy, and AI tools excel at producing convincing-looking but fundamentally wrong output.
SDNY Expectations and AI Disclosure
The Southern District of New York operates with some of the highest briefing standards in the federal system. While no district-wide AI rule exists, the 2023 Mata v. Avianca sanctions decision produced a lasting shift in how SDNY judges view AI-assisted work. Judge Caproni's individual practices emphasize thorough preparation and accurate representation of law and facts. She expects attorneys to know their cases inside and out, which means leaning on AI as a crutch rather than using it as a tool will be obvious in her courtroom. Voluntary AI disclosure is the smart play when filing before any SDNY judge, and Judge Caproni's background makes undisclosed AI use particularly risky.
National Security and Technology Cases
Judge Caproni's docket includes cases touching national security, cybersecurity, and government enforcement actions—areas where AI use raises heightened concerns. In cases involving classified information, government investigations, or sensitive corporate data, using generative AI tools introduces confidentiality risks that go beyond citation accuracy. Feeding case facts into a public AI model could compromise client confidences, expose privileged strategy, or even create national security issues. Judge Caproni, with her FBI background, is acutely aware of these data security dimensions that many other judges overlook.
Practical Compliance Steps
Step 1: Before using any AI tool, assess whether your case involves sensitive or classified information that shouldn't be entered into any external system.
Step 2: Verify every citation through traditional legal databases: Westlaw, Lexis, Bloomberg Law.
Step 3: For cases involving government enforcement or national security dimensions, use only secure, enterprise-grade AI tools if you use AI at all.
Step 4: Consider voluntary disclosure of AI use in your filings, especially in cases with government parties.
Step 5: Maintain a clear internal record of what information was and wasn't entered into AI systems, in case data security questions arise.
How Judge Caproni Fits Into the SDNY AI Landscape
Judge Caproni is part of an SDNY bench that includes Judge Jed Rakoff (who ruled on AI privilege), Judge Jesse Furman (who chairs the federal evidence rules committee), and Judge Denise Cote (who handled major tech litigation). Together, these judges are shaping how federal courts handle AI in litigation. Judge Caproni's particular contribution to this landscape is her emphasis on information security and source reliability—perspectives drawn directly from her FBI experience. While other judges focus on citation accuracy or disclosure requirements, Caproni adds a layer of concern about what data enters AI systems in the first place.
The Bottom Line: Judge Caproni's FBI background makes her especially attuned to information reliability and data security. If you're using AI before her, verify everything, consider the security implications of what you feed into AI tools, and treat voluntary disclosure as your baseline.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
