Da Silva Moore v. Publicis Groupe is the foundational case for AI in legal practice. In 2012, Magistrate Judge Andrew Peck wrote the first judicial opinion approving predictive coding for document review in e-discovery. This ruling opened the door for every AI-powered legal tool that followed, and its framework for evaluating AI in litigation is still the standard.


Background

Monique da Silva Moore filed a Title VII gender discrimination suit against Publicis Groupe, a global advertising and communications company. The case itself was unremarkable. What made it historic was the discovery phase: over three million electronic documents needed to be reviewed for relevance and privilege.

Manual review of three million documents is staggeringly expensive. At typical contract attorney review rates, it would cost millions of dollars and take months. The parties agreed to use predictive coding, also known as technology-assisted review (TAR), a machine learning system that learns from human reviewers' decisions and applies those decisions to classify the remaining documents.
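To make the mechanism concrete, a predictive coder can be sketched as a text classifier trained on a human-labeled seed set and then applied to the rest of the corpus. The snippet below is a minimal illustration using a naive Bayes classifier in pure Python; the sample documents, labels, and function names are invented for this sketch, and real TAR systems use far more sophisticated models with iterative (active learning) training rounds.

```python
from collections import Counter
import math

def train(seed_docs):
    """Learn per-class word log-probabilities from human-labeled seed documents."""
    counts = {"relevant": Counter(), "not_relevant": Counter()}
    totals = {"relevant": 0, "not_relevant": 0}
    for text, label in seed_docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = set(counts["relevant"]) | set(counts["not_relevant"])
    model = {}
    for label in counts:
        # Laplace-smoothed log-probability for each vocabulary word
        model[label] = {
            w: math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in vocab
        }
        model[label]["__unk__"] = math.log(1 / (totals[label] + len(vocab)))
    return model

def classify(model, text):
    """Score a new document against each class; return the likelier label."""
    scores = {}
    for label, logprobs in model.items():
        scores[label] = sum(
            logprobs.get(w, logprobs["__unk__"]) for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# Hypothetical seed set: a few documents a human reviewer has already labeled.
seed_docs = [
    ("email discussing gender discrimination in promotion decisions", "relevant"),
    ("complaint that women were passed over for promotion", "relevant"),
    ("lunch order for the team meeting", "not_relevant"),
    ("quarterly advertising budget spreadsheet", "not_relevant"),
]
model = train(seed_docs)
```

Once trained, `classify(model, ...)` can be run over the remaining millions of documents at machine speed, which is the core economic proposition of TAR.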

Magistrate Judge Andrew J. Peck of the Southern District of New York was assigned the discovery dispute. Peck was already recognized as one of the leading judicial authorities on e-discovery. When the question of predictive coding's legal acceptability landed on his desk on February 24, 2012, he wrote the opinion that changed legal technology forever.

Da Silva Moore v. Publicis Groupe
287 F.R.D. 182 (S.D.N.Y. 2012)
Court
U.S. District Court, Southern District of New York
Date
2012-02-24
Category
AI in Discovery
Sanctions
None
AI Case Law — Updated April 2026

What Happened

The core question was whether computer-assisted review was a legally acceptable method for identifying relevant documents in litigation. Some parties and courts had resisted technology-assisted review, insisting that human eyes had to review every document. The concern was that a machine might miss relevant documents or privilege issues that a human reviewer would catch.

Judge Peck rejected that concern with data. Studies showed that predictive coding was at least as accurate as manual human review, often more so. Human reviewers suffer from fatigue, inconsistency, and subjective judgment variations. Predictive coding applies consistent criteria across millions of documents without tiring or losing focus.

The court approved predictive coding for the case, finding it an acceptable methodology under Federal Rule of Civil Procedure 26(b)(2)(C)'s proportionality requirements. The decision emphasized that courts shouldn't hold technology-assisted review to a higher standard than keyword searches or manual review. If the methodology is reasonable, transparent, and subject to quality controls, it's acceptable.


The Ruling

Judge Peck's February 24, 2012 opinion (287 F.R.D. 182) held that predictive coding is a legally acceptable method for searching electronically stored information (ESI) in appropriate cases. The court identified several factors supporting its use: the parties had agreed to the methodology; the document volume (three million documents) made manual review impractical; predictive coding was shown to be superior to, or at least as good as, manual review; the approach was proportional to the needs of the case; and the process was transparent.

The court explicitly rejected the notion that technology-assisted review must be perfect to be acceptable. No review methodology is perfect. Manual review has well-documented error rates. The question isn't whether the technology is flawless but whether it's reasonable for the case at hand.

The opinion also established process requirements: the parties should cooperate on the training set (the documents used to teach the system), the process should be transparent and documentable, and quality control checks should verify the system's accuracy. These process safeguards became the standard framework for TAR in litigation.
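The quality-control step in that framework is typically implemented as statistical sampling: a human re-reviews a random sample of the machine's calls to estimate how accurate they are. A minimal sketch of precision estimation follows; the function names and figures are hypothetical illustrations, not taken from the opinion.

```python
import random

def estimate_precision(predicted_relevant, human_check, sample_size, seed=0):
    """Estimate precision by having a human re-review a random sample
    of the documents the system marked relevant."""
    rng = random.Random(seed)
    sample = rng.sample(predicted_relevant, min(sample_size, len(predicted_relevant)))
    confirmed = sum(1 for doc in sample if human_check(doc))
    return confirmed / len(sample)

# Hypothetical scenario: the system flagged 100 documents as relevant,
# and (unknown to us) 80 of them truly are. A human spot-checks 50.
docs = list(range(100))
est = estimate_precision(docs, lambda d: d < 80, sample_size=50)
```

A parallel sample of documents the system marked *not* relevant (an "elusion" sample) is commonly used to estimate recall, closing the loop on the accuracy verification the opinion calls for.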

Outcome: The court held that predictive coding is an acceptable method for searching relevant electronically stored information (ESI) in appropriate cases, considering the parties' agreement, the massive document volume, the superiority of TAR over manual review, cost proportionality, and process transparency.

Why This Case Matters

This is the case that legitimized AI in legal practice. Before Da Silva Moore, using computers to review legal documents was viewed with suspicion by many courts and practitioners. After it, technology-assisted review became not just acceptable but preferred for large-scale document review. Today, TAR is standard practice in complex litigation.

The framework Judge Peck established extends far beyond e-discovery. The principles he articulated, that AI doesn't need to be perfect to be useful, that it should be held to the same standard as human performance, and that transparency and quality controls make AI acceptable, are the same principles courts apply to AI legal tools today. Every judge evaluating AI in litigation, from document review to legal research to contract analysis, builds on Da Silva Moore's foundation.

The economic impact was transformative. Document review was the largest cost center in litigation. TAR reduced costs by 60-80% in many cases while maintaining or improving accuracy. That cost reduction changed the economics of litigation, made discovery more proportional, and created an entire legal technology industry. The legal AI market that exists today traces its legitimacy back to this single opinion.
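To see where savings on that order come from, here is a back-of-the-envelope sketch. The reviewer speed, billing rate, and the fraction of the corpus humans still touch under TAR are all hypothetical figures chosen for illustration; none come from the case itself.

```python
def review_cost(n_docs, docs_per_hour, rate_per_hour):
    """Cost of reviewing n_docs at a given reviewer speed and hourly rate."""
    return n_docs / docs_per_hour * rate_per_hour

# Hypothetical figures: 3M documents, 50 docs/hour, $60/hour contract reviewers.
manual = review_cost(3_000_000, 50, 60)   # every document read by a human
# Under TAR, humans review only a seed/QC fraction, say 10% of the corpus.
tar = review_cost(300_000, 50, 60)
savings = 1 - tar / manual                # fraction of review cost avoided
```

Even with generous assumptions about human throughput, cutting human eyes-on review to a fraction of the corpus is what moves discovery costs from millions of dollars to hundreds of thousands.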


Lessons for Attorneys

For litigators: Da Silva Moore established that you don't need to justify using AI tools for document review. The default assumption is now that TAR is acceptable. What you need to justify is your specific implementation: the training methodology, quality controls, transparency measures, and accuracy metrics. Document your process thoroughly, because challenges to TAR usually target the process, not the technology itself.

For managing partners evaluating AI tools: Judge Peck's framework gives you the criteria for assessing any legal AI tool. Is it at least as accurate as the human process it replaces? Is the methodology transparent and documentable? Are there quality controls? Can you explain how it works to a judge? If yes, the tool has judicial backing. If no, it's not ready for your practice.

For attorneys arguing against AI use by opposing parties: Da Silva Moore cuts both ways. You can't simply object to AI-assisted review on principle. You need to show specific methodological flaws: inadequate training data, lack of quality controls, unreasonable error rates, or failure to cooperate on the review protocol. Generic "I don't trust computers" arguments don't work after this case.


The Bottom Line

Da Silva Moore v. Publicis Groupe is the case that made AI legitimate in legal practice. Its 2012 approval of predictive coding for e-discovery created the framework courts still use to evaluate AI tools in litigation. Every AI-powered legal technology traces its judicial acceptance back to this ruling.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.