On June 10, 2025, the Judicial Conference's Standing Committee on Rules of Practice and Procedure approved Proposed Rule 707 for publication, setting in motion the first federal evidence rule specifically targeting AI-generated content in litigation. The rule mandates that any AI-derived evidence, from AI-assisted forensic analysis to machine learning predictions, satisfy Daubert/Rule 702 reliability standards before admission. Public comment closed in February 2026, and the Advisory Committee's final report is due in June 2026.
This isn't a suggestion. When Rule 707 takes effect, lawyers who submit AI-generated evidence without proper foundation will face exclusion, sanctions, or both. The rule closes the gap that's let AI tools into courtrooms without the scrutiny applied to every other form of expert testimony. Trial lawyers need to understand what's coming and start building compliant workflows now.
What Is Proposed Federal Rule of Evidence 707?
Rule 707 establishes a dedicated admissibility framework for AI-generated evidence. Currently, AI evidence gets shoehorned into Rule 702 (expert testimony), Rule 901 (authentication), or Rule 403 (prejudicial effect) — none of which were designed for algorithmic outputs. Rule 707 consolidates the analysis.
The proposed rule requires three showings for AI evidence admission. First, the proponent must demonstrate that the AI system's methodology is scientifically valid and reliable — the same Daubert standard applied to expert witnesses. This means identifying the model architecture, training data, and validation methodology. Second, the proponent must show the AI system was properly applied to the facts of the case — correct inputs, appropriate use case, no data contamination. Third, the proponent must provide sufficient disclosure to allow the opposing party to challenge the evidence — including model version, parameters, and any known limitations.
The Advisory Committee's notes explicitly reference the 2023-2025 wave of AI hallucination sanctions cases as motivating the rule. When lawyers submitted AI-fabricated case citations, courts had no procedural mechanism to screen AI outputs at the admissibility stage. Rule 707 fills that gap.
How Rule 707 Changes Daubert Analysis for AI Evidence
Under current practice, Daubert challenges to AI evidence are rare because most AI outputs enter through expert witnesses who use AI as a tool. The expert testifies; the AI output is treated as part of their methodology. Rule 707 changes this by requiring independent scrutiny of the AI system itself, separate from any expert who relies on it.
The proposed rule creates what the Advisory Committee calls a "two-layer Daubert" analysis. Layer one: is the expert qualified and reliable? (Standard Rule 702.) Layer two: is the AI system the expert relied on reliable? (New Rule 707.) Both layers must be satisfied.
This matters for common AI applications in litigation. Predictive coding in e-discovery — currently accepted with minimal scrutiny — would need documented validation studies. AI-assisted medical diagnosis used in personal injury cases would require disclosure of the model's training data demographics and known failure rates. Facial recognition evidence in criminal cases would need accuracy metrics for the specific demographic of the identified individual.
The Advisory Committee cited National Institute of Standards and Technology (NIST) testing data showing that leading facial recognition systems have error rates 10 to 100 times higher for certain demographic groups. Rule 707's disclosure requirements would force this data into the record.
Public Comment Period and Timeline for Rule 707
Proposed Rule 707 was published for public comment on August 15, 2025. The comment period closed February 15, 2026 — the standard six-month window for proposed amendments to the federal rules.
The Advisory Committee on Evidence Rules received over 3,400 public comments — the highest for any proposed evidence rule in modern history. Comments came from law firms, AI companies, civil liberties organizations, forensic science associations, and the Department of Justice. The major fault lines: plaintiffs' bar and civil liberties groups pushed for stricter disclosure requirements; AI companies and defense-side firms argued the rule would be unworkable without clearer safe harbors.
The timeline going forward: the Advisory Committee meets in April 2026 to review comments and draft revisions. The final report goes to the Standing Committee on Rules of Practice and Procedure by June 2026. If approved, it moves to the Judicial Conference in September 2026, then to the Supreme Court for promulgation by May 2027, with an effective date of December 1, 2027.
That's the earliest path. Contested rules can take longer. But smart firms aren't waiting — the framework Rule 707 establishes will influence Daubert motions filed today, even before the rule takes formal effect.
What Trial Lawyers Need to Do to Prepare for Rule 707
Start with an AI evidence inventory. Catalog every AI tool your firm uses in litigation — e-discovery platforms (Relativity, Everlaw), forensic tools, medical AI, financial modeling, document review. For each tool, identify whether you can obtain the information Rule 707 will require: model methodology, training data description, validation studies, known error rates, and version history.
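One way to make that inventory actionable is to track, per tool, which Rule 707 documentation items you can actually obtain. A minimal sketch, assuming the five fields named above; the tool name and availability flags are illustrative:

```python
# Documentation fields a Rule 707 foundation is expected to require,
# per the inventory checklist above (field names are illustrative).
REQUIRED_FIELDS = [
    "model_methodology",
    "training_data_description",
    "validation_studies",
    "known_error_rates",
    "version_history",
]

def inventory_gaps(tools: dict[str, dict[str, bool]]) -> dict[str, list[str]]:
    """Return, for each tool, the documentation fields still missing."""
    return {
        name: [f for f in REQUIRED_FIELDS if not available.get(f, False)]
        for name, available in tools.items()
    }

# Hypothetical entry for one e-discovery platform.
tools = {
    "ediscovery_platform": {
        "model_methodology": True,
        "training_data_description": False,
        "validation_studies": True,
        "known_error_rates": False,
        "version_history": True,
    },
}
print(inventory_gaps(tools))
# {'ediscovery_platform': ['training_data_description', 'known_error_rates']}
```

The gaps list doubles as a vendor questionnaire: each missing field is a question to put to the tool's provider before the rule takes effect.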
Next, build your disclosure template. Rule 707 will require proponents to provide opposing counsel with sufficient information to mount a challenge. Draft a standard disclosure document now that covers: (1) the AI system used, (2) the specific version and configuration, (3) the inputs provided, (4) the outputs received, (5) any post-processing or human review, and (6) known limitations. Having this template ready means you won't scramble when the rule takes effect.
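The six-item template above can be captured as a simple structured record so nothing is omitted before a disclosure goes out. A sketch under stated assumptions — the field names are illustrative, not drawn from the proposed rule's text:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIEvidenceDisclosure:
    """One record per AI output disclosed; mirrors items (1)-(6) above."""
    system_name: str              # (1) the AI system used
    version_and_config: str       # (2) specific version and configuration
    inputs_provided: list[str]    # (3) inputs provided to the system
    outputs_received: list[str]   # (4) outputs received
    human_review: str             # (5) post-processing or human review
    known_limitations: list[str]  # (6) known limitations

    def missing_items(self) -> list[str]:
        """List any disclosure items left blank before the document goes out."""
        return [name for name, value in asdict(self).items() if not value]

# Hypothetical partially completed disclosure.
d = AIEvidenceDisclosure(
    system_name="ToolX",
    version_and_config="v2.1, default settings",
    inputs_provided=["deposition transcripts"],
    outputs_received=[],
    human_review="attorney review of all outputs",
    known_limitations=[],
)
print(d.missing_items())
# ['outputs_received', 'known_limitations']
```

A completeness check like `missing_items()` is the point of building the template now: when the rule takes effect, the open question is only what goes in each field, not whether a field was forgotten.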
Develop your Daubert challenge playbook for opposing AI evidence. The rule creates new attack vectors: inadequate training data, unvalidated methodology for the specific use case, lack of error rate data, failure to disclose known biases. Start collecting NIST reports, academic studies, and vendor white papers that document AI system limitations. This is your ammunition for exclusion motions.
Finally, identify and retain AI expert witnesses. Rule 707 will dramatically increase demand for experts who can testify about AI reliability. The pool is small — most qualified experts are in industry, not available for litigation. Start building relationships with computer science professors, NIST researchers, and independent AI auditors who can serve as testifying or consulting experts.
Rule 707 Impact on Specific Practice Areas
Criminal defense sees the biggest shift. AI-derived evidence — facial recognition, predictive policing risk scores, gunshot detection (ShotSpotter), cell-site location analysis — has entered criminal trials with minimal challenge. Rule 707 gives defense attorneys a structured framework to exclude unreliable AI evidence. The Innocence Project submitted comments supporting the rule, citing at least 7 wrongful convictions linked to flawed algorithmic evidence.
Intellectual property litigation will see increased Daubert battles over AI-assisted damages models. Patent cases already use complex financial modeling; when those models run on AI, Rule 707 adds a reliability layer. The disclosure requirements also affect AI-generated prior art searches — if you're using AI to find prior art, expect challenges to the search methodology.
Personal injury and medical malpractice cases using AI-assisted diagnosis face new foundation requirements. If an expert relies on an AI diagnostic tool, Rule 707 requires showing that the tool has been validated for the specific medical condition and patient demographic at issue. FDA clearance alone won't satisfy the rule — the Advisory Committee notes specifically state that regulatory approval is relevant but not dispositive.
Employment litigation will grapple with AI hiring tools. When employers use algorithmic screening, plaintiffs challenging discriminatory outcomes can use Rule 707 to compel disclosure of the algorithm's training data and validation methodology — information employers have resisted producing.
The Bottom Line: Proposed Rule 707 will require AI evidence to pass independent Daubert scrutiny — separate from any expert relying on it — and trial lawyers who start building disclosure templates and challenge playbooks now will have a two-year head start on everyone else.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
