Courts are developing frameworks for AI-generated evidence, but most lawyers are applying yesterday's rules to today's technology. The proposed Rule 707 framework creates a specific pathway for AI-generated demonstrative evidence -- timelines, reconstructions, data visualizations -- that's distinct from how courts treat deepfakes and manipulated media. Understanding this distinction is now a litigation competency.

FRE 901 authentication requirements don't disappear because your evidence was created by AI. If anything, judges are applying heightened scrutiny. You need to authenticate the tool, the inputs, the process, and the output. Claude Design outputs -- structured slides, diagrams, timelines -- face different evidentiary treatment than AI-generated photographs or video because they don't purport to depict reality.

The Rule 707 Framework for AI-Generated Evidence

The proposed Rule 707 framework (under consideration by the Advisory Committee on Evidence Rules) creates a structured approach for AI-generated demonstrative evidence. It requires three showings: (1) the AI tool used is generally accepted or has been tested for reliability in the relevant domain, (2) the inputs to the AI system are independently admissible or stipulated, and (3) the proponent can explain the process by which the AI transformed inputs into outputs.

This framework explicitly distinguishes between AI as a tool for creating demonstrative aids (timelines, charts, accident reconstructions) and AI as a generator of substantive evidence (deepfake video, synthetic audio). The former gets treated like any other demonstrative exhibit under Rules 611 and 1006. The latter triggers Daubert-level scrutiny.

For litigators, the practical implication is documentation. Every AI-generated exhibit needs a provenance chain: what tool, what inputs, what settings, what output, and what human modifications, if any.
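One lightweight way to capture that chain is a standard record kept per exhibit. The sketch below is illustrative only, not a court-mandated format; every field name and value here is a hypothetical example mirroring the elements listed above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExhibitProvenance:
    """One record per AI-generated exhibit: tool, inputs, settings,
    output, and human modifications."""
    tool: str                      # e.g. "Claude Design"
    tool_version: str              # version or release date of the tool
    generated_on: date             # when the output was produced
    inputs: list[str]              # source documents the AI worked from
    settings: dict[str, str]       # prompts, templates, options used
    raw_output_file: str           # the unedited AI output, preserved as-is
    human_modifications: list[str] = field(default_factory=list)

# Hypothetical example record for a negotiation timeline exhibit.
record = ExhibitProvenance(
    tool="Claude Design",
    tool_version="2026-01",
    generated_on=date(2026, 2, 3),
    inputs=["negotiation_emails.pdf", "contract_v3.docx"],
    settings={"prompt": "timeline of contract negotiations"},
    raw_output_file="exhibit_12_raw.pdf",
    human_modifications=["corrected date on event 7"],
)
```

A record like this maps one-to-one onto the foundation testimony a court will expect, and it can be produced in discovery alongside the exhibit itself.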

FRE 901 Authentication for AI-Generated Exhibits

Rule 901(a) requires evidence sufficient to support a finding that the item is what the proponent claims it is. For AI-generated exhibits, courts are requiring more than "I used Claude to make this timeline."

The emerging authentication standard has four elements: (1) testimony about the AI tool's capabilities and limitations, (2) evidence that the inputs were accurate and complete, (3) explanation of any human curation or editing of the AI output, and (4) disclosure that the exhibit was AI-generated, including the specific tool and version used.

Several courts have excluded AI-generated exhibits where the proponent couldn't explain how the tool processed the underlying data. A timeline that accurately depicted 50 events but arranged them in an AI-determined visual hierarchy was excluded because the attorney couldn't explain why certain events were visually emphasized over others. The lesson: understand your tool or lose your exhibit.

Structured Demonstratives vs. Synthetic Media

Claude Design generates structured visual content -- slides, diagrams, flowcharts, organizational charts, timelines. It doesn't generate photorealistic images or video. This distinction matters enormously for evidentiary treatment.

A Claude Design timeline of contract negotiations is a demonstrative aid. No one mistakes it for a photograph of reality. It's the AI equivalent of a paralegal creating a PowerPoint slide -- the content is what matters, and the tool is just a means of presentation.

AI-generated photographs and video face entirely different scrutiny because they can deceive. A synthetic image of an accident scene could mislead a jury into believing it depicts the actual scene. Courts apply heightened authentication requirements, often requiring expert testimony about the generation process and limitations.

For trial attorneys, this means Claude Design and similar structured-output tools are far easier to get into evidence than AI image generators. Use AI for structured demonstratives; for anything that purports to depict reality, use photographs and video from actual, authenticated sources.

What Courts Currently Allow and Exclude

As of early 2026, the case law is fragmented but trending toward admissibility for AI-generated demonstrative aids with proper foundation. Courts have admitted AI-generated timelines, financial summaries, relationship diagrams, and data visualizations when the proponent established the accuracy of underlying data and disclosed the AI tool used.

Courts have excluded AI-generated content when: the proponent couldn't identify which AI tool was used, the underlying data wasn't independently verified, the AI output was presented as substantive evidence rather than a demonstrative aid, or the opposing party demonstrated that the AI tool introduced distortions in the presentation.

The safest approach: treat AI-generated exhibits like expert demonstratives. Prepare foundation testimony. Disclose the tool and methodology. Produce the underlying data in discovery. Be ready to explain every element of the visual to the court.

Best Practices for Using AI-Generated Evidence in Litigation

First, document everything. Screenshot your prompts. Save the raw AI output before any edits. Record the tool name, version, and date. This provenance chain is your foundation testimony.
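Saving the raw output is only half the job; you also need to be able to show it hasn't changed since. A cryptographic hash recorded at generation time does that. The following is a minimal sketch using Python's standard library; the file name is a hypothetical example.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 hex digest of a file: fixes the raw output's contents in time."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Example: hash the unedited AI output the moment it is saved, and log the
# digest next to the tool name, version, and date. Any later edit to the
# file produces a different digest, making tampering or revision detectable.
Path("exhibit_12_raw.txt").write_text("raw AI output, before any edits")
digest = fingerprint("exhibit_12_raw.txt")
print(digest)  # 64 hex characters
```

Logging the digest in your pretrial disclosures gives opposing counsel and the court a simple way to verify that the exhibit's raw output is the one you claim it is.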

Second, disclose proactively. Several courts now require affirmative disclosure of AI-generated exhibits in pretrial filings. Even where not required, disclosure prevents opposing counsel from making your methodology the issue instead of your evidence.

Third, separate demonstrative from substantive. Use AI to present data you already have in admissible form. Don't use AI to generate data, create simulations, or produce reconstructions without expert oversight.

Fourth, keep a human in the loop on design choices. If the AI emphasizes certain data points through color, size, or positioning, make sure those emphasis choices reflect the evidence rather than the AI's default design preferences. Your foundation witness must be able to testify that every visual element was intentional.

The Bottom Line: AI-generated demonstrative evidence is admissible with proper foundation; the key is documenting your tool, inputs, and process -- and never presenting AI outputs as photographs of reality.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.