Federal Rule of Evidence 902 lists 14 categories of self-authenticating evidence — items that don't require extrinsic proof to be admitted. The 2017 amendments added subsections 902(13) and 902(14) to cover electronic records authenticated by qualified-person certification. Per the Cornell LII text of Rule 902, these are the most recent textual updates directly applicable to digital evidence. Eight years later, neither subsection contemplates synthetic images. OpenAI shipped GPT Image 2 on April 21, 2026 at 4K resolution with ~99% character-level text accuracy per the Images 2.0 announcement. The Advisory Committee on Evidence Rules has had AI-authentication on its agenda since 2024 per the committee agenda books — and deferred each cycle. With 300+ federal judges running AI standing orders and 1,227 hallucination sanctions documented in the Charlotin database, trial teams need a workable authentication framework now, not when the rule updates. Here's how 902 actually applies to AI-generated images, where it falls short, and the foundation strategy that survives a 901 challenge.
What Rule 902 actually covers — and the 902(13)/(14) electronic records framework
Rule 902's 14 self-authenticating categories run from public documents under seal (902(1)) to commercial paper (902(9)) to certified copies of business records (902(11) and (12)). The 2017 amendments added two electronic-records categories that matter here.
Rule 902(13) covers "a record generated by an electronic process or system that produces an accurate result, as shown by a certification of a qualified person that complies with the certification requirements of Rule 902(11) or (12)." The proponent must also meet Rule 902(11)'s notice requirements: reasonable written notice to the adverse party, with the record and certification made available for inspection before trial.
Rule 902(14) covers "data copied from an electronic device, storage medium, or file, if authenticated by a process of digital identification, as shown by a certification of a qualified person that complies with the certification requirements of Rule 902(11) or (12)." Same notice requirements as 902(13).
The LII text of Rule 902 is the operative source. The Committee Notes accompanying the 2017 amendments explain the intent: simplify authentication for digital records produced by reliable systems where chain-of-custody can be documented through process certification rather than live testimony.
The alignment for AI-generated images: an AI-generated image is a record produced by an electronic process, and the C2PA Content Credentials manifest, when present, documents the process that produced the record. A trial team that produces a C2PA manifest plus a qualified-person declaration explaining the C2PA process and the issuer's certificate chain can plausibly invoke 902(13) for self-authentication. The fit isn't perfect; 902(13) was drafted with database queries and forensic disk imaging in mind, not generative AI, but it's the closest existing fit. The C2PA Content Credentials evidence standards spoke covers the manifest framework in detail.
The second-order read: a smart trial team treats 902(13) as the working framework for AI-generated demonstratives until the rules update. Build the certification, give the notice, expect a challenge, win on 902(13) plus 901 alternative grounds. The third-order read: the first published opinion adopting 902(13) for AI-image authentication will become the de facto national template, and the Advisory Committee will likely codify the result in a future Rule 902(15) explicitly covering AI provenance.
Where 902 falls short for AI-generated images: the three structural gaps
Three gaps materially affect 902's application to AI-generated images. Trial teams need workarounds for each.
Gap 1: The "qualified person" requirement. Both 902(13) and 902(14) require certification by a qualified person. For traditional electronic records, that's a custodian of records, an IT administrator, or a forensic examiner. For an AI-generated image, the qualified person could be the OpenAI signing service, the in-house IT staff member who supervised the generation, or an outside forensic expert in C2PA verification. None of these is a clean fit for the rule's drafted intent. A federal judge will likely require the trial team to identify a human qualified person who can speak to the AI-generation process, which means the certification needs to come from someone with personal knowledge of how the image was generated, not just someone who can read the C2PA manifest after the fact.
Gap 2: The "accurate result" requirement. Rule 902(13) requires that the electronic process produce "an accurate result." This was drafted contemplating database queries, forensic image hash verification, and similar deterministic processes. AI generation is non-deterministic by design; the same prompt produces different outputs across runs. The "accurate result" requirement doesn't map cleanly onto generative AI. The workaround: frame the certification narrowly. The C2PA process accurately identifies the generation tool and timestamps; it does not certify the accuracy of the image's content. Authentication under 902(13) only goes as far as "this image was generated by GPT Image 2 on April 25, 2026 at 14:07 UTC," not "this image is an accurate representation of the underlying scene."
Gap 3: The notice requirement. Both 902(13) and 902(14) incorporate the notice requirements of Rule 902(11)/(12): written notice and a copy to opposing counsel reasonably before trial. For AI-generated images that surface late in litigation (e.g., in a deposition exhibit produced shortly before trial), the notice timeline may be impractical. The workaround: build AI-generation disclosure into the standard pretrial exhibit list per the AI demonstratives courtroom disclosure rule gap analysis, so notice is provided as a matter of standard practice rather than scrambled together on the eve of trial.
The deeper structural gap: Rule 902 presumes the record exists in some objective form that can be verified. AI-generated images don't exist in that sense; they're new objects with no physical-world referent. The rule was drafted for evidence that *is* the record. AI images *create* the record. Until the Advisory Committee writes a new subsection, the structural gap is filled by combining 902(13) authentication with FRE 901 general authentication and FRE 401/403 relevance and prejudice analysis.
Rule 901 fallback: when 902 doesn't fit, the general authentication standard
When 902(13) doesn't cover the fact pattern, Rule 901 supplies the general authentication standard. Per the LII text of Rule 901, the proponent must produce "evidence sufficient to support a finding that the item is what the proponent claims it is." The rule lists ten illustrations, including a witness with knowledge (901(b)(1)), comparison with an authenticated specimen (901(b)(3)), distinctive characteristics (901(b)(4)), and evidence describing a process or system (901(b)(9)), which most closely fits AI-generated images.
Rule 901(b)(9) authenticates "by evidence describing a process or system and showing that it produces an accurate result." This is the rule that applies to most AI-generated images that don't have C2PA metadata or whose 902(13) certification is challenged.
The foundation strategy under 901(b)(9): testimony or documentary evidence describing the AI-generation process, the inputs provided, the model used, and the human review applied. This typically requires either (a) a fact witness with personal knowledge of how the image was generated, usually the paralegal or associate who ran the prompt, or (b) an expert witness in AI image generation who can testify to how the process works generally.
The operational implication: every AI-generated demonstrative needs a foundation witness identified at the time of generation. If the paralegal who ran the GPT Image 2 prompt isn't available at trial, the foundation collapses. Build witness identification into the standard demonstrative-creation workflow: name the prompt-runner on every demonstrative as a workflow rule.
The second-order read: 901(b)(9) is more flexible than 902(13) but requires live testimony rather than self-authentication. For demonstratives presented through a witness anyway (which most are), the live-testimony requirement isn't a meaningful additional burden. The third-order read: 901(b)(9) plus C2PA inspection plus disclosure produces a foundation record that's hard to attack on appeal: even when the trial court admits over objection, the appellate record is clean.
What the Advisory Committee is likely to do: the Rule 902(15) prediction
The Advisory Committee on Evidence Rules has tracked AI-authentication on its agenda since at least 2024. Per the committee agenda books, the issue has been deferred multiple cycles pending case-law development and stakeholder consensus. Three structural factors govern the expected timeline.
Factor 1: Rule cycle. The Federal Rules of Evidence amendment cycle runs 3-5 years from initial proposal to effective date. Even with active drafting starting in 2026, an AI-authentication subsection wouldn't take effect before 2029-2030. Courts can't wait that long when GPT Image 2 is shipping today.
Factor 2: Case-law development. Committee practice generally waits for circuit-level case law before drafting amendments. The first reported decisions applying 902(13) or 901(b)(9) to AI-generated images will likely surface in 2026-2027. The first circuit-level opinion will likely surface in 2027-2028. Committee drafting follows.
Factor 3: Stakeholder consensus. Defense bar, plaintiff bar, civil litigation, criminal litigation, and the Department of Justice all have different interests in how AI-authentication rules are drafted. Reaching consensus extends the timeline.
The likely outcome: a future Rule 902(15) explicitly covering AI-generation provenance, structured similarly to 902(13)/(14) but addressing the qualified-person and accurate-result requirements specifically for generative AI. The rule will likely require: (a) preservation of provenance metadata such as C2PA, (b) certification by a qualified person with knowledge of the generation process, (c) disclosure of AI-tool use as part of the certification, and (d) notice to opposing counsel. Trial teams that adopt the equivalent practice now will be prepared for the rule when it lands.
The first sanctioned attorney AI image prediction analysis covers the parallel sanctions-case prediction. The two prediction streams converge; the first sanctions case will accelerate Committee action by surfacing the gap publicly.
The foundation strategy that survives a 901 challenge: the practitioner playbook
The five-step foundation strategy combines 902(13) certification with 901(b)(9) live testimony for AI-generated demonstratives. Every step is operational, not theoretical.
Step 1: Generate with a documented workflow. The paralegal or associate who runs the AI-generation logs the prompt, the inputs (reference images, source data), the model used (GPT Image 2 with named version), the timestamp, and the human review applied. This log becomes the foundation document. Build the log into the AI-generation tooling so it's automatic, not optional.
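One way to make the log automatic rather than optional is an append-only structured log written at generation time. The sketch below is illustrative only; the field names, file format, and `append_log` helper are assumptions, not a standard, and each firm's tooling will differ:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class GenerationLogEntry:
    """One record per AI-generated demonstrative (field names are hypothetical)."""
    prompt: str
    model: str                 # exact model/version string reported by the tool
    operator: str              # the prompt-runner: the 901(b)(9) foundation witness
    inputs: list[str] = field(default_factory=list)   # reference images, source data
    reviewed_by: str = ""      # human reviewer who checked the output
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_log(path: str, entry: GenerationLogEntry) -> None:
    """Append one JSON line per generation.

    An append-only JSON-lines file is trivial to produce in discovery and
    hard to edit silently, which is the point of a foundation document.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

The design choice that matters is append-only with a timestamp captured at write time: the log then exists before anyone knows which demonstrative will be challenged, which is what makes it credible foundation evidence.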
Step 2: Preserve C2PA metadata. GPT Image 2 ships C2PA by default per OpenAI's Images 2.0 documentation. Preserve the manifest through the production pipeline. Don't re-export through tools that strip metadata. The C2PA Content Credentials evidence standards spoke covers the preservation protocol.
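A cheap pipeline check can catch metadata-stripping exports before production. C2PA embeds its JUMBF manifest boxes in JPEG APP11 marker segments, so scanning a file for an APP11 segment is a quick heuristic that provenance data survived. This sketch assumes JPEG output and is a presence check only, not a substitute for real C2PA validation of the manifest and its signatures:

```python
def has_app11_segment(path: str) -> bool:
    """Heuristic: does this JPEG still contain an APP11 (0xFFEB) segment?

    C2PA manifests are carried in APP11/JUMBF segments; their absence
    after an export step strongly suggests the pipeline stripped the
    provenance metadata. This does NOT verify the manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):       # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                              # lost marker sync; stop scanning
        marker = data[i + 1]
        if marker == 0xDA:                     # SOS: entropy-coded data follows
            break
        if marker == 0xEB:                     # APP11: C2PA/JUMBF carrier
            return True
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len                       # length field includes itself
    return False
```

Run a check like this on every file as it exits the export step; a file that arrives `False` after leaving the generator `True` pinpoints exactly which tool in the pipeline stripped the metadata.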
Step 3: Disclose proactively. Include AI-generation disclosure in the standard pretrial exhibit list per the AI demonstratives courtroom disclosure rule gap analysis. Don't wait for the local rule to require it. Build proactive disclosure into firm policy.
Step 4: Identify the foundation witness early. Name the prompt-runner on every demonstrative as a workflow rule. Make sure that person is available at deposition and trial. If the demonstrative is created by an outside graphics vendor, make the vendor's prompt-runner available.
Step 5: Combine 902(13) and 901(b)(9). File a 902(13) certification giving notice to opposing counsel reasonably before trial. Be prepared to fall back to 901(b)(9) live testimony if the certification is challenged. The combined approach produces an authentication record that survives both trial-court rulings and appellate review.
My take: this five-step playbook is two pages of internal policy work plus one workflow tooling change. Every firm with active litigation should adopt it this quarter. The downstream motion-practice leverage, being the firm with a clean authentication record when the inevitable AI-image evidence challenge lands, pays for the operational lift many times over.
The Bottom Line: Federal Rule of Evidence 902 doesn't perfectly fit AI-generated images, but Rule 902(13) plus Rule 901(b)(9) produces a workable authentication framework today. The Advisory Committee won't write a Rule 902(15) explicitly covering AI provenance before 2029-2030; courts can't wait. Trial teams that adopt the five-step foundation playbook this quarter (documented workflow, preserved C2PA, proactive disclosure, identified foundation witness, combined 902(13)/901(b)(9) approach) build authentication records that survive both trial-court rulings and the eventual rule update.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
