Deposition exhibits are where the GPT Image 2 problem lands first. OpenAI shipped the model on April 21, 2026 at 4K resolution with ~99% character-level text accuracy per the Images 2.0 announcement. A deposing attorney shows a witness an image and asks: "Is this a fair and accurate representation of the location on the date in question?" If the image was AI-generated and the witness affirms, the resulting testimony enters the record under a corrupted authentication. The deposition transcript doesn't carry C2PA metadata. The exhibit attached does — if anyone bothers to check. With 300+ federal judges running AI standing orders and 1,227 hallucination sanctions documented in the Charlotin database as of early 2026, the deposition stage is the highest-leverage place to catch AI-image misuse — both as the deposing party and as the defending party. Here's the deposition-prep playbook every litigation associate should be running by Q3 2026.
Why depositions are the leading edge of the AI-image evidence problem
Three structural factors make depositions the place where AI-generated images first cause litigation problems.
Factor 1: Looser authentication standards apply. At trial, exhibits face Federal Rule of Evidence 901 authentication and the local pretrial disclosure rules. At deposition, exhibits are routinely shown to witnesses with minimal foundation — the witness's affirmation or denial *is* the foundation built for later trial use. An AI-generated image presented at deposition without disclosure can be authenticated through witness testimony, then offered at trial under FRE 801(d)(1)(C) (prior identification) or as deposition designation. The authentication challenge that should have happened at trial happens months earlier in a depo room with no judge.
Factor 2: Time pressure favors the deposing party. Depositions run on a clock. Most witnesses don't have time to scrutinize an exhibit forensically. "Is this the intersection on the date in question?" gets a yes-or-no answer, not a request to inspect C2PA metadata. The asymmetry favors the party who brought the exhibit.
Factor 3: Deposition exhibits propagate. A deposition exhibit becomes part of the record, gets attached to summary judgment briefing, gets cited in expert reports, gets used in cross-examination at trial. An AI-generated image that enters the record at deposition without disclosure can re-surface in five different filings before anyone catches it.
The second-order read: the first AI-image sanctions case will likely involve a deposition exhibit, not a trial exhibit. Trial exhibits face higher scrutiny by definition. Deposition exhibits face the lowest scrutiny in the entire litigation lifecycle. The third-order read: smart deposition prep, both as the deposing party and as the defending party, is the highest-leverage place to invest AI-image-evidence training in 2026.
The defending attorney's deposition-prep playbook: six new questions
When you're defending the witness, the goal is to prevent contaminated authentication. Six questions added to the standard witness prep checklist close most of the exposure.
Question 1: Has the witness reviewed the exhibits in advance? Standard prep practice. New importance: if you can review the exhibits with the witness before the deposition, you can flag any image that looks AI-generated and prepare a non-affirming response. "I can't authenticate this image without more information about how it was prepared."
Question 2: What does the witness actually know about each exhibit? Drill into provenance. Did the witness take this photograph? Was it taken by someone the witness knows? Was it produced by a third party? Don't let the witness affirm authentication of an exhibit they don't actually have personal knowledge of.
Question 3: How does the witness handle 'fair and accurate representation' questions? This is the standard authentication formulation. Train the witness to distinguish: "This is a fair representation of what I remember" (testimony about memory) versus "This is an accurate photograph of the scene" (testimony about the physical record). The first survives even if the image turns out to be AI-generated. The second is contaminated by AI-origin.
Question 4: Does the witness know how to ask for provenance information? Train the witness to ask: "Where did this image come from? When was it taken? By whom?" before affirming authentication. The pause to ask is itself a record-protective action.
Question 5: Has the witness been shown anything outside the deposition that they should disclose? AI tools are increasingly used in witness preparation by both sides. If opposing counsel showed the witness an AI-generated illustration during their preparation, that's discoverable and disclosable. Train the witness to surface it.
Question 6: Is the witness comfortable saying 'I don't know' to authentication questions? The most important question. Witnesses eager to please tend to authenticate exhibits they have no actual grounds to authenticate. Train comfort with non-affirmation.
The firm policy template for AI-generated images in evidence prep covers the broader policy framework that this checklist sits inside.
The deposing attorney's playbook: disclosure that protects the record
When you're taking the deposition and intend to use AI-generated demonstratives or illustrations, proactive disclosure protects your own record. Five operational moves.
Move 1: Disclose AI-generation in the exhibit list. When marking exhibits before or during deposition, include a notation in the exhibit list when an exhibit is AI-generated or contains AI-generated content. "Exhibit 17: Reconstruction of intersection at 14th and Main, AI-generated using GPT Image 2 from 16 reference photographs supplied by [witness or counsel]." The disclosure becomes part of the deposition record. No room for later "we didn't know" challenges.
Move 2: Lay foundation through the exhibit's preparer, not the witness. If the AI-generated demonstrative is critical, depose the paralegal or associate who actually ran the GPT Image 2 prompt before depositions of substantive witnesses. The preparer's testimony establishes the foundation under FRE 901(b)(9) (process or system testimony). Then use the demonstrative in subsequent depositions with the foundation already in place. The federal rules of evidence 902 and AI images authentication guide covers the foundation framework.
Move 3: Distinguish demonstratives from photographic records. When using an AI-generated reconstruction, frame the question to make the demonstrative status clear. "I'm going to show you a reconstruction prepared by my office of the intersection at 14th and Main. This isn't a photograph from the date in question. It's a reconstruction. Looking at this reconstruction, can you tell me whether the layout matches your recollection?" The framing prevents the witness from misidentifying the demonstrative as a photographic record.
Move 4: Preserve C2PA metadata in the exhibit production. When producing exhibits to opposing counsel and the court reporter, produce the native file format with C2PA Content Credentials metadata preserved. Re-exporting through PDF or screenshot strips metadata and creates the appearance of provenance scrubbing. The C2PA Content Credentials evidence standards spoke covers the preservation protocol.
Move 5: Build the court reporter's record. Before showing the AI-generated exhibit, state on the record: "Counsel, for the record, Exhibit 17 is an AI-generated reconstruction prepared by my office. C2PA Content Credentials metadata is preserved on the produced file." This creates a transcript record that survives even if the exhibit's metadata is later corrupted.
The second-order read: proactive disclosure at deposition makes the deposing attorney's record bulletproof. No later "counsel hid the AI-generation" challenge survives a clean transcript record. The third-order read: this discipline becomes a procurement asset; clients increasingly evaluate firms on AI-disclosure posture, and a clean deposition record demonstrates it.
Catching opposing counsel's AI-generated exhibits: the inspection workflow
When you suspect opposing counsel introduced an AI-generated exhibit at deposition without disclosure, three inspection steps surface the AI-origin within 24 hours.
Step 1: Inspect C2PA metadata immediately. Drag the exhibit file (in its native format, not screenshotted) into contentcredentials.org/verify. The portal displays the manifest chain. If the issuer is OpenAI, Adobe Firefly, Microsoft Designer, or another AI generator, the exhibit is AI-generated. If no manifest is present, that's not dispositive, but flag it for forensic follow-up.
Step 2: Run reverse image search. Tools like TinEye and Google Reverse Image Search index billions of online images. AI-generated images often share patterns with training data or with prior outputs from the same prompt. A reverse search that surfaces no matches at all is suspicious. Most authentic photographs have at least one prior online appearance somewhere.
Step 3: Run forensic AI-detection tools. Three commercial tools dominate as of April 2026: Optic, Hive Moderation, and Truepic. Each has meaningful false-positive and false-negative rates; none is dispositive, but together they produce a triage signal. The how to detect AI-generated images in discovery production guide covers the multi-tool detection protocol.
If inspection surfaces AI-origin, the response menu includes: object on the record before the deposition continues, request a recess to consult with the witness, request a written disclosure from opposing counsel before continuing, file a post-deposition motion to strike the exhibit and any testimony grounded in it, and consider sanctions motions under FRCP 26(g) (certification violations) and the local AI standing order.
The operational implication: build the inspection workflow into the standard deposition protocol. Have a litigation support staffer available to inspect exhibits in real time during deposition. The 30 seconds it takes to inspect a C2PA manifest can prevent days of post-deposition motion practice.
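For real-time triage, a litigation-support staffer can pre-screen an exhibit file before the formal contentcredentials.org/verify check. The sketch below is a minimal, assumption-laden Python illustration, not forensic tooling: it only tests whether a JPEG even carries an embedded C2PA manifest segment (the C2PA specification stores manifests in JPEG APP11 JUMBF segments). Absence of a manifest proves nothing, since screenshots and re-exports strip metadata; presence means there is a manifest chain for a proper verifier to read.

```python
import struct

# JUMBF box markers that appear inside C2PA manifest segments
C2PA_HINTS = (b"c2pa", b"jumb")

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments; return True if an APP11 (JUMBF)
    segment containing C2PA content is present.

    Absence is NOT proof of authenticity: metadata is stripped by
    screenshots, PDF re-exports, and many email/messaging pipelines.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # Segment length is big-endian and includes its own 2 bytes
        seg_len = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + seg_len]
        if marker == 0xEB and any(h in payload for h in C2PA_HINTS):
            return True  # APP11 segment carrying JUMBF/C2PA content
        i += 2 + seg_len
    return False
```

A True result means only "there are Content Credentials here to inspect"; the actual issuer and signature chain still need a real C2PA verifier such as the contentcredentials.org portal or the open-source c2patool.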
When the witness already affirmed an AI-generated exhibit: damage control
The hardest scenario: the witness already affirmed authentication of an exhibit that turned out to be AI-generated, the deposition transcript is closed, and the exhibit is in the record. Three damage-control moves apply, in order of preference.
Move 1: File a motion to strike. Move under FRCP 30(d)(3) or the local rule equivalent to strike the exhibit and any testimony authenticating it, on grounds of authentication failure under FRE 901 and disclosure failure under the local pretrial rule plus any applicable AI standing order. Include C2PA inspection results and any forensic AI-detection findings in the supporting brief. The motion has the best chance of being granted when filed within 30 days of the deposition.
Move 2: File a Rule 30(b)(6) deposition notice on the AI-generation process. If the exhibit was produced by opposing counsel's office, notice a 30(b)(6) deposition on the topic of how the exhibit was created. The deposition surfaces the prompt, the inputs, the model, and the human review applied. The resulting record can support a sanctions motion or impeach the prior testimony at trial.
Move 3: Cross-examine on the AI-origin at trial. If motion practice doesn't strike the exhibit, the AI-origin becomes impeachment material. Cross-examine the witness at trial: "You testified at deposition that this image is a fair and accurate representation. You weren't told at the time that it was generated by AI from 16 reference photographs, were you?" The impeachment can collapse the witness's credibility on the broader factual claim.
The second-order read: damage control after the fact is harder than prevention through deposition prep. A witness who affirmed authentication faces a reputational cost in changing the testimony, and trial courts are reluctant to disturb closed transcripts absent clear sanctions-grade misconduct. The third-order read: the deposition-prep checklist in section 2 above is the high-leverage move; damage control is the consolation prize when prep wasn't sufficient.
My take: deposition prep on AI-image questions costs minutes per witness and produces years of motion-practice insulation. Every defending attorney should have the six-question checklist memorized by Q3 2026.
The Bottom Line: Depositions are the leading edge of the AI-image evidence problem because authentication standards are looser, time pressure favors the deposing party, and exhibits propagate into trial briefing. Defending attorneys should run a six-question prep checklist that prevents contaminated authentication. Deposing attorneys using AI-generated demonstratives should disclose proactively in the exhibit list, lay foundation through the exhibit's preparer, and build the court reporter's record. When opposing counsel introduces AI-generated exhibits without disclosure, C2PA inspection plus forensic detection tools surface the AI-origin within 24 hours.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
