Demonstrative aids — recreated intersections, reconstructed crime scenes, simulated email threads, rendered product defects — have always been a disclosure question under federal local rules. GPT Image 2 changed the math. OpenAI shipped the model on April 21, 2026 at 4K resolution with ~99% character-level text accuracy per the Images 2.0 announcement. An associate can now generate a courtroom-grade demonstrative in 20 minutes from 16 reference photos. Most federal local rules require pretrial disclosure of demonstratives. Almost none specify whether AI-generated demonstratives need separate disclosure as AI-generated. The Bloomberg Law Federal Court AI Standing Orders comparison confirms the gap. With 300+ federal judges running AI standing orders and 1,227 hallucination sanctions documented as of early 2026 per the Charlotin database, the disclosure-rule gap is where the next category of court order will be written. Here's the jurisdictional split, where the gap actually bites, and what trial teams should do this quarter.


The current state of demonstrative aid disclosure rules — and where AI sits inside them

Federal local rules treat demonstrative aids inconsistently. Broadly, three patterns dominate.

Pattern 1: Pretrial disclosure required, AI-status not addressed. The majority pattern. Local rules require all demonstratives to be exchanged with opposing counsel a set number of days before trial, usually 7-14 days. The rules don't distinguish between hand-drawn diagrams, software-rendered reconstructions, photographs, or AI-generated images. SDNY, NDIL, and CDCA all run this pattern per their published local rules.

Pattern 2: Pretrial disclosure plus method-of-creation description. A smaller subset. Northern District of Texas, Western District of Washington, some Eleventh Circuit districts. Local rules or judge-specific orders require disclosure of how the demonstrative was created, including the software used. AI-generated demonstratives plausibly trigger that method-of-creation disclosure, but few orders specifically name AI as a covered method.

Pattern 3: Generative AI standing order plus demonstrative rule. A handful of districts, mostly downstream of the 2023-2024 wave of AI standing orders. The standing order requires disclosure of any generative AI use in court filings; the demonstrative rule requires pretrial disclosure of demonstratives. The intersection case, a demonstrative produced with generative AI, is covered only by the combined operation of the two rules, which means the lawyer has to read both together. Most don't.

The second-order read: the disclosure rule for AI-generated demonstratives is currently a question of rule interaction, not specific text. A trial team that produces an AI-generated demonstrative without disclosing the AI-generation can technically comply with the local pretrial disclosure rule (the demonstrative was disclosed) and still violate the local AI standing order (AI use was not disclosed). That's the most common compliance trap and the one that will produce the first sanctions case. The third-order read: the first published opinion sanctioning an attorney for this trap will set the de facto national standard, the same way Judge Brantley Starr's 2023 NDTX standing order became the template that 50+ subsequent judges adopted with minimal modification. The Federal Rules of Evidence 902 and AI images authentication guide covers the parallel authentication challenge.

Three case-fact patterns that will produce the first sanctions

Based on the structural pattern from text-generation sanctions, three case-fact patterns will produce the first sanctions for AI-generated demonstratives within roughly 90 days of broad GPT Image 2 availability.

Pattern 1: The undisclosed reconstruction. A personal injury matter. Plaintiff's counsel produces an intersection reconstruction as a demonstrative aid for trial. The reconstruction is AI-generated using GPT Image 2 from 16 reference photos taken at varying times of day. The reconstruction is disclosed under the local pretrial demonstrative rule. The AI-generation is not separately disclosed. Defense moves to exclude on authentication grounds, then moves for sanctions when the C2PA manifest reveals AI-generation. The judge issues a sanctions order plus a new standing order requiring AI-generation disclosure for all demonstratives.

Pattern 2: The synthetic exhibit list. A commercial dispute. Plaintiff's counsel produces a series of email-thread screenshots as demonstratives illustrating the alleged contract formation. The screenshots are AI-generated mockups based on real email content (the underlying emails are produced separately as primary evidence). Counsel intended the mockups as illustrative, not as evidence, but the witness testifies based on the mockups. Defense catches the synthetic origin via reverse image search. Sanctions follow under FRE 901 (authentication failure) plus an FRCP 26(g) certification violation.

Pattern 3: The expert's illustrated report. A products liability matter. Plaintiff's expert produces a report illustrating the alleged defect mechanism. The illustrations are AI-generated from the expert's verbal description. The expert discloses AI-tool use to plaintiff's counsel; counsel doesn't pass the disclosure to defense. Defense's deposition of the expert reveals the AI tool. Sanctions follow against counsel under FRCP 26(a)(2)(B) (expert report disclosure obligations) plus the local pretrial demonstrative rule.

All three patterns share a common factor: the AI-generation tool was used inside the prep workflow, but the AI status was never surfaced to opposing counsel. The first sanctioned attorney AI image prediction analysis walks through the structural prediction in detail. This is forecasting by pattern, not a prediction about specific attorneys or cases.

What the disclosure rule should say: the model language drafting exercise

Until federal local rules catch up, trial teams should proactively adopt model disclosure language in their own pretrial filings and exhibit lists. Five clauses produce a defensible disclosure record.

Clause 1: Existence disclosure. Each demonstrative aid disclosed to opposing counsel shall be accompanied by a statement of whether any portion of the demonstrative was created or modified using generative AI. The statement shall identify the specific AI tool (e.g., GPT Image 2, Midjourney v7, Flux Pro 1.1).

Clause 2: Method disclosure. When AI was used, the disclosing party shall identify the inputs (reference images, text prompts, source data) provided to the AI tool and the role of human review and editing in the final output.

Clause 3: Provenance metadata. AI-generated demonstratives shall be produced in their native file format with embedded C2PA Content Credentials metadata preserved. Where C2PA is not available from the generation tool, the disclosing party shall produce a written statement explaining the provenance chain. The C2PA Content Credentials evidence standards guide covers the metadata preservation framework; a quick byte-level check for surviving manifests is sketched after Clause 5 below.

Clause 4: Stipulation or objection deadline. Opposing counsel shall have 14 days from disclosure to stipulate to admissibility, raise authentication objections, or request additional foundational discovery on the AI-generation process.

Clause 5: Foundation witness. When admissibility is contested, the disclosing party shall make available a fact witness or expert with personal knowledge of the AI-generation process to provide foundation testimony at trial or via deposition.
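
Clause 3 is the clause most likely to fail silently: export pipelines, PDF conversion, and slide-deck embedding routinely strip Content Credentials metadata. The sketch below is a minimal, assumption-laden illustration of a pre-flight check before an exhibit list goes out, not a validator: it only scans a file for the JUMBF and "c2pa" byte markers the C2PA spec uses to label an embedded manifest, and the function name and marker list are illustrative choices, not any official API. Cryptographic verification of a manifest should run through the C2PA project's own tooling or an SDK.

```python
# Minimal heuristic sketch (not a validator): flag demonstrative files whose
# embedded C2PA Content Credentials appear to have been stripped in export.
# It only looks for byte markers typical of an embedded manifest ("jumb" for a
# JUMBF superbox, "c2pa" for the manifest store label); verifying the signature
# chain is a job for the C2PA project's tooling or an SDK.
import sys
from pathlib import Path

C2PA_MARKERS = (b"jumb", b"c2pa")  # illustrative marker list, not exhaustive

def probably_has_content_credentials(path: str) -> bool:
    """Return True if the file contains byte markers typical of a C2PA manifest."""
    data = Path(path).read_bytes()
    return all(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    for exhibit in sys.argv[1:]:
        if probably_has_content_credentials(exhibit):
            print(f"{exhibit}: manifest markers present")
        else:
            print(f"{exhibit}: no manifest markers; prepare the Clause 3 written provenance statement")
```

If the check comes back negative on a file that was supposed to carry credentials, re-export from the generation tool or fall back to the Clause 3 written provenance statement before disclosure.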

The second-order read: a trial team that adopts these clauses in its own exhibit list, even when the local rule doesn't require them, establishes a clean disclosure record that survives a post-hoc sanctions challenge. The third-order read: the model language above is structurally similar to what the first published image-disclosure standing order will look like. Adopting it now means the firm doesn't need to retrofit when the order lands.

How AI demonstratives interact with FRE 403 and prejudice analysis

Beyond authentication and disclosure, AI-generated demonstratives raise a distinct evidentiary question under Federal Rule of Evidence 403: exclusion when probative value is substantially outweighed by the danger of unfair prejudice, confusing the issues, or misleading the jury. The LII text of Rule 403 carries the operative language.

Three prejudice vectors apply specifically to AI-generated demonstratives.

Vector 1: The photorealistic illusion. GPT Image 2's 4K resolution and ~99% text accuracy produce outputs that look like photographs. A jury presented with an AI-generated reconstruction may treat it as photographic evidence rather than a demonstrative aid, even when the foundation makes the generation method clear. The 403 argument: the demonstrative is unfairly prejudicial because its visual fidelity outruns the jury's ability to remember that it is not a photograph.

Vector 2: The cumulative-on-fiction problem. AI tools can generate variations of the same scene rapidly. A trial team can produce 10 different demonstratives reconstructing an intersection at slightly different angles, lighting, and times. Each individually may be admissible. Together, they create cumulative prejudice: the jury sees the scene 10 times, and not one of those views is a real photograph. The 403 weighing tilts toward exclusion of cumulative AI demonstratives.

Vector 3: The expert-witness amplification. When an expert witness testifies based on AI-generated illustrations of their methodology, the illustrations gain credibility from the expert's qualifications. If the illustrations contain subtle inaccuracies (and AI-generated outputs often do, even with reasoning-pipeline self-checks), the expert's credibility transfers to inaccurate visual claims. A 403 argument plus FRE 702 (expert testimony reliability) combine for exclusion.

The operational implication: 403 is the most underused tool against AI-generated demonstratives currently, and the most likely to land first because it doesn't require the local rules to update. A motion in limine framed under 403 can exclude an AI-generated demonstrative before authentication is even reached. The deposition exhibits AI image disclosure analysis covers the parallel deposition-stage prejudice argument.

What trial teams should do this quarter: operational moves before the rules catch up

Five operational moves close the disclosure gap without waiting for the rules to update.

Move 1: Audit the demonstrative-creation workflow. Most firms can't currently answer "which AI tools are our paralegals using to prep demonstratives?" Run a 30-day usage audit across litigation support, paralegals, junior associates, and outside graphics vendors. Document every AI tool currently in use. Categorize: prohibited, conditional, approved with disclosure.

Move 2: Update the standard pretrial exhibit list. Add a column or footer noting AI-tool use for each disclosed exhibit. Even if the local rule doesn't require it, adopting it as the firm's standard practice forecloses post-hoc sanctions exposure. A minimal sketch of that register follows Move 5 below.

Move 3: Update the witness preparation checklist. Add three questions: Was any AI tool used in preparing this exhibit? What inputs were provided to the tool? Does the exhibit carry Content Credentials metadata? The witness prep angle on deposition exhibits with AI image disclosure covers the deposition checklist version.

Move 4: Brief the bench proactively when filing. If the matter involves any AI-generated demonstrative, file a status report or proposed pretrial order disclosing the use and proposing authentication procedures. Don't wait for the judge to ask. Preemptive briefing builds credibility and influences how the court writes any subsequent standing order.

Move 5: Track state and local rule developments. Vortex maintains coverage of federal AI disclosure rules at the district level. The state bar ethics opinions on AI image generation analysis tracks parallel state-bar opinion development. Subscribe to both streams to stay ahead of the rule changes that will land in 2026-2027.
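
Moves 1 through 3 converge on a single artifact: a structured exhibit register that carries the audit's per-tool status, the per-exhibit AI disclosure, and the Content Credentials flag. The sketch below shows one hypothetical way to keep that register machine-readable; every field name and the TOOL_STATUS map are assumptions for illustration, not a format required by any local rule or standing order.

```python
# Illustrative sketch only: a structured exhibit register implementing Moves 1-3.
# Field names and categories are hypothetical choices, not any court's format.
import csv
from dataclasses import dataclass, asdict
from typing import Optional

# Move 1: output of the tool audit -- each AI tool in the workflow gets a firm-level status.
TOOL_STATUS = {
    "GPT Image 2": "approved_with_disclosure",
    "Midjourney v7": "conditional",
    "Flux Pro 1.1": "conditional",
}

@dataclass
class ExhibitEntry:
    exhibit_no: str
    description: str
    ai_tool: Optional[str]      # Move 2 / Clause 1: name the generative tool, or None
    ai_inputs: str              # Clause 2: reference images, prompts, source data
    human_review: str           # Clause 2: role of human review and editing
    content_credentials: bool   # Move 3 / Clause 3: C2PA metadata preserved?

    def disclosure_row(self) -> dict:
        """One exhibit-list row with the firm's audit status appended."""
        row = asdict(self)
        if self.ai_tool is None:
            row["tool_status"] = "n/a"
        else:
            row["tool_status"] = TOOL_STATUS.get(self.ai_tool, "unreviewed")
        return row

def write_exhibit_list(entries: list, path: str) -> None:
    """Write the pretrial exhibit list with the AI-disclosure columns appended (Move 2)."""
    rows = [entry.disclosure_row() for entry in entries]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    demo = ExhibitEntry(
        exhibit_no="PX-14",
        description="Intersection reconstruction, demonstrative aid",
        ai_tool="GPT Image 2",
        ai_inputs="16 reference photographs; text prompt describing sight lines",
        human_review="Reviewed and annotated by the sponsoring expert",
        content_credentials=True,
    )
    write_exhibit_list([demo], "exhibit_list_with_ai_disclosure.csv")
```

Keeping the register structured means the Clause 1 and Clause 2 statements can be generated from the same rows at disclosure time, so the exhibit list and the AI disclosure never drift apart.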

My take: the disclosure-rule gap is the single highest-leverage policy area in legal AI right now. Two pages of internal policy work plus exhibit-list discipline produces years of motion-practice insulation before the rules catch up. Firms that ship these five moves this quarter will be the firms shaping the rules when the inevitable sanctions cases land.

The Bottom Line: The federal local rule gap on AI-generated demonstratives is the highest-leverage policy area in 2026 legal AI exposure. Most current rules require demonstrative disclosure but don't address AI-generation specifically; combined with AI standing orders, the rule interaction creates a compliance trap that will produce the first sanctions within 90 days. Trial teams that adopt model disclosure clauses in their own exhibit lists this quarter, without waiting for the rules to update, buy years of motion-practice insulation and shape the standard the rules eventually adopt.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.