Most law firms can't currently answer "which AI image-generation tools are our paralegals using to prep demonstratives?" That's the operational gap GPT Image 2 turned into immediate exposure. OpenAI shipped the model on April 21, 2026 at 4K resolution with ~99% character-level text accuracy per the Images 2.0 announcement. With 300+ federal judges running AI standing orders and 1,227 hallucination sanctions in the Charlotin database as of early 2026, firms need a written AI-image policy by Q3 2026, before the first sanctions case in this category lands.

This is a template structure, not a fully drafted policy. The clauses below describe the framework every firm should customize for its jurisdiction, practice mix, and risk tolerance. Have your general counsel and an outside ethics consultant review it before adopting. The structural framework matters more than the specific language; what follows covers the seven clauses every firm-level AI-image policy needs.


Why a written policy beats informal practice — the structural argument

Three reasons a written policy outperforms ad-hoc practice on AI-image tooling.

Reason 1: Discoverable defense. When the inevitable sanctions motion lands or the first AI-image evidence challenge surfaces, the firm's response is materially stronger when supported by written policy that pre-dated the event. "Our firm has a written AI-image policy that requires X, Y, and Z; the exhibit at issue followed those requirements" is a defensible litigation posture. "Our firm doesn't have a written policy but generally we try to follow best practices" is not.

Reason 2: Consistent training signal. Paralegals, junior associates, outside graphics vendors, and litigation support staff all need the same instruction. A written policy is the only way to deliver consistent training across these cohorts. Verbal guidance varies by who delivered it and when. Written policy persists.

Reason 3: Procurement asset. Sophisticated clients, particularly in-house counsel at Fortune 500 companies, increasingly evaluate outside firms on AI governance posture. A written AI-image policy is a procurement document. It surfaces in beauty contests, RFP responses, and engagement letter negotiations. Firms that have one have a procurement edge over firms that don't.

The second-order read: the written-policy advantage compounds. Clients that selected the firm partly for its AI governance posture become harder to lose. The firm's reputation for AI discipline becomes a referral asset. The third-order read: this is one of the rare 2026 windows where two pages of policy work compound into a multi-year competitive position. The opportunity closes when AI-image policies become standard across the industry, likely within 18-24 months.

The AI demonstratives courtroom disclosure rule gap analysis covers the related rule-disclosure framework that intersects with this policy.

Clause 1: Scope and applicability

The opening clause defines what the policy covers and who it applies to. Three elements.

Scope element 1: Tool coverage. The policy applies to any AI image-generation tool used in connection with firm legal work. This includes general-purpose tools (GPT Image 2, Midjourney v7, Flux Pro 1.1, Adobe Firefly, Gemini Imagen, Stable Diffusion deployments), specialized legal-tech tools that include image generation, and any future AI tool that produces images, videos, or composite visual content. The clause should be technology-neutral so it survives new tool releases.

Scope element 2: Personnel coverage. The policy applies to all firm personnel (partners, associates, paralegals, litigation support staff, summer associates, contract attorneys) and to outside vendors retained by the firm for graphics, demonstrative preparation, or document production. Vendor agreements should incorporate the policy by reference.

Scope element 3: Work-product coverage. The policy applies to all visual outputs produced for use in firm legal work, including but not limited to: trial demonstratives, deposition exhibits, expert report illustrations, settlement materials, mediation visuals, client presentations, and discovery production responses. The clause should not exempt low-stakes uses; the disciplinary value comes from consistent application.

The second-order read: scope clauses that try to carve out exceptions (e.g., "this policy doesn't apply to internal-use marketing materials") undermine the firm's defensible posture when the carved-out work surfaces in a litigation context. Better to apply the policy uniformly and rely on the substantive clauses to scale obligations to risk level. The third-order read: technology-neutral language is essential because the AI image-generation field will produce 5-10 new tools in the next 24 months. A policy that names specific tools will need rewriting; a policy that addresses any AI image-generation tool will not.

The GPT Image 2 vs Midjourney vs Flux legal disclosure comparison covers the per-tool fit considerations that complement scope clause drafting.

Clause 2: Approved, conditional, and prohibited use categories

The second clause categorizes AI image-generation tool use by risk level. Three categories.

Category 1: Approved with disclosure. Use cases that are operationally low-risk when accompanied by appropriate disclosure: courtroom demonstrative aids (with C2PA metadata preserved and AI generation disclosed in the pretrial exhibit list), expert report illustrations (with disclosure to opposing counsel and the expert's certification of methodology), client presentations and pitch materials (no formal disclosure required, but internal firm documentation is), and litigation support brainstorming (early-stage prep work that doesn't enter the record).

Category 2: Conditional approval. Use cases requiring case-by-case approval by the supervising partner: deposition exhibits (because of the looser authentication standard at the deposition stage, per the deposition exhibits AI image disclosure analysis), settlement-leverage exhibits (because adversarial parties may not have inspection capability), and discovery production responses (because of the C2PA preservation duty per the C2PA Content Credentials evidence standards spoke).

Category 3: Prohibited absent extraordinary circumstances. Use cases that create unacceptable risk: any AI image presented as a photographic record of a real event, any AI image used to misrepresent a fact (regardless of disclosure), any AI image produced from another party's photographs without permission and disclosure, any AI image used in criminal defense work without supervising partner pre-approval (privilege complications), and any AI image generation using consumer-grade tools where data may train future models (per the Heppner ruling framework).

The operational discipline: every use case in the firm's actual practice should map to one of these three categories. If a use case doesn't map cleanly, escalate to firm general counsel for case-by-case determination.

Clause 3: Documentation and provenance preservation

The third clause specifies the documentation discipline that supports authentication and discovery preservation. Five elements.

Element 1: Workflow log. Every AI image-generation event creates a contemporaneous log entry capturing: the prompt used, the inputs (reference images, source data), the model and version, the timestamp, the human who ran the generation, and the human review applied before the output was used. The log lives in the matter file. Build the log into the AI-tool deployment so it's automatic, not voluntary.
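As a concrete illustration only, not policy language: here is a minimal sketch of what an automatic log entry could look like if the firm stored entries as a JSON-lines file in the matter directory. The field names, file name, and Python implementation are illustrative assumptions, not requirements of the clause.

```python
# Illustrative only: field names and file layout are assumptions, not policy requirements.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class ImageGenerationLogEntry:
    matter_id: str        # firm matter number the asset belongs to
    prompt: str           # full prompt text as submitted to the tool
    input_files: list[str]  # reference images or source data supplied
    model: str            # tool name and version used for the generation
    operator: str         # person who ran the generation
    human_review: str     # summary of the review applied before the output was used
    output_file: str      # path to the native output file (see Element 3)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_matter_log(entry: ImageGenerationLogEntry, matter_dir: Path) -> None:
    """Append one JSON line to the matter file's generation log."""
    log_path = matter_dir / "ai_image_generation_log.jsonl"
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

The design point is the one the clause makes: the append happens inside the tool deployment itself, so the log is a byproduct of generation rather than a task someone remembers to do afterward.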

Element 2: C2PA metadata preservation. Every AI-generated image asset retains its embedded C2PA Content Credentials metadata through every stage of firm use: generation, internal review, production to opposing counsel, exhibit marking, trial use. Re-exports through tools that strip metadata are prohibited unless a written exception is documented for that asset.

Element 3: Native file format retention. AI-generated assets are retained in their native file format (PNG, WebP, etc., as produced by the generation tool) in addition to any rendered or production formats. The native file is the authoritative record for C2PA inspection if challenged.
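For illustration, a pre-production spot check along these lines could flag native files whose Content Credentials have been stripped by an intermediate re-export. This sketch assumes the Content Authenticity Initiative's open-source c2patool CLI is installed and on PATH; the directory path is hypothetical, and the tool's exact output and exit-code behavior should be verified against the installed version before relying on a check like this.

```python
# Illustrative pre-production check. Assumes the open-source c2patool CLI is installed;
# its output format and exit codes vary by version, so verify against the installed release.
import subprocess
from pathlib import Path

def has_c2pa_manifest(native_file: Path) -> bool:
    """Return True if c2patool reports a C2PA manifest for the native file."""
    result = subprocess.run(
        ["c2patool", str(native_file)],  # basic invocation prints the manifest, if any
        capture_output=True,
        text=True,
    )
    # A non-zero exit or an empty report is treated here as "no manifest found";
    # a stripped-metadata re-export would fail this check and need a documented exception.
    return result.returncode == 0 and bool(result.stdout.strip())

if __name__ == "__main__":
    for asset in Path("exhibits/native").glob("*.png"):  # hypothetical matter directory
        status = "C2PA present" if has_c2pa_manifest(asset) else "C2PA MISSING - review"
        print(f"{asset.name}: {status}")
```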

Element 4: Vendor provenance certification. When AI-generated assets are produced by outside graphics vendors, the vendor delivers (a) the workflow log, (b) the native file with C2PA preserved, and (c) a written certification of the chain of custody from generation to delivery.

Element 5: Litigation hold integration. Standard litigation hold notice templates incorporate AI workflow logs, prompts, inputs, and provenance metadata in the preservation duty. Per the federal rules of evidence 902 and AI images authentication guide, this aligns the firm's hold practice with FRCP 26(b)(2)(B) ESI obligations and forecloses spoliation challenges under FRCP 37(e).

The second-order read: documentation discipline is what turns informal AI use into a defensible authentication record. The third-order read: the documentation itself becomes a procurement asset. Clients evaluating firms on AI governance increasingly ask to see the documentation framework.

Clause 4: Disclosure obligations to courts, opposing counsel, and clients

The fourth clause specifies who gets told what about AI-image use. Three audience tiers.

Tier 1: Courts. When the firm produces an AI-generated demonstrative, deposition exhibit, or other visual asset that will be presented in a court proceeding, the firm proactively discloses the AI-generation in the pretrial exhibit list, witness preparation notes, expert report (if applicable), and any pretrial filing referencing the asset. The firm does not wait for the court to ask. Per the AI demonstratives courtroom disclosure rule gap analysis, proactive disclosure protects the firm's record even when the local rule doesn't strictly require it.

Tier 2: Opposing counsel. All AI-generated assets disclosed to courts are also disclosed to opposing counsel with the same level of detail. The disclosure includes the model used, the workflow log summary, and the C2PA metadata status. The disclosure timing follows the local pretrial rule (typically 7-14 days before trial) plus any case-specific scheduling order.

Tier 3: Clients. The firm's engagement letter discloses the firm's AI-tool use posture, including the use of AI image-generation tools where appropriate to the matter. Clients with strict AI prohibitions (e.g., regulated-industry clients with their own AI governance frameworks) can opt out of AI-tool use as part of engagement terms. The firm honors opt-outs strictly; even minor AI-tool use in a matter where the client opted out creates client-relationship and conflict-of-interest exposure.

The operational implication: the engagement letter's AI clause is itself a procurement asset. Clients increasingly evaluate firms on transparent AI disclosure. Building the disclosure into engagement terms surfaces the firm's discipline as a matter of standard practice rather than special handling.

Clauses 5-7: Training, supervision, and enforcement

Three remaining clauses round out the structural framework.

Clause 5: Training requirements. All firm personnel and outside vendors authorized to use AI image-generation tools complete annual training covering: the policy's substantive requirements, the tool-specific operational discipline (workflow logs, C2PA preservation, disclosure obligations), the current state of federal and state evidence rules and AI standing orders, and the disciplinary path for policy violations. New personnel complete the training before they are authorized to use AI tools. Document training completion in the personnel file.

Clause 6: Supervision structure. A designated AI Governance Partner, a partner with both technology fluency and ethics committee experience, has ultimate authority over AI-image policy interpretation and case-by-case approvals under Category 2 (conditional approval). The AI Governance Partner reports periodically to the ethics committee on AI-image use patterns, policy effectiveness, and any incidents requiring corrective action. The role doesn't need to be full-time but does need to be named.

Clause 7: Enforcement and disciplinary path. Policy violations follow a four-step disciplinary path: (a) first violation: written warning plus mandatory re-training; (b) second violation: written reprimand plus formal review by the AI Governance Partner; (c) third violation: formal disciplinary action through firm management; (d) violations involving sanctions exposure or court-imposed penalties: immediate review and potential termination of employment or the vendor relationship. The disciplinary path is documented in firm management policy and personnel files.

The second-order read: most firms underweight enforcement. A policy without a disciplinary path is aspirational, not operational. The third-order read: the AI Governance Partner role becomes increasingly important over time. By 2028, every firm with active litigation will need this role, just as every firm now has a designated ethics partner.

My take: this seven-clause framework is two pages of policy work that compounds into years of operational discipline and procurement advantage. Every firm should adopt it in some form by Q3 2026. Customize the language for jurisdiction and practice, but don't skip the structural elements. The first sanctioned attorney AI image prediction analysis covers the timing pressure that makes Q3 2026 the right adoption window.

The Bottom Line: A written AI-image policy is the highest-leverage governance work available to law firms in 2026. The seven-clause framework (scope, use categories, documentation, disclosure, training, supervision, enforcement) covers the structural elements every firm needs. The specific language must be customized for jurisdiction and practice; the structural framework is universal. Two pages of policy work, customized with general counsel and ethics consultant review, produces years of motion-practice insulation, training consistency, and procurement advantage. Adopt by Q3 2026, before the first sanctions case in this category lands.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.