C2PA — the Coalition for Content Provenance and Authenticity — is the closest thing legal evidence has to an image-provenance standard. Founded in February 2021 by Adobe, Arm, BBC, Intel, Microsoft, and Truepic, the coalition published its first technical specification in 2022 and reached the v2.x specification by late 2025 per the C2PA technical specifications page. Consumer-facing implementation is Content Credentials (contentcredentials.org). OpenAI shipped C2PA on DALL-E 3 in February 2024 and GPT Image 2 inherits it by default per the Images 2.0 announcement on April 21, 2026. With 300+ federal judges writing AI standing orders and 1,227 hallucination sanctions documented through April 2026, courts are now searching for a provenance standard to anchor authentication challenges. C2PA is the standard they'll land on. Here's why, what it actually does, and the discovery production policy every firm should adopt this quarter.


What C2PA actually is — the technical layer in plain English

C2PA embeds cryptographically signed metadata into image, video, and audio files. The metadata describes the asset's origin, the tool that created it, the human or organization that authored it, and every edit applied along the way. The specification calls this metadata block a manifest. Each manifest contains one or more claims: signed assertions about the asset. And each claim is signed by an issuer (a hardware device, a generation tool, an editor) using standard X.509 certificate chains.

When GPT Image 2 generates an image, OpenAI's signing service writes a claim into the manifest stating the image was AI-generated, names the model, timestamps the generation, and signs with OpenAI's certificate. If a user opens the image in Adobe Photoshop and crops it, Photoshop adds a new claim describing the edit and chains its signature to the prior manifest. The full provenance chain is preserved as a sequence of signed claims, each cryptographically linked to the one before it.
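The chain-of-signed-claims structure described above can be sketched in a few lines of Python. This is a toy model, not the real C2PA manifest format: HMAC stands in for X.509 certificate signing, and the field names are illustrative.

```python
import hashlib
import hmac
import json

def sign_claim(prev_digest: str, claim: dict, key: bytes) -> dict:
    """Toy chained claim: the new claim commits to the digest of the prior
    manifest entry, then is 'signed' (HMAC standing in for a real X.509 signature)."""
    body = {"prev": prev_digest, **claim}
    payload = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def digest(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

# A generation claim by the model provider, then an edit claim chained to it.
gen = sign_claim("", {"issuer": "openai", "action": "c2pa.created",
                      "tool": "gpt-image-2"}, b"openai-key")
edit = sign_claim(digest(gen), {"issuer": "adobe", "action": "c2pa.cropped",
                                "tool": "photoshop"}, b"adobe-key")

def verify(chain: list[dict], keys: dict[str, bytes]) -> bool:
    """Walk the chain: recompute each link's back-pointer and each signature."""
    prev = ""
    for entry in chain:
        if entry["prev"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "sig"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(keys[entry["issuer"]], payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["sig"], expected):
            return False
        prev = digest(entry)
    return True

keys = {"openai": b"openai-key", "adobe": b"adobe-key"}
print(verify([gen, edit], keys))   # True: chain intact
edit["tool"] = "gimp"              # tamper with a signed field
print(verify([gen, edit], keys))   # False: signature check fails
```

The point the toy makes concrete: altering any signed field, or reordering the chain, breaks verification, which is exactly what a Content Credentials-aware viewer checks.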

The operational read for legal evidence: the manifest is checkable. Anyone with a Content Credentials-aware viewer can inspect the chain and verify the signatures haven't been tampered with. The bad news: manifests are strippable. A re-export through a tool that doesn't preserve the manifest, a screenshot, a re-save through legacy software, or a deliberate scrub via metadata-removal tools all break the chain. The standard is voluntary at the implementer level. There's no current legal requirement to preserve it.

The deeper problem isn't the strippability; it's the expectation gap. Courts and litigators don't yet know to look for a manifest. When the manifest is stripped, today nobody objects. When the manifest is present and the issuer is OpenAI, today nobody knows what that means. Both gaps close in the next 24 months, but firms operating now sit in the unsettled middle.

Where C2PA sits in current evidence law: the FRE alignment

Federal Rule of Evidence 901 establishes the general authentication standard: "To satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is." The rule's LII text lists ten illustrations, including expert comparison, distinctive characteristics, and "evidence about a process or system". The last of these, Rule 901(b)(9), most closely contemplates technical authentication of digital records.

Rule 902 lists 14 categories of self-authenticating evidence: items that don't require extrinsic proof. The 2017 amendment added subsections 902(13) (certified records generated by an electronic process or system) and 902(14) (certified data copied from an electronic device, storage medium, or file), both authenticated by a qualified person's certification. The full text lives at the Cornell LII Rule 902 page.

The alignment math: C2PA-signed images don't fit cleanly into any 902 subsection because the certifying party is the AI model's signing service, not a human "qualified person" within the meaning of the rule. But 902(14) comes closest: it covers data copied from an electronic device, storage medium, or file, authenticated by a process of digital identification, with a qualified person certifying that process. A C2PA manifest is exactly that kind of record: a signed assertion produced by a documented, verifiable process. A trial team that produces a C2PA manifest plus an expert declaration explaining the C2PA process can plausibly invoke 902(14) for self-authentication.

The second-order read: the Advisory Committee on Evidence Rules has tracked AI authentication on its agenda since 2024. Per the committee agenda books, the issue has been deferred multiple cycles. A future Rule 902(15) explicitly covering AI-generation provenance is a likely outcome, but the rule cycle is 3-5 years, and the issue lands in court before the rule lands in writing. The third-order read: the firms that build a 902(14) authentication record now will have a written authentication template that survives the eventual rule change with minimal modification.

How to inspect a C2PA manifest: the practitioner's three-tool workflow

Inspecting a C2PA manifest is now a 30-second task. Three tools cover the full workflow.

Tool 1: Content Credentials Verify, the consumer-grade inspection portal at contentcredentials.org/verify. Drag and drop an image. The portal displays the full manifest chain, names every issuer, shows every edit, and flags signature failures. Free. No account required. This is the right tool for a first-pass triage on inbound discovery production.

Tool 2: c2patool, the official command-line tool from the Content Authenticity Initiative (github.com/contentauth/c2patool). For batch inspection across thousands of discovery production images, this is the right tool. Output is structured JSON, which can be wired into a litigation-support pipeline that flags any image without a manifest, any image with a stripped manifest, or any image whose generation issuer is on a watchlist.

Tool 3: c2pa-rs (the Rust SDK) and c2pa-python (the Python wrapper), the developer libraries for embedding C2PA inspection into custom tools. For firms with internal litigation-support engineering, this is the right depth. Build manifest inspection into the document review platform's image triage step.

The operational protocol that follows from these tools: every inbound image-asset in discovery production gets passed through c2patool batch inspection. The output gets logged to the matter file. Any image with a stripped or absent manifest gets flagged for forensic review. The how to detect AI-generated images in discovery production guide walks through the full multi-layer detection protocol that sits on top of C2PA inspection.

The second-order read: this protocol is asymmetric in the firm's favor. Producing parties don't currently strip C2PA metadata as a routine matter. Most don't know it's there. Receiving parties who inspect first will find AI-generation evidence the producing party didn't realize they were disclosing. The third-order read: as the protocol becomes standard, producing parties will start scrubbing, at which point C2PA stripping itself becomes an FRCP 37(e) spoliation hook. That's a years-of-litigation moat for receiving parties that build the inspection muscle now.

The discovery production policy every firm should adopt this quarter

A two-page firm-level policy covers the C2PA exposure for the next 24 months. Five clauses minimum.

Clause 1: Production preservation. All images produced in discovery shall preserve any embedded C2PA Content Credentials metadata. Production protocols that strip metadata to reduce file size shall be modified to preserve the manifest. This single clause closes the largest current exposure: routine spoliation of provenance data through metadata-stripping export pipelines.

Clause 2: Inspection of inbound production. All inbound image assets in discovery production shall be passed through C2PA manifest inspection (c2patool batch process or equivalent) within 7 days of receipt. Findings shall be logged to the matter file. Images with stripped or absent manifests shall be flagged for forensic follow-up. Images flagged as AI-generated by a manifest issuer shall be reviewed for authentication implications before use.

Clause 3: Disclosure on production. When the producing party is aware that a produced image is AI-generated (e.g., a demonstrative aid, a synthetic illustration), the production cover letter shall disclose the AI-generation status and identify the generation tool. This clause aligns with the broader trend in federal AI standing orders requiring AI-use disclosure and gets ahead of the inevitable image-specific orders.

Clause 4: Witness preparation. Deposition preparation checklists shall include questions about AI-tool use in image preparation: "How was this image obtained? Was any AI tool used in its creation or modification? Does it carry Content Credentials metadata?" The deposition exhibits AI image disclosure analysis covers the witness-prep angle in detail.

Clause 5: Vendor and outside expert requirements. Outside experts and graphics vendors retained to prepare demonstrative aids shall preserve C2PA metadata on any AI-generated assets, disclose AI-tool use in their work product, and certify the chain of custody from generation to production.

The firm policy template for AI-generated images in evidence prep provides the structural framework: lawyers customize for jurisdiction and practice.

Where C2PA falls short: the operational gaps every firm should know

The standard is the right starting point, not the finish line. Three gaps materially affect legal use and need workarounds today.

Gap 1: Strippability. C2PA manifests are removable by any tool that doesn't preserve them. A determined adversarial party can produce an AI-generated image, strip the manifest, and present the result without provenance. The technical workaround is soft binding: a C2PA spec feature that ties the manifest to a watermark or perceptual fingerprint of the content itself, so a stripped manifest can be recovered by querying a cloud-side manifest registry. Soft-binding adoption is partial. Few generation tools currently register recoverable manifests. OpenAI's GPT Image 2 implementation embeds the manifest without a recoverable soft binding as of April 2026 per the technical documentation.
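The registry-recovery idea can be sketched with a toy fingerprint and an in-memory dict standing in for the cloud-side registry. The fingerprint below would not survive real-world re-encoding (real systems use robust watermarks or perceptual hashes); it only illustrates the lookup flow.

```python
import hashlib

def toy_fingerprint(pixels: list[list[int]]) -> str:
    """Toy content fingerprint: threshold each grayscale pixel against the
    image mean, then hash the bit pattern. Illustrative only; a real
    perceptual fingerprint tolerates cropping and re-compression."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = "".join("1" if p > mean else "0" for p in flat)
    return hashlib.sha256(bits.encode()).hexdigest()

# Cloud-side manifest registry, keyed by content fingerprint (in-memory stand-in).
registry: dict[str, dict] = {}

image = [[10, 200], [190, 20]]
manifest = {"issuer": "ExampleAI", "action": "c2pa.created"}  # hypothetical issuer
registry[toy_fingerprint(image)] = manifest   # generator registers at creation time

# Later: the file's embedded manifest was stripped, but the pixel data survives,
# so the fingerprint still resolves to the registered manifest.
recovered = registry.get(toy_fingerprint(image))
print(recovered)
```

The legal-evidence upshot of the pattern: stripping the embedded manifest stops mattering once the content itself is the lookup key, which is why registry-backed recovery is the long-term answer to Gap 1.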

Gap 2: Issuer trust. The C2PA standard relies on certificate trust. A manifest signed by OpenAI's certificate carries OpenAI's trust posture. A manifest signed by a self-issued certificate from an unknown party carries close to zero. Courts have no current framework for assessing C2PA issuer trust. Building one will take years. In the meantime, treat issuer trust as a forensic question and build a watchlist of trusted issuers (OpenAI, Microsoft, Adobe, Leica) and a flag-list of unknown issuers.
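In the meantime, the watchlist can be a literal lookup. A minimal sketch of a firm-policy disposition function follows; the tier names and issuer lists are policy choices invented for this sketch, not anything the C2PA spec defines.

```python
# Firm-maintained lists (illustrative): vetted issuers get accepted,
# everything else gets a human decision.
TRUSTED = {"OpenAI", "Microsoft", "Adobe", "Leica"}

def issuer_disposition(issuer: str, self_signed: bool) -> str:
    """Map a manifest issuer to a forensic disposition for the matter file."""
    if self_signed:
        # Self-issued certificate from an unknown party: near-zero trust.
        return "escalate: self-issued certificate"
    if issuer in TRUSTED:
        return "accept: vetted issuer"
    # Unknown but CA-issued: verify the certificate chain before relying on it.
    return "review: unlisted issuer, verify certificate chain"

print(issuer_disposition("OpenAI", False))
print(issuer_disposition("Pixel Mill LLC", False))
print(issuer_disposition("anon", True))
```

Three tiers is enough for now; the point is that the disposition is logged per image rather than decided ad hoc at motion time.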

Gap 3: Coverage. Not every generation tool ships C2PA. Midjourney does not ship it by default as of v7. Stable Diffusion implementations vary. Flux Pro 1.1 ships partial C2PA depending on the deployment surface. The GPT Image 2 vs Midjourney vs Flux legal disclosure comparison covers the per-tool C2PA posture. The operational implication: absence of a manifest doesn't prove non-AI-generation. It proves only that no compliant tool signed the asset.

The second-order read: these gaps will close gradually as major platforms (Microsoft, Adobe, Apple, Google) roll out C2PA into the operating-system and consumer-app layer. macOS Sequoia and iOS 18 added partial C2PA support. Windows 11 shipped Content Credentials display in 2025. The third-order read: by 2028, C2PA will be operating-system-native and metadata stripping will become an affirmatively suspicious act, which closes the strippability gap by social rather than technical means.

What this means for evidence preservation orders and litigation holds

The interaction between C2PA and existing evidence preservation doctrine is the most underdeveloped legal area in this space. Three takeaways.

Takeaway 1: C2PA metadata is electronically stored information. The FRCP's e-discovery framework treats embedded metadata as part of the ESI where it is reasonably accessible, and a C2PA manifest embedded in an image file is reasonably accessible within the meaning of FRCP 26(b)(2)(B): any C2PA-aware tool extracts it in seconds. That means the manifest is subject to the same preservation duty as the image itself. A litigation hold that preserves the image but allows metadata stripping arguably violates the preservation duty.

Takeaway 2: FRCP 37(e) spoliation analysis applies to C2PA stripping. The rule covers ESI "that should have been preserved in the anticipation or conduct of litigation." If a producing party stripped C2PA metadata after the duty to preserve attached, sanctions analysis under 37(e)(1) (curative measures) and 37(e)(2) (intentional deprivation) becomes available. The first-published opinion applying 37(e) to a C2PA-stripping fact pattern will set the national template. And trial teams that flag the strippage in motion practice now will be the teams shaping that template.

Takeaway 3: Litigation hold notices need C2PA language. Standard hold notices already cover "electronic communications, electronic documents, and metadata." Adding explicit C2PA language ("including any embedded Content Credentials, provenance metadata, or signed claim chains") costs one sentence and forecloses the "we didn't know we had to preserve that" defense. The federal rules of evidence 902 and AI images authentication guide covers the broader authentication framework that sits downstream.

My take: every firm with active litigation should add a C2PA-preservation paragraph to their standard hold notice template this week. It costs nothing and creates motion-practice leverage downstream when the inevitable spoliation cases land.

The bottom line: C2PA Content Credentials is the standard federal courts will adopt for AI-generated image provenance, not because it's perfect, but because it's the only standard that exists. Firms that build C2PA inspection into discovery production protocols this quarter, add C2PA preservation language to litigation hold notices, and train deposition prep on provenance questions are buying years of motion-practice insulation for marginal operational lift. The strippability gap is real but closing. Build the muscle now while the tooling is asymmetric in the inspecting party's favor.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.