As of April 2026, no state bar has issued a formal ethics opinion specifically addressing AI image generation in legal practice. Multiple state bars — California, New York, Florida, Texas, Washington, Illinois, the District of Columbia, and others — have published general AI ethics opinions covering text generation and confidentiality, but none yet addresses image generation specifically. With OpenAI shipping GPT Image 2 on April 21, 2026 at 4K resolution with ~99% character-level text accuracy per the Images 2.0 announcement, and with 300+ federal judges running AI standing orders per the Bloomberg Law tracker, state bar opinions on image generation will arrive within 12-18 months. The structural pattern from text-generation opinions is clear: federal court sanctions cases produce state bar attention, state bar attention produces opinions, opinions produce model rule updates. Here's the current state of bar AI ethics opinions, the gaps that image generation surfaces, and what firms should expect from state bar guidance over the next 18 months.


Current state of bar AI ethics opinions — what's published as of April 2026

State bar AI ethics opinions have been arriving in a steady stream since 2023, but coverage is uneven and image generation is not yet specifically addressed.

California State Bar. The California Standing Committee on Professional Responsibility and Conduct issued Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law in November 2023, with updates in 2024 and 2025. The guidance covers competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1 and 5.3), communication (Rule 1.4), and candor (Rule 3.3) under the California Rules of Professional Conduct. The current text addresses generative AI broadly without distinguishing text from image generation.

New York State Bar Association. NYSBA's Task Force on Artificial Intelligence published its Report and Recommendations on the Practice of Law and the Ethical Implications of Generative AI in April 2024. Coverage is broad and includes guidance on candor, competence, and confidentiality under New York Rules of Professional Conduct.

Florida Bar. Florida Bar Ethics Opinion 24-1 (January 2024) addressed generative AI, focusing on confidentiality (Rule 4-1.6) and competence (Rule 4-1.1). The opinion frames AI use as analogous to use of nonlawyer assistance under Rule 4-5.3.

ABA Formal Opinion 512. The American Bar Association published Formal Opinion 512 in July 2024, addressing generative AI tools across the Model Rules of Professional Conduct framework. While ABA opinions are not binding on state bars, they're widely cited and influence state-level opinion drafting.

Texas, Washington, Illinois, and DC. Each has issued general AI ethics guidance through 2024-2025, with similar coverage of confidentiality, competence, and supervision frameworks under their respective rules.

The second-order read: state bar AI ethics opinions are numerous but technology-neutral. The current opinions apply to image generation by extension: the duties of competence, confidentiality, supervision, and candor cover any generative AI use. But the specific guidance for image generation hasn't been written. The third-order read: lawyers and firms must currently extrapolate from text-generation guidance to image generation, which creates inconsistent practice and exposure for the firms whose extrapolation turns out to be wrong when specific opinions land. The firm policy template for AI-generated images in evidence prep covers the framework that should anchor firm-level discipline regardless of which state's bar issues image-specific guidance first.

The Model Rules framework: how existing rules cover AI image generation

Five Model Rules of Professional Conduct apply to AI image generation by extension from existing text-generation guidance.

Rule 1.1: Competence. Per ABA Comment 8 (added 2012), competence includes "the benefits and risks associated with relevant technology." Lawyers using AI image generation must understand the technology's capabilities and limitations, including the C2PA Content Credentials framework, the difference between AI-generated and photographic evidence, and the authentication implications. Lawyers who don't understand the technology and use it anyway create competence exposure.

Rule 1.6: Confidentiality. Use of AI tools that train on user inputs creates confidentiality exposure. The Heppner ruling (US v. Heppner, SDNY February 2026), addressed in detail in the Heppner explainer, established that consumer AI tool inputs may not be protected by privilege. AI image generation using consumer-tier tools on matter-context material creates Rule 1.6 exposure. Enterprise-tier tools (ChatGPT Business or Enterprise, Claude Team or Enterprise, Flux Pro 1.1 with an enterprise DPA) carry the contractual data-handling commitments needed to mitigate Rule 1.6 exposure.

Rule 3.3: Candor toward the tribunal. AI-generated demonstratives, exhibits, or other visual evidence presented to a court without disclosure of AI-origin can constitute lack of candor. Even when local rules don't strictly require AI disclosure, Rule 3.3's broader candor duty arguably requires it when the AI-origin would be material to the court's evaluation of the evidence. The AI demonstratives courtroom disclosure rule gap analysis covers the rule-disclosure interaction.

Rule 4.1: Truthfulness in statements to others. Statements to opposing counsel about exhibits or evidence that fail to disclose AI-origin when material can constitute affirmative misstatement under Rule 4.1. The duty extends beyond formal court proceedings to communications with opposing counsel during discovery, depositions, and settlement negotiations.

Rule 5.3: Responsibilities regarding nonlawyer assistance. AI tools and the personnel using them function analogously to nonlawyer assistance under Rule 5.3. Lawyers retain responsibility for AI-assisted work product. Inadequate supervision of AI image generation by paralegals, associates, or outside graphics vendors creates Rule 5.3 exposure.

The operational implication: firms can build a defensible AI-image practice today by documenting compliance with these five Model Rules. The structural framework applies regardless of which state's bar issues specific image-generation opinions first.

Where image generation surfaces gaps in current opinions

Three gaps in current state bar AI ethics opinions create exposure for firms using AI image generation.

Gap 1: Provenance metadata duties. Current opinions don't address whether lawyers have an affirmative duty to preserve C2PA Content Credentials metadata in client files, work product, or evidence production. The duty exists by extension from the competence, confidentiality, and candor frameworks, but no published opinion says so directly. Firms that strip metadata as a matter of routine practice risk later opinions finding that practice inconsistent with those duties. Per the C2PA Content Credentials evidence standards spoke, preservation should be the default firm practice.
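The preservation default is checkable in practice. As a minimal sketch (assuming JPEG work product): C2PA manifests are embedded in JPEG files as APP11 (JUMBF) marker segments, so a quick scan can flag files whose Content Credentials were stripped by an intermediate tool. This only detects presence of the container; actual signature verification requires the C2PA SDK or c2patool.

```python
def jpeg_has_app11(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP11 (0xFFEB)
    marker segment, where C2PA/JUMBF manifests are embedded.
    Presence is a necessary (not sufficient) sign credentials survived."""
    if not data.startswith(b"\xff\xd8"):      # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # lost sync with marker stream
            break
        marker = data[i + 1]
        if marker == 0xEB:                    # APP11: JUMBF / C2PA container
            return True
        if marker == 0xD9:                    # EOI: end of image
            break
        seg_len = int.from_bytes(data[i + 2 : i + 4], "big")
        i += 2 + seg_len                      # skip marker byte pair + payload
    return False
```

In a firm workflow, a check like this would run at export or filing time, flagging images whose provenance container disappeared somewhere between the generation tool and the matter file.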

Gap 2: Disclosure to clients about AI use. Current opinions vary on whether lawyers must affirmatively disclose AI use to clients. California's guidance suggests disclosure when AI use would be material to the representation. Florida's opinion frames AI use as analogous to nonlawyer assistance, which doesn't typically require client notification. The variation creates compliance uncertainty for multi-jurisdictional firms. For AI image generation specifically, where the work product is visual evidence that may be central to case strategy, the disclosure question is sharper. Best practice today: disclose AI image generation in the engagement letter or in matter-specific status updates.

Gap 3: Bar discipline for AI-image misuse. Current opinions don't address what disciplinary path applies to lawyers who use AI image generation in ways that violate the rules. The general disciplinary process applies, but specific guidance on aggravation factors (e.g., undisclosed AI use, fabricated evidence) hasn't been written. The first state bar disciplinary case in this category will set the local precedent. Firms with practices in multiple states will face inconsistent disciplinary expectations until opinions consolidate.

The second-order read: these gaps create asymmetric exposure for firms operating before specific opinions land. The compliance posture that satisfies general AI ethics guidance may turn out to be insufficient when image-specific opinions arrive. Conservative firms over-comply now and adjust later. Aggressive firms under-comply and risk having to defend the gap when the opinions arrive.

The third-order read: the conservative posture is asymmetric in the firm's favor. The cost of over-compliance is marginal operational lift. The cost of under-compliance is potential disciplinary action plus reputational damage. The math favors conservative discipline today.

What state bar opinions on AI image generation will likely cover

Based on the structural pattern from text-generation opinions and the unresolved gaps in current guidance, image-generation opinions will likely address six topics.

Topic 1: Affirmative provenance preservation duty. Opinions will likely conclude that lawyers must preserve C2PA Content Credentials and similar provenance metadata when producing AI-generated visual work product. The duty will flow from competence (understanding the technology), candor (preserving what authentication requires), and confidentiality (preserving the chain of custody) frameworks combined.

Topic 2: Mandatory disclosure to courts. Opinions will likely conclude that AI-generated demonstratives, exhibits, or other visual evidence presented to a court require disclosure of AI-origin under Rule 3.3 candor duties, independent of whether local rules strictly require it. The opinions will address the materiality threshold (when AI-origin must be disclosed vs. when disclosure is not required).

Topic 3: Mandatory disclosure to opposing counsel. Opinions will likely conclude that AI-generated visual work product produced or shared with opposing counsel requires disclosure of AI-origin under Rule 4.1 truthfulness duties. Same materiality threshold analysis as Topic 2.

Topic 4: Client disclosure standards. Opinions will likely consolidate the current variation across states. The most likely consensus: disclosure required when AI image generation is used in a matter-specific way that might affect case strategy or work product attribution. Routine internal-use AI generation (administrative tasks, brainstorming) likely won't require disclosure.

Topic 5: Supervision standards for paralegals and outside vendors. Opinions will likely conclude that lawyers retain Rule 5.3 supervision responsibility for AI image generation by paralegals, associates, summer associates, and outside graphics vendors. The supervision standard will require documentation of AI tool use, review of outputs, and integration with firm-level AI policy.

Topic 6: Disciplinary aggravation factors. Opinions will likely identify aggravation factors specific to AI image misuse: undisclosed AI use in court submissions, fabricated visual evidence, AI image use in violation of explicit local rules or court orders, and use of consumer-grade tools for matter-context work in violation of confidentiality duties.

The operational implication: the firm's current AI-image policy should already address these six topics in some form per the firm policy template. Firms that anticipate the opinions will reach publication day with policies that survive the first round of bar guidance with minimal modification. Firms that wait for opinions to publish will be retrofitting under deadline pressure.

How firms should track and respond to bar opinion development

Five operational moves track the bar opinion stream and integrate updates into firm practice.

Move 1: Subscribe to state bar ethics opinion notifications. Most state bars publish ethics opinions through email subscription, RSS, or newsletter formats. Subscribe for the states where the firm has practice presence. Add the AI Governance Partner role from the firm policy template as the responsible reader.

Move 2: Track the ABA Formal Opinion stream. ABA Formal Opinions are not binding but are widely cited in state opinion drafting. ABA Opinion 512 (July 2024) is the most recent on generative AI broadly. The next ABA opinion specifically addressing AI image generation will likely arrive in 2026-2027 and will be the template that several state bars adopt.

Move 3: Monitor major state bar task forces. California, New York, Texas, and Florida all have ongoing AI task forces that publish guidance and recommendations periodically. The task force outputs precede formal opinions and provide early signal on the direction of bar guidance.

Move 4: Review firm AI policy quarterly against published guidance. When new bar opinions land, whether image-specific or general, review the firm's AI-image policy for compliance gaps. Document the review and any policy updates. Build the review into the AI Governance Partner's quarterly responsibilities.

Move 5: Brief the bar proactively when appropriate. For firms in jurisdictions where the bar's AI task force is actively soliciting practitioner input, submit comments and recommendations based on the firm's operational experience. Practitioner input shapes the eventual opinions; firms with active practice posture get to influence the standard.

My take: the bar opinion development process is a slow but consequential signal stream. Firms that track it and adapt produce defensible AI-image practice. Firms that don't track it operate in a fog of compliance uncertainty until specific opinions land, by which time exposure has accumulated. The Vortex tracker at aivortex.io/legal/ai-disclosure/ covers the federal court disclosure landscape; parallel state bar opinion tracking should be part of the firm's AI Governance Partner's standard responsibilities.

The Bottom Line: No state bar has issued a formal ethics opinion specifically addressing AI image generation as of April 2026, but image-specific opinions will arrive within 12-18 months, following the structural pattern from text-generation opinions. Five Model Rules apply to AI image generation by extension from current guidance: 1.1 (competence), 1.6 (confidentiality), 3.3 (candor), 4.1 (truthfulness), and 5.3 (supervision). Firms should adopt a conservative compliance posture now: preserve C2PA metadata, disclose AI-origin to courts and opposing counsel, document supervision of paralegals and outside vendors, and disclose AI use to clients in engagement letters. The cost of over-compliance is marginal; the cost of under-compliance is disciplinary exposure plus reputational damage when opinions land.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.