In July 2024, ABA Formal Opinion 512 confirmed what state bars had been signaling for two years: lawyers who use AI tools are personally responsible for protecting client data under existing ethics rules, with no AI-specific exception. The opinion mapped Model Rule 1.6 (confidentiality) and Model Rule 1.1 (competence) directly onto AI tool use, making clear that feeding client information into an AI platform without understanding its data handling is an ethics violation, full stop.

At least 14 state bars have now issued their own guidance. Florida, California, New York, and Texas have all published opinions reinforcing the same framework: the duty of confidentiality applies to every tool you use, and the duty of competence requires you to understand how a tool processes data before you use it. The bar doesn't care which AI tool you used. It cares what happened to the data.


ABA Model Rules That Govern AI Tool Use

Model Rule 1.1 requires competence, which the ABA has interpreted to include technological competence since the 2012 amendment to Comment 8. In the context of AI, this means an attorney must understand — at a functional level — how an AI tool processes inputs, whether it retains data, and whether inputs are used to train the model. Claiming ignorance of how ChatGPT handles data is not a defense. It's the violation.

Model Rule 1.6 prohibits revealing information relating to the representation of a client unless the client gives informed consent. Typing a client's case facts into a consumer AI tool that uses inputs for model training is disclosure to a third party. The ABA's Formal Opinion 512 was explicit: if the AI vendor's terms of service allow data retention or training on inputs, using that tool with client information violates 1.6 unless the client consents.

Model Rule 5.3 extends supervisory duties to nonlawyer assistance, which now includes AI tools and the staff using them. A partner who allows associates to use unvetted AI tools without guidelines is personally exposed to discipline. A documented, firm-wide AI governance policy is what addresses this chain of responsibility.

State Bar Guidance: Where the Rules Are Tightest

Florida Bar Ethics Opinion 24-1 (2024) was the most prescriptive early guidance, requiring lawyers to review AI tool terms of service, confirm data handling practices in writing, and obtain client consent before using AI on confidential matters. Florida treats AI tools as third-party service providers under Rule 4-1.6, triggering the same vetting obligations as hiring an outside vendor.

California State Bar Practical Guidance (2024) went further, recommending that firms maintain a list of approved AI tools and prohibit use of unapproved tools for client work. California also flagged that metadata and prompt histories in AI tools can constitute discoverable work product, creating a secondary confidentiality risk most firms haven't addressed.

New York City Bar Association Formal Opinion 2024-1 focused on the informed consent requirement, finding that lawyers must disclose AI use to clients when the AI tool processes confidential information, even if the tool has enterprise-grade data protections. The opinion distinguished between AI tools that touch client data (disclosure required) and AI tools used for general legal research without client-specific inputs (no disclosure required).

Texas, Illinois, and New Jersey have issued similar guidance. The trend across all jurisdictions is consistent: AI tools are not exempt from existing confidentiality obligations, and competence now requires understanding the technology.

Consumer vs. Enterprise AI: The Compliance Line

The critical distinction for compliance is whether the AI tool retains, trains on, or exposes client data. Consumer-tier AI tools — including ChatGPT Plus ($20/month), the free tier of Google Gemini, and Perplexity — use inputs to improve their models unless users opt out, and even then, data retention policies vary. These tools are not compliant for client data under any current bar guidance.

Enterprise-tier tools with contractual data protections are the minimum standard. ChatGPT Enterprise and ChatGPT Team exclude business data from model training by default, and OpenAI offers zero-data-retention terms for its API. Anthropic's Claude, accessed via API under enterprise terms, provides contractual guarantees against training on inputs. Purpose-built legal AI platforms like Harvey and Thomson Reuters' CoCounsel (originally built by Casetext) are designed around legal confidentiality requirements from the ground up.

But the tool alone isn't enough. Your firm needs a vendor contract that explicitly addresses: no training on client data, no data retention beyond the processing session, encryption in transit and at rest, SOC 2 Type II certification, and breach notification obligations. Without that contract, even an enterprise tool leaves your firm exposed.
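For firms building a documented vetting process, the contract terms above can be tracked as a simple checklist. A minimal sketch in Python, assuming hypothetical field names (these are illustrative labels, not language from any bar opinion or vendor agreement):

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the five contract terms above.
# Field names are illustrative, not drawn from any bar opinion.
@dataclass
class VendorContract:
    no_training_on_client_data: bool
    no_retention_beyond_session: bool
    encrypted_in_transit_and_at_rest: bool
    soc2_type2_certified: bool
    breach_notification_clause: bool

def missing_terms(contract: VendorContract) -> list[str]:
    """Return the contract terms that are absent, i.e. open risks to flag."""
    return [name for name, present in vars(contract).items() if not present]

# Example: an "enterprise" tool whose contract omits SOC 2 Type II certification.
contract = VendorContract(True, True, True, False, True)
print(missing_terms(contract))  # -> ['soc2_type2_certified']
```

The point isn't the code itself but the habit it encodes: each tool's contract gets reviewed against the same explicit criteria, and any gap is recorded rather than assumed away.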

What This Means for Your Firm

Start with an audit of every AI tool currently being used in your firm — including the ones partners don't know about. Shadow AI is the biggest ethics risk most firms face, because it means client data is flowing into unvetted tools with no oversight.

Build an approved tools list with vendor contracts reviewed by someone who understands both the ethics rules and the technology. Document the review. State bars are moving toward requiring firms to demonstrate their AI vetting process, not just assert that one exists.

Implement a firm-wide AI governance policy that covers tool approval, client consent requirements, data handling standards, and training obligations for every attorney and staff member. The firms that treat this as a one-time task will be revisiting it after a bar complaint. The firms that build it into their operations will be the ones winning clients who ask about AI governance during intake — and that question is becoming standard.

The Bottom Line: Your ethics obligations didn't change when AI arrived. Your attack surface did. Govern the tools or the bar will govern you.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.