ABA Formal Opinion 512, issued July 29, 2024, is the first comprehensive ethical framework for lawyers using generative AI. It doesn't create new rules. It maps six existing duties under the Model Rules of Professional Conduct (competence, confidentiality, communication, candor, supervision, and fees) onto AI use and makes one thing clear: the obligations you already have don't disappear because a machine produced the output.
The opinion's core message lands hard. Uncritical reliance on AI-generated content is almost certainly a violation of professional conduct rules. Not "might be" or "could be" — "almost certainly." That language matters. It means bar counsel already has the framework to prosecute AI-related misconduct. Managing partners who haven't read this opinion are running a compliance gap that grows every month.
Rule 1.1 Competence: You Must Understand the Tool You're Using
Opinion 512 starts with Model Rule 1.1 and the duty of competence. The opinion requires lawyers to understand how generative AI works at a functional level — not as computer scientists, but enough to recognize its limitations. You need to know that large language models predict probable text, not verified facts. You need to understand that hallucinations aren't bugs; they're an inherent byproduct of how these systems generate output.
The practical requirement: lawyers must stay current on AI developments relevant to their practice. Comment 8 to Rule 1.1 already requires keeping up with "the benefits and risks associated with relevant technology." Opinion 512 makes explicit that generative AI falls squarely within that obligation. A managing partner who tells associates to "just use ChatGPT" without training on its limitations has a competence problem — and it's the firm's problem, not just the associate's.
Rule 1.6 Confidentiality: Your Prompts Are Client Data
This is where most firms are failing right now. Every prompt containing client information is a potential confidentiality breach under Rule 1.6. Opinion 512 requires lawyers to evaluate whether AI tools retain, use, or expose client data — before entering that data.
The opinion specifically addresses consumer-grade AI tools (ChatGPT, Claude, Gemini without enterprise agreements). Using these tools with client facts, case details, or privileged communications likely violates Rule 1.6 unless the lawyer has confirmed the tool's data handling practices. "I didn't know it stored my data" is not a defense. The opinion also flags that some AI vendors use prompts as training data, meaning your client's privileged information could influence outputs for other users. Enterprise agreements with no-training clauses aren't optional — they're an ethical requirement.
Rules 1.4 and 3.3: Communication and Candor Obligations
Opinion 512 addresses Rule 1.4 (communication) and Rule 3.3 (candor toward the tribunal) together because they create overlapping disclosure obligations. On communication: lawyers must inform clients about the use of AI when it's material to the representation. The opinion stops short of requiring disclosure in every instance, but the safe practice is clear — tell your clients.
On candor: Rule 3.3 prohibits presenting false statements of fact or law to the court. AI-generated legal research that hasn't been verified is a ticking Rule 3.3 violation. The Mata v. Avianca line of cases proved this. Opinion 512 codifies what those sanctions decisions demonstrated — you can't cite cases you haven't confirmed exist in an actual reporter. The verification obligation is absolute. There's no "I relied on the AI" exception to candor.
Rules 5.1 and 5.3: Supervision of AI Users in Your Firm
Rules 5.1 (supervisory responsibilities) and 5.3 (responsibilities regarding nonlawyer assistance) create firm-level obligations that most managing partners haven't addressed. Opinion 512 treats AI output like work from a nonlawyer assistant — it requires the same supervisory framework.
What this means practically: firms need written AI use policies. Partners with supervisory authority must ensure associates and staff using AI tools are trained on their limitations. If a junior associate submits an AI-drafted brief with hallucinated citations, the supervising partner shares responsibility under Rule 5.1. The opinion doesn't say "consider" creating policies — the supervisory rules already require reasonable measures to ensure compliance. A firm with no AI policy has no supervisory framework, which means supervisory lawyers are exposed.
Rule 1.5 Fees: The Billing Question That's About to Explode
Opinion 512's treatment of Rule 1.5 (fees) is where the economics get uncomfortable. The opinion states that fees must remain reasonable, and billing for AI-assisted work as if it were entirely human work raises reasonableness concerns. If AI drafts a motion in 20 minutes that would have taken 4 hours manually, billing 4 hours is ethically questionable.
The opinion doesn't ban billing for AI time, but it draws a line: the fee must reflect the value and work actually performed, not the time it would have taken without AI. This creates a fundamental tension with hourly billing. Florida Bar Opinion 24-1 went further and explicitly prohibits charging AI tool costs as a separate line item. The emerging consensus is that AI is overhead, like Westlaw — you can't bill the subscription as a disbursement. Firms clinging to hourly models need to recognize that AI competence under Rule 1.1 and fee reasonableness under Rule 1.5 are on a collision course.
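To make the economics above concrete, here is a back-of-the-envelope comparison of the two billing approaches. The rate and times are illustrative assumptions for this sketch, not figures from the opinion:

```python
# Illustrative only: hypothetical rate and times, not drawn from Opinion 512.
HOURLY_RATE = 400          # hypothetical billing rate, dollars per hour

manual_hours = 4.0         # time the motion would have taken without AI
ai_assisted_hours = 20 / 60  # actual time with AI drafting plus lawyer review

billed_as_manual = HOURLY_RATE * manual_hours        # billing the pre-AI time
billed_at_actual = HOURLY_RATE * ai_assisted_hours   # billing time actually spent

# The gap is what Rule 1.5 reasonableness puts in question when a firm
# bills AI-assisted work as if it had been done entirely by hand.
gap = billed_as_manual - billed_at_actual

print(f"Billed as if manual:   ${billed_as_manual:,.2f}")
print(f"Billed at actual time: ${billed_at_actual:,.2f}")
print(f"Reasonableness gap:    ${gap:,.2f}")
```

On these assumed numbers, the same motion bills at $1,600.00 one way and roughly $133.33 the other, which is the tension the opinion flags between hourly billing and AI-driven efficiency.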
The Bottom Line: ABA Opinion 512 isn't aspirational guidance — it's a prosecution roadmap. Every obligation it identifies already exists in the Model Rules. Bar counsel in every jurisdiction now has a clear framework for AI-related disciplinary actions. Firms that treat this as optional are betting their licenses that regulators won't act. That's a bad bet.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
