The ethics landscape for AI in law practice changed permanently when the ABA issued Formal Opinion 512 in 2024, declaring that lawyers who use AI without understanding its limitations are 'almost certainly' committing malpractice. By April 2026, 42 state bars have issued opinions on AI use, over 300 federal judges have standing AI disclosure orders, and at least 8 lawyers have been publicly sanctioned for AI-related misconduct.
This isn't a future concern -- it's a present obligation. Every lawyer using AI needs to understand the competence requirement, disclosure landscape, supervision duties, and billing implications. This guide covers all of it, with specific citations so you can build your firm's AI governance policy on solid ground.
ABA Formal Opinion 512: What It Actually Requires
ABA Formal Opinion 512 (July 2024) didn't create new rules -- it applied existing Model Rules to AI use. The key holdings:
Competence (Rule 1.1): Lawyers must understand AI's capabilities and limitations before using it. You don't need to know how large language models work technically, but you need to know that they hallucinate, that their training data has cutoff dates, and that their output requires verification. Using AI without this baseline understanding is 'almost certainly' a violation.
Confidentiality (Rule 1.6): Inputting client information into AI tools requires the same analysis as any third-party service. Consumer AI tools (ChatGPT free tier, Claude free) likely don't have adequate protections. Enterprise tools with proper DPAs and data handling agreements can satisfy Rule 1.6 if the lawyer conducts due diligence.
Supervision (Rules 5.1 and 5.3): Partners and supervising lawyers must ensure that associates and staff using AI do so competently. You can't delegate AI oversight to your IT department -- the ethical obligation stays with the supervising lawyer.
Communication (Rule 1.4): Clients should be informed when AI is used in their matters, particularly when it materially affects the work product or billing. Opinion 512 didn't mandate disclosure in all cases, but the trend is clearly toward transparency.
Candor (Rule 3.3): AI-generated content submitted to tribunals must be verified. Submitting AI hallucinations violates the duty of candor. Period.
State Bar Opinions: The Patchwork That Matters
State bars haven't been uniform, and the differences matter if you practice in multiple jurisdictions.
Most restrictive: Florida (Opinion 24-1) requires disclosure to clients whenever AI is used substantively. California (Proposed Formal Opinion 2024-01) requires lawyers to review and verify all AI output and prohibits AI from exercising independent legal judgment. New York's guidance requires disclosure in court filings when AI was used for legal research or drafting.
Moderate approach: Texas, Illinois, and New Jersey require competence and confidentiality compliance but don't mandate client disclosure in all cases. They treat AI as a tool -- like Westlaw or a paralegal -- that requires appropriate supervision.
Permissive states: Several states (including most that haven't issued formal opinions) default to existing rules without AI-specific guidance. This doesn't mean anything goes -- Model Rules apply regardless.
The trend: Every new state bar opinion in 2025-2026 has moved toward more disclosure, more supervision, and more documentation. If your state hasn't issued an opinion yet, build your policy around the restrictive end -- you'll be compliant everywhere and won't need to scramble when your bar catches up.
Court Disclosure Requirements: The 300+ Order Landscape
Over 300 federal judges now have standing orders requiring some form of AI disclosure in filings. The requirements range from blanket disclosure ('certify whether AI was used') to narrow ('disclose only if AI-generated content was not verified by a licensed attorney').
Key circuit positions:
- Fifth Circuit: Standing Order (2024) requires attorneys to certify AI use and confirm human review of all AI-assisted filings
- Third Circuit: Individual judge orders vary widely; no circuit-wide rule yet
- Ninth Circuit: Pilot program for AI disclosure in appellate briefing
- Federal Circuit: Requires disclosure for patent-related filings where AI assisted in claim construction or prior art search
State courts: At least 19 states have some form of AI disclosure requirement at the trial court level. Texas, California, and New York lead in specificity.
What to actually do: Draft a standard AI disclosure certification and include it in every filing, regardless of whether the specific judge requires it. It takes 30 seconds to add and eliminates any risk of noncompliance. Make it a template in your document management system. Don't overthink this -- disclose proactively and move on.
Billing Ethics: The Uncomfortable Questions
This is where AI ethics gets genuinely complicated. If AI reduces a task from 5 hours to 30 minutes, what do you bill?
The emerging consensus: You can't bill 5 hours for 30 minutes of work. ABA Opinion 512 and multiple state bar opinions make clear that billing for time not actually spent is prohibited under Rule 1.5 (reasonable fees). The fact that AI did the work faster doesn't entitle you to bill as if it hadn't.
But there's nuance: Some bars distinguish between billing for the *tool's time* and billing for the *value delivered*. If you produce a research memo that would have taken an associate 5 hours, and it took AI + your verification 1 hour, you can potentially bill 1 hour at a higher rate that reflects the value -- but you can't bill 5 hours at associate rates.
The practical approach most firms are taking:
- Bill actual time spent (including AI interaction and verification time)
- Don't pass through AI tool subscription costs as disbursements unless the engagement letter specifically permits it
- Consider value-based or flat-fee arrangements for AI-assisted work
- Document AI use in billing narratives for transparency
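To make the billing arithmetic concrete, here is a minimal sketch of the three scenarios discussed above. All rates and durations are hypothetical illustrations, not guidance from Opinion 512 or any bar authority:

```python
# Hypothetical illustration of AI-assisted billing approaches.
# Rates ($250/hr associate, $400/hr review) and durations are
# made-up examples for arithmetic only.

def fee(hours: float, rate: float) -> float:
    """Fee for time actually spent at a given hourly rate."""
    return hours * rate

# Pre-AI baseline: associate drafts the research memo in 5 hours.
traditional = fee(5.0, 250.0)   # 1250.0

# AI-assisted, billed honestly: 1 hour total (AI interaction plus
# attorney verification) at a higher rate reflecting the value.
ai_assisted = fee(1.0, 400.0)   # 400.0

# What gets lawyers in trouble: billing the old 5 hours for work
# that actually took 1 hour -- prohibited under Rule 1.5.
inflated = fee(5.0, 250.0)      # 1250.0, same invoice, time not spent

print(traditional, ai_assisted, inflated)
```

The point of the sketch: the honest AI-assisted fee can recover much of the value through a higher rate or a flat fee, but the hours component must reflect time actually worked.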
What will get you in trouble: Billing 8 hours for a brief that AI drafted in 20 minutes and you reviewed for 40 minutes. It's already happening, and it's only a matter of time before bar complaints start naming specific AI-inflation patterns.
Building Your Firm's AI Ethics Policy
Every firm needs a written AI policy. Not a suggestion -- a requirement if you want to comply with supervisory obligations under Rules 5.1 and 5.3. Here's what your policy must cover:
Approved tools list: Which AI tools are approved for use with client data? Maintain a whitelist. Consumer tools (ChatGPT free, Claude free, Gemini free) should never be on it. Enterprise tools with signed DPAs belong here.
Data handling rules: What client information can be input into AI tools? Draw clear lines. Names, case numbers, and confidential facts require enterprise-grade tools with appropriate data handling agreements.
Verification requirements: All AI output used in client deliverables must be verified by a licensed attorney. Specify what 'verification' means -- checking every citation, reviewing for accuracy, confirming legal analysis.
Disclosure protocols: When and how to disclose AI use to clients and courts. Default to disclosure unless there's a specific reason not to.
Billing guidelines: How to bill for AI-assisted work. Actual time, transparent narratives, no inflation.
Training requirements: Minimum training before an attorney can use AI tools on client matters. Document completion.
Incident response: What happens when AI produces an error that makes it into a filing or client deliverable? Who gets notified, what gets documented, what corrective action is required.
This policy should be reviewed annually at minimum. AI capabilities change fast -- your policy needs to keep up.
The Bottom Line: ABA Opinion 512 made clear that using AI without understanding its limitations is likely malpractice. Forty-two state bars and 300+ federal judges have added specific requirements. Every firm needs a written AI policy covering approved tools, data handling, verification, disclosure, and billing. Default to transparency -- disclose AI use proactively, bill actual time, and verify everything before it goes out the door.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
