Technology competence isn't optional. It's an ethical duty. Since 2012, Comment 8 to ABA Model Rule 1.1 has required lawyers to keep abreast of "the benefits and risks associated with relevant technology." As of 2026, 40 states plus the District of Columbia and Puerto Rico have adopted this duty. If you're practicing law in the United States and you don't understand the AI tools your firm uses — or the AI tools your opponents are using — you're not just behind the curve. You're violating your professional obligations.

The AI era has made this duty operational in ways the drafters probably didn't anticipate. When Comment 8 was adopted in 2012, "relevant technology" meant things like email encryption and e-discovery platforms. In 2026, it means understanding how large language models work, what hallucination risk is, whether your AI tool trains on client data, and what disclosure obligations apply when you use AI in court filings. The bar has been raised — literally and figuratively.
What Comment 8 Actually Says and Why It Matters Now

ABA Model Rule 1.1 requires that "a lawyer shall provide competent representation to a client," which includes "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." Comment 8 elaborates: a lawyer should keep abreast of changes in the law and its practice, "including the benefits and risks associated with relevant technology." That last clause, added in 2012, is the technology competence duty. For a decade, it was mostly discussed in the context of e-discovery, cybersecurity, and cloud computing. AI has transformed it into the most consequential ethical obligation most lawyers didn't know they had.

Here's why: with 1,227+ documented cases of AI hallucinations in court filings globally by early 2026, and sanctions exceeding $145,000 in Q1 2026 alone, courts are no longer treating AI ignorance as an excuse. Judges are explicitly stating that lawyers have a duty to understand the tools they use. "I didn't know the AI hallucinated" is not a defense. Under Comment 8, you had an obligation to know that hallucination was a risk before you relied on the output.

State Adoption: Where the Duty Applies

As of 2026, 40 states, the District of Columbia, and Puerto Rico have adopted the technology competence duty. Most followed the ABA's lead by placing the duty in a comment to their general competency rule, though not all used the ABA's exact language. The most recent adoptions show the duty is expanding, not plateauing. The District of Columbia amended the comments to its Rule of Professional Conduct 1.1 in April 2025 (D.C. Court of Appeals, No. M284-24), adding language mirroring Comment 8. Puerto Rico went further: its new Rule of Professional Conduct 1.19, effective January 1, 2026, is a standalone rule requiring technological competence and diligence, not just a comment appended to the general competency rule.

Puerto Rico's approach signals where the trend may be heading. Rather than burying the obligation in a comment, future state adoptions may establish technology competence as an independent, freestanding ethical duty. For managing partners, the practical implication is clear: regardless of your jurisdiction, the technology competence duty either already applies to you or soon will. Plan accordingly.

What Technology Competence Means for AI Specifically

The RedGrave LLP 2025 update on the ethical duty of technological competence maps the obligation to specific AI knowledge areas. At minimum, technology competence in the AI era requires understanding these concepts:

- How AI tools generate output: You don't need to understand transformer architecture. You do need to understand that large language models predict text based on training data, that they can "hallucinate" citations and legal reasoning that sound authoritative but are fabricated, and that they don't "know" law the way a legal database retrieves indexed cases.

- Data handling and training: Does the AI tool you're using train on your inputs? If so, client data could influence outputs for other users. ABA Formal Opinion 512 specifically requires lawyers to understand whether their AI tools are "self-learning."

- Output verification requirements: Every AI-generated legal work product must be verified against authoritative sources. This isn't a best practice; it's the minimum standard under Rule 1.1.

- Disclosure obligations: Over 25 federal district courts and numerous state courts now require disclosure of AI use in filings. Knowing your jurisdiction's requirements is part of the competence duty.

- Bias and limitations: AI tools can reflect biases in their training data. Understanding these limitations, particularly in areas like sentencing recommendations, employment law, or immigration, is part of competent representation.

CLE Implications: Training Is No Longer Optional

The technology competence duty creates a de facto CLE obligation for AI literacy. If you're required to keep abreast of technological benefits and risks, and AI is the most significant technology impacting legal practice, then maintaining AI competence is part of your professional development obligation. Several state bars have responded: Florida, California, and New York have all approved AI-focused CLE programs addressing the technology competence intersection. The ABA Commission on AI released working group recommendations in February 2025, distinguishing "immediate attorney obligations" from "emerging best practices," a distinction that helps lawyers prioritize their learning.

For firm leadership, this means AI training can't be treated as a one-time event or an elective CLE credit. It needs to be mandatory, recurring, and substantive. A 60-minute "AI overview" webinar doesn't meet the bar. Training should cover the specific tools the firm uses, hands-on practice with verification workflows, and current bar guidance in the firm's jurisdictions. The firms that tie AI tool access to training completion, like Brownstein Hyatt, which achieved 90% proficiency, are best positioned to demonstrate compliance with the technology competence duty.

Practical Compliance: What Managing Partners Should Do

Here's a concrete compliance checklist for the technology competence duty in the AI era:

- Inventory your AI tools. Catalog every AI-enabled tool in the firm, including embedded AI features in existing platforms like Westlaw Edge, Lexis+ AI, Microsoft Copilot, and Zoom summaries. You can't be competent about tools you don't know exist.

- Establish minimum knowledge requirements. Define what every lawyer at the firm must understand about AI: hallucination risk, data handling, verification obligations, and disclosure requirements. Test this knowledge.

- Mandate AI training. Make it annual, practical, and tied to the tools your firm actually uses. Include jurisdiction-specific disclosure requirements. Track completion and tie it to AI tool access.

- Document everything. Your firm's AI training records, policies, committee minutes, and vendor evaluations are evidence of compliance. If a court questions whether your firm met its technology competence obligations, you want a paper trail.

- Monitor bar developments. Assign someone, whether your ethics officer or AI committee, to track state bar guidance on AI and technology competence. The landscape is evolving quarterly; what was a best practice in 2025 may be a mandatory requirement in 2026.

- Address the partnership. Partners aren't exempt from the competence duty. In fact, they carry additional supervisory obligations under Rules 5.1 and 5.3. A partner who doesn't understand AI but supervises associates using it has a dual competence gap.

The Bottom Line: The technology competence duty under ABA Model Rule 1.1, Comment 8 has been adopted by 40 states plus the District of Columbia and Puerto Rico, and it is the legal foundation for every AI governance obligation in legal practice. In the AI era, it means understanding hallucination risk, data handling, verification requirements, and disclosure obligations. It's not a suggestion. It's an enforceable ethical duty, and the firms documenting their compliance through training, policies, and AI committee governance are the ones protected when courts come asking.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.