● Advisory Report — Texas AI Task Force Preliminary Report

Texas moved early on AI ethics. The State Bar of Texas AI Task Force released its Preliminary Report in February 2024, making Texas one of the first major states to formally address generative AI in legal practice. The report covers competence, confidentiality, billing, and supervision — all mapped to the Texas Disciplinary Rules of Professional Conduct.

The report is advisory, not binding. But multiple Texas courts have independently adopted AI disclosure requirements for filings, creating a patchwork of obligations across the state. For Texas lawyers, the practical effect is clear: AI use is a compliance issue now, not a future concern.

What the Bar Says

The Texas AI Task Force Preliminary Report (February 2024) addresses AI through the lens of existing disciplinary rules rather than creating new ones. Rule 1.01 (Competence) requires lawyers to understand the AI tools they use — including their limitations, hallucination risks, and output verification needs. Rule 1.05 (Confidentiality) demands that attorneys evaluate how AI tools process, store, and potentially expose client data before use. The report emphasizes that delegating to AI does not relieve the attorney of supervisory obligations under Rule 5.03. Lawyers remain responsible for all work product, regardless of whether a human or machine generated the first draft.

Billing Implications

The Task Force recommends transparency in billing for AI-assisted work. Fees must remain reasonable under Rule 1.04. If AI reduces the time needed for a task from 10 hours to 2, billing 10 hours is an ethics violation. The report stops short of prescribing a specific billing methodology but signals that value-based billing may be the safest approach for AI-heavy work. Attorneys should document AI use in billing records and be prepared to justify the fee if questioned. Passing through AI tool subscription costs requires client disclosure.

Confidentiality Rules

Rule 1.05 requires lawyers to evaluate how an AI tool handles confidential information before inputting any client data. The Task Force emphasizes vendor due diligence: attorneys must understand the AI provider's data retention, training, and sharing policies. Consumer-grade AI tools (free ChatGPT, Google Bard — since rebranded Gemini) present heightened risk because their terms of service often allow data use for model training. Enterprise or API-based tools with contractual data protections are the safer path. The report also flags that even anonymized prompts can reveal confidential information through context.

What's Still Unclear

The Task Force report is preliminary — a final version with more specific guidance is expected but has not been issued. There is no statewide AI disclosure requirement for court filings; instead, individual Texas courts have adopted their own rules, creating inconsistency. The report does not address whether AI-generated legal research constitutes "practice of law" when performed by non-lawyer staff. Questions about malpractice liability for AI errors remain open, as does the interaction between AI use and Texas's unique approach to unauthorized practice of law enforcement.

The Bottom Line: Texas has a Task Force report and growing court-level disclosure rules — treat AI compliance as mandatory even though final formal guidance is still pending.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.