Alabama hasn't issued a formal ethics opinion on AI in legal practice. The State Bar published an article in August 2024 laying out broad principles, but attorneys in Birmingham, Montgomery, and Huntsville are operating without binding AI-specific rules. That gap became consequential in July 2025 when a federal court in the Northern District disqualified attorneys for submitting AI-hallucinated citations.


AI Regulation in Alabama: The Current Landscape

As of April 2026, Alabama has no legislation specifically addressing AI in legal practice. The Alabama State Bar published an article in August 2024 discussing generative AI and legal ethics, but it stopped short of issuing a numbered formal ethics opinion. The article emphasized three principles: AI should never substitute for a lawyer's professional judgment, lawyers must understand the technology and its limitations, and AI output must be reviewed for compliance with professional standards.

Beyond that article, the regulatory posture is silence: Alabama hasn't formed a task force, proposed rule amendments, or opened a public comment process on AI rules. For a state with 12,414 licensed attorneys and four major legal markets, that's a notable absence. Neighboring states like Florida (Opinion 24-1, January 2024) and Georgia (Special Committee on AI formed in 2024) have moved faster.

The practical effect: Alabama attorneys are governed by existing Rules of Professional Conduct and the ABA's Formal Opinion 512 on AI. That's not nothing, but it leaves significant gray areas around disclosure obligations, billing for AI-assisted work, and supervision of AI tools.

State AI Regulation — Updated April 2026
Alabama (AL)
Regulation Status: Partial Guidance
Regulation Type: Bar Guidelines
Posture: Silent

What the Alabama Bar Says About AI

The Alabama State Bar's August 2024 publication represents the only official guidance from the state bar on AI. It's informal, not a numbered ethics opinion, and carries less weight than formal guidance issued by states like Florida or the District of Columbia. The key takeaways: AI is a tool, not a replacement for professional judgment. Lawyers must understand what the AI is doing before relying on its output. All AI-generated content must be reviewed for compliance with professional standards before submission.

Notably absent from the guidance: any requirement to disclose AI use to clients, courts, or opposing counsel. Alabama hasn't taken a position on whether AI-assisted work changes billing obligations. There's no guidance on confidentiality protocols for entering client data into AI systems, and no mention of supervisory duties when associates or paralegals use AI tools.

For practical purposes, Alabama attorneys should treat ABA Formal Opinion 512 as the most authoritative guidance available. That opinion establishes that existing ethical duties of competence, confidentiality, communication, and supervision apply fully to AI use.


Court Rules and Judicial Guidance

Alabama has no court-level rules specifically addressing AI use in filings or proceedings. However, the Northern District of Alabama sent a clear signal in Johnson v. Dunn, No. 2:21-cv-1701 (July 23, 2025). In that case, a large law firm submitted hallucinated legal citations in a motion. The court's response was severe: instead of monetary sanctions, the judge disqualified the offending attorneys from representing the client for the remainder of the case.

That outcome is arguably worse than a fine. Disqualification means the firm lost the client relationship, the client had to find new counsel mid-litigation, and the attorneys' professional reputations took a direct hit. Alabama practitioners should treat Johnson v. Dunn as the de facto standard for what happens when AI output goes unverified.

Practical Implications for Alabama Attorneys

The absence of formal rules doesn't mean the absence of risk. Alabama attorneys using AI tools are still bound by Rules 1.1 (Competence), 1.6 (Confidentiality), 3.3 (Candor Toward the Tribunal), and 5.3 (Supervision of Nonlawyer Assistance). Every one of those rules has direct application to AI use, even without AI-specific language.

The Johnson v. Dunn disqualification shows that Alabama federal courts won't hesitate to impose serious consequences for AI-related failures. The state court system hasn't seen a comparable case yet, but the signal from the federal bench is unmistakable. Any attorney submitting AI-generated content without verification is taking on significant professional risk.

For firms in Birmingham, Montgomery, Huntsville, and Mobile, the practical move is to build internal AI policies now rather than waiting for the bar to act. The firms that establish governed workflows and review protocols early will be better positioned when formal rules arrive, and they'll avoid being the test case that prompts those rules.


What Attorneys in Alabama Should Do

First, adopt the ABA Formal Opinion 512 framework as your baseline. It covers competence (understand the AI tool before using it), confidentiality (don't input privileged information into consumer AI tools), communication (tell clients when AI significantly impacts their matter), and supervision (review all AI output before submission). This is the floor, not the ceiling.

Second, create a written AI use policy for your firm. Document which tools are approved, what types of work they can be used for, who reviews AI output, and how client data is protected. When Alabama eventually issues formal guidance, firms with existing policies will need minimal adjustment.

Third, verify every citation, case reference, and legal proposition generated by AI. Johnson v. Dunn isn't an outlier. Federal courts across the country are imposing escalating sanctions for AI hallucinations, and Alabama is on that list. Build verification into your workflow as a non-negotiable step.


The Bottom Line

Alabama is in the silent majority of states with no formal AI rules for attorneys. The August 2024 bar article provides general principles but no binding framework. Johnson v. Dunn proves that Alabama courts will impose real consequences for AI misuse, so don't mistake regulatory silence for safety.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.