North Dakota has zero formal AI guidance for attorneys. With roughly 1,800 lawyers concentrated in Fargo and Bismarck, the state bar hasn't published an ethics opinion, formed a task force, or issued any informal guidance on generative AI in legal practice.


AI Regulation in North Dakota: The Current Landscape

As of April 2026, North Dakota sits in the silent category on AI regulation for lawyers. No formal ethics opinion exists. No task force has been announced. No legislative action has addressed AI in legal practice. The state appears on Bloomberg Law's AI guidance tracker, but only as a jurisdiction with nothing to report.

This isn't unusual for smaller states. North Dakota's legal market is compact — about 1,800 attorneys across two primary markets. The bar hasn't felt the same pressure that drove states like Texas or Pennsylvania to act. But that silence doesn't mean the rules don't apply.

The existing North Dakota Rules of Professional Conduct still govern every aspect of AI use. Rule 1.1 (competence), Rule 1.6 (confidentiality), Rule 5.1 (supervisory duties), and Rule 5.3 (responsibilities regarding nonlawyer assistance) all apply directly to AI-generated work product. The absence of AI-specific guidance just means you're working without a safety net of interpretation.

North Dakota (ND) — State AI Regulation (Updated April 2026)
Regulation Status: No Specific Guidance
Regulation Type: None
Posture: Silent

What the North Dakota Bar Says About AI

The State Bar Association of North Dakota hasn't issued any formal AI-specific ethics opinion or guidelines as of April 2026. No informal guidance documents, best practice recommendations, or advisory opinions address generative AI use in legal practice.

This puts North Dakota attorneys in a position where they're relying entirely on existing ethical rules without the benefit of state-specific interpretation. Other states have clarified questions like whether AI tools count as "nonlawyer assistance" under Rule 5.3, or what competence means when using AI for research. North Dakota lawyers don't have those answers yet.

The practical risk here is real. Without clear guidance, there's no established standard for what constitutes reasonable AI use in the state. If a disciplinary complaint arises from AI-generated work, the bar will be interpreting existing rules on a case-by-case basis — with no precedent to predict the outcome.


Court Rules and Judicial Guidance

No North Dakota state courts have issued standing orders, local rules, or judicial guidance specifically addressing AI use in legal filings as of April 2026. Federal courts in the District of North Dakota haven't published AI-specific standing orders either.

Attorneys should monitor both the state courts and the U.S. District Court for the District of North Dakota, as federal judges nationwide have been issuing AI disclosure requirements independently of state bar action.

Practical Implications for North Dakota Attorneys

For North Dakota attorneys, the silence cuts both ways. There's no prescriptive compliance burden — nobody's requiring AI disclosure certifications or mandatory training. But there's also no safe harbor. If something goes wrong with AI-generated work, you can't point to compliance with state-specific AI guidelines as a defense.

The biggest practical risk is hallucinated citations. Federal courts across the country have sanctioned attorneys for filing AI-generated briefs containing fabricated case law. North Dakota's small legal community makes this risk even more acute — judges and opposing counsel know each other, and a hallucinated citation incident would spread fast.

Attorneys using AI tools for research, drafting, or document review should treat the absence of guidance as a reason for more caution, not less. The standard of care hasn't been lowered just because the bar hasn't spoken. If anything, the silence means the bar retains maximum flexibility in how it interprets alleged violations after the fact.


What Attorneys in North Dakota Should Do

First, verify everything. Every case citation, every statutory reference, every factual claim generated by AI needs independent confirmation through traditional legal research tools. This isn't optional — it's the floor for competent practice regardless of what the bar says or doesn't say.

Second, protect client data. Before entering any client information into an AI tool, understand that tool's data retention and training policies. Most enterprise legal AI platforms don't train on user data, but consumer tools like free-tier ChatGPT have different terms. Rule 1.6 doesn't care whether your state has AI-specific guidance — confidentiality is confidentiality.

Third, document your AI workflow. Keep records of what tools you're using, what prompts you're running, and what review processes you apply to AI output. If the bar eventually issues guidance — or if a client files a complaint — you want a paper trail showing you exercised professional judgment at every step. Proactive documentation is the best insurance in a silent regulatory environment.


The Bottom Line

North Dakota hasn't said a word about AI in legal practice. That silence doesn't mean freedom — it means existing ethical rules apply without any AI-specific interpretation to guide you, and the bar retains full discretion on how to handle complaints.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.