Vermont has taken a thoughtful, measured approach to AI in legal practice. The Vermont Judiciary Committee on Artificial Intelligence and the Courts released its annual report in March 2025, and the state's bar counsel published a notable May 2024 opinion arguing that the existing Rules of Professional Conduct need no amendment for AI.
AI Regulation in Vermont: The Current Landscape
Vermont's approach to AI regulation is defined by a clear position: existing rules are enough. In May 2024, Vermont's bar counsel published an opinion piece arguing that the current Rules of Professional Conduct don't need amendment in response to AI adoption. This isn't silence; it's a deliberate stance that the existing ethical framework already covers AI use.
The Vermont Judiciary Committee on Artificial Intelligence and the Courts reinforced this direction with its annual report in March 2025. The report includes guidelines from a Disciplinary Rules Subcommittee, establishing that lawyers must keep abreast of the risks and benefits of generative AI and should review the terms of service of AI tools to understand how they handle input data.
For a state with roughly 2,200 attorneys, from Burlington to Montpelier, Vermont's approach is notably sophisticated. Rather than rushing to create AI-specific rules that could become outdated as the technology evolves, Vermont interprets its existing rules in an AI context, a position that preserves flexibility while still setting clear expectations.
What the Vermont Bar Says About AI
Vermont's bar guidance comes from two complementary sources. The bar counsel's May 2024 opinion piece directly addresses whether the Rules of Professional Conduct need amendment for AI. The answer: no. The existing rules on competence, confidentiality, supervision, and candor already cover AI use without modification.
The Vermont Judiciary Committee's March 2025 annual report adds specificity through its Disciplinary Rules Subcommittee guidelines. Two requirements stand out: lawyers must keep abreast of the risks and benefits of generative AI (a technology-competence obligation under Rule 1.1), and lawyers should review the terms of service of AI tools to understand how they use input data (a confidentiality obligation under Rule 1.6).
The terms-of-service requirement is particularly practical. It addresses one of the most common real-world risks: attorneys entering client data into AI platforms without understanding how that data is stored, used for training, or shared. Vermont is the rare state that names this specific risk in its guidance.
Court Rules and Judicial Guidance
The Vermont Judiciary Committee on Artificial Intelligence and the Courts is the judicial-level body addressing AI. Its March 2025 annual report, which includes the Disciplinary Rules Subcommittee guidelines, represents the most direct judicial engagement with AI in Vermont to date.
No separate standing orders or local rules from Vermont state courts address AI in filings as of April 2026. The judiciary committee's work suggests that Vermont is developing a coordinated approach through committee recommendations rather than individual judge standing orders.
Practical Implications for Vermont Attorneys
Vermont's "existing rules are sufficient" position has a specific practical meaning: don't expect AI-specific safe harbors or detailed compliance checklists from the state. The ethical framework you already know is the framework you'll be judged against. The advantage is simplicity — you don't need to learn a new regulatory scheme. The trade-off is that there's no AI-specific guidance to point to as a defense if your AI use falls into a gray area.
The terms-of-service review requirement is immediately actionable. Every AI tool you use has terms of service that describe data handling. Read them. Specifically look for: Does the tool train on user input? Does it retain conversation data? Is there a data processing agreement available for enterprise use? These aren't abstract questions — they determine whether entering client data into the tool violates Rule 1.6.
For Vermont's small bar, the judiciary committee's ongoing work suggests more guidance will come through annual reports rather than standalone opinions. Monitoring the committee's output is the best way to stay ahead of evolving expectations.
What Attorneys in Vermont Should Do
Review the terms of service of every AI tool you currently use. The Vermont Judiciary Committee specifically identified this as an obligation. Look for data retention policies, training practices, and enterprise vs. consumer tier differences. If a tool trains on user input and you've entered client data, you have a confidentiality problem to address.
Maintain competence in AI risks and benefits. Vermont's guidance explicitly requires lawyers to stay current on generative AI. This means understanding what AI tools can and can't do reliably, recognizing hallucination risks, and knowing the difference between AI-assisted research and independently verified legal analysis.
Use Vermont's "existing rules suffice" framework as your compliance baseline. Map your AI practices to Rule 1.1 (competence), Rule 1.6 (confidentiality), Rule 3.3 (candor), and Rule 5.3 (supervision). If your AI workflow satisfies each of these rules, you're aligned with Vermont's position. No new rules to learn — just existing rules applied carefully.
The Bottom Line
Vermont's position is clear: existing ethical rules cover AI without amendment. The judiciary committee's March 2025 report and the bar counsel's May 2024 opinion provide practical guidance within that framework. It's a sophisticated approach for a small state, and it means Vermont attorneys don't need new rules — they need to apply the old ones carefully.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.