The Colorado AI Act takes effect June 1, 2026, less than two months from now, and most companies using AI for consequential decisions aren't ready. SB 24-205 is the first comprehensive state-level AI regulation in the United States, and it creates obligations that go far beyond a disclosure requirement. Risk and impact assessments, bias audits, consumer notification, and ongoing monitoring are all mandatory for "high-risk" AI systems. The penalty for non-compliance: enforcement by the Colorado AG, with remedies under the Colorado Consumer Protection Act.
Here's what firms need to understand: the Colorado AI Act doesn't just affect tech companies. Any business that deploys or develops AI systems making "consequential decisions" about Colorado residents is covered. That includes law firms using AI for client intake screening, insurance companies using AI for claims evaluation, employers using AI for hiring, and lenders using AI for credit decisions. If your AI touches employment, education, financial services, government services, healthcare, housing, insurance, or legal services, you're in scope.
Who's Covered: Deployers and Developers
The Act creates obligations for two categories of entities:
Deployers: any business or organization that uses a high-risk AI system to make or substantially contribute to consequential decisions affecting Colorado residents. A law firm using an AI intake tool to screen potential clients is a deployer. A bank using an AI model for credit scoring is a deployer. An employer using an AI tool to filter resumes is a deployer. You don't need to build the AI — you just need to use it for decisions that materially affect people.
Developers: any entity that creates, codes, or substantially modifies a high-risk AI system. This includes AI companies (OpenAI, Anthropic, Harvey AI) but also companies that significantly customize or fine-tune AI models for specific decision-making purposes. If your firm built a custom AI intake screening tool using Claude's API, you may qualify as both a developer and a deployer.
The trigger: "consequential decisions" in education, employment, financial services, government services, healthcare, housing, insurance, and legal services. The legal services category means law firms using AI for any decision that materially affects a client or potential client relationship are squarely within scope.
Required Risk Assessments: What You Need to Do Before June 1
Deployers of high-risk AI systems must complete and document a risk assessment before the compliance deadline. The assessment must include:
Purpose and intended use: document exactly what the AI system does, what decisions it informs, and how human oversight is applied. "We use an AI tool" isn't sufficient — the documentation needs to specify the decision type, the data inputs, the AI's role in the decision, and the human review process.
Known or foreseeable risks: identify potential harms the AI system could cause, including algorithmic discrimination, accuracy failures, and unintended consequences. For legal AI tools, this includes: bias in intake screening (rejecting cases from certain demographics), accuracy failures in legal research (hallucinated citations), and over-reliance on AI outputs without adequate attorney review.
Mitigation measures: document what controls you've implemented to address identified risks. This includes human review requirements, accuracy testing protocols, bias monitoring, and feedback mechanisms for individuals affected by AI-assisted decisions.
The assessment must be updated at least annually and whenever the AI system undergoes a substantial modification. Retain all risk assessments for at least three years. The AG can request these documents during an investigation.
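To make the documentation requirement concrete, here's a minimal sketch of what a risk-assessment record could look like as a structured object. This is illustrative Python, not statutory language: the field names are my own, chosen to track the elements described above.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only -- these field names are mine, not statutory
# language from SB 24-205. The point is that each element of the required
# assessment becomes a concrete, reviewable field.
@dataclass
class RiskAssessment:
    system_name: str        # e.g., "AI intake screening tool"
    decision_type: str      # the consequential decision the AI informs
    intended_use: str       # what the system does and what it's approved for
    data_inputs: list[str]  # categories of data the system consumes
    human_oversight: str    # who reviews AI outputs, and at what stage
    known_risks: list[str]  # e.g., intake bias, hallucinated citations
    mitigations: list[str]  # controls mapped to each identified risk
    completed_on: date
    review_due: date        # annual update, or sooner after a substantial modification

    def is_current(self, today: date) -> bool:
        """A stale assessment is itself a compliance gap."""
        return today <= self.review_due
```

Keeping one record per AI system per review cycle, and never deleting old entries, also covers the three-year retention requirement.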
Practical advice: if you're using AI tools from vendors (Harvey, CoCounsel, Clio's AI features), request the vendor's risk assessment documentation and bias audit results. Your deployer obligations aren't satisfied by pointing to the vendor — you need your own assessment of how you're using the tool in your specific context.
Consumer Notification and Transparency Requirements
Before deploying a high-risk AI system for a consequential decision, deployers must:
Notify affected individuals that AI is being used. The notice must be clear, specific, and provided before the decision is made — not buried in a terms-of-service document. A law firm using AI for intake screening must tell potential clients that AI is involved in the evaluation before the screening occurs.
Provide a description of the AI system's purpose and the type of decision it's informing. Generic language ("we use AI to improve our services") doesn't satisfy the requirement. The notice must explain what the AI does in the specific decision context.
Offer the right to opt out of AI-assisted decision-making where feasible, or provide an alternative process. If a potential client objects to AI intake screening, the firm must have a human-only alternative available.
Provide appeal mechanisms. If an AI-assisted decision adversely affects an individual, the individual must have the ability to appeal and receive human review. The appeal process must be accessible and documented.
For law firms: this means intake forms, client engagement letters, and website disclosures need updating before June 1. Any AI-assisted client screening, matter evaluation, or resource allocation decision requires advance disclosure and appeal mechanisms.
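One way to make the notice-and-appeal obligations operational is to build them into the decision workflow itself, so that a decision can't be finalized without a recorded notice and an adverse outcome always has a human-review path. Here's a minimal sketch with hypothetical function and field names; this is not any vendor's API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical workflow sketch -- names are illustrative, not a vendor API.
@dataclass
class IntakeDecision:
    client_id: str
    ai_recommendation: str          # e.g., "accept", "decline", "refer"
    notice_sent_at: datetime | None = None
    human_reviewer: str | None = None
    finalized: bool = False

def record_ai_use_notice(decision: IntakeDecision) -> None:
    """Record that a specific, pre-decision AI-use notice went out.

    The notice text must name the decision type and the AI's role;
    generic "we use AI to improve our services" language doesn't qualify.
    """
    decision.notice_sent_at = datetime.now()

def finalize(decision: IntakeDecision) -> None:
    """Refuse to finalize unless the compliance preconditions are met."""
    if decision.notice_sent_at is None:
        raise RuntimeError("AI-use notice was never sent to this individual")
    if decision.ai_recommendation == "decline" and decision.human_reviewer is None:
        # Adverse AI recommendations route to a human before taking effect,
        # which also gives the appeal process somewhere to land.
        raise RuntimeError("Adverse AI recommendation requires human review")
    decision.finalized = True
```

The design choice worth copying: compliance failures surface as errors at decision time, not as findings in an AG investigation two years later.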
Algorithmic Discrimination and Bias Audits
The Act specifically targets algorithmic discrimination — AI systems that disproportionately disadvantage individuals based on protected characteristics (race, color, national origin, sex, religion, age, disability, sexual orientation, or veteran status).
Deployers must implement processes to monitor AI outputs for discriminatory patterns. This isn't optional aspirational language — it's a specific obligation with enforcement consequences. The monitoring must be ongoing, not a one-time check.
Developers have additional obligations: they must provide deployers with documentation of the training data used, known limitations, and bias testing results. Developers must also make available the results of any assessments conducted regarding algorithmic discrimination.
For law firms, the discrimination risk in AI-assisted intake is real and documented. Studies have shown that AI tools can exhibit bias in evaluating case descriptions that correlate with demographic characteristics — e.g., case descriptions mentioning geographic areas with higher minority populations, names associated with specific ethnic groups, or medical facilities serving particular communities. If your AI intake tool rejects a disproportionate number of cases from certain demographics, that's a compliance problem with teeth.
The practical step: audit your AI-assisted decisions quarterly. Track the demographic characteristics of accepted and rejected outcomes where available. Look for patterns. If the AI is making discriminatory distinctions, adjust the system or add human review for flagged categories.
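One common first-pass screen is the four-fifths rule borrowed from EEOC employment practice: flag any group whose acceptance rate falls below 80% of the highest group's rate. To be clear, SB 24-205 doesn't prescribe this or any other statistical test; it's simply a widely used, defensible place to start a quarterly audit. A sketch:

```python
from collections import defaultdict

def four_fifths_flags(decisions: list[dict], threshold: float = 0.8) -> list[str]:
    """Flag demographic groups whose acceptance rate falls below `threshold`
    times the highest group's rate (the four-fifths rule, a screening
    heuristic from employment practice -- not a test mandated by SB 24-205).

    decisions: [{"group": "A", "accepted": True}, ...], using whatever
    demographic categories you're able to track.
    """
    accepted: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for d in decisions:
        total[d["group"]] += 1
        accepted[d["group"]] += int(d["accepted"])
    rates = {g: accepted[g] / total[g] for g in total}
    if not rates:
        return []
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]
```

A flag isn't proof of discrimination, and small quarterly samples produce noisy rates. Treat a flag as the trigger for human review and a closer look, and document both the flag and the response: that record is exactly what supports the affirmative defense discussed below.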
Enforcement, Penalties, and the Attorney General's Role
The Colorado AG has exclusive enforcement authority under the Act. Private rights of action are specifically excluded — individuals can't sue for non-compliance. This is a deliberate choice that balances compliance obligations with litigation risk.
Penalties fall under the Colorado Consumer Protection Act framework, which provides for injunctive relief, civil penalties of up to $20,000 per violation, and attorney's fees. For systematic non-compliance (failing to complete risk assessments, failing to provide consumer notifications), each affected individual's decision could constitute a separate violation. The arithmetic escalates quickly: screen 500 Colorado applicants without the required notice, and that could in principle be 500 separate violations, or up to $10 million in theoretical exposure.
The AG's office has signaled a phased enforcement approach: education and voluntary compliance in the first year (June 2026 - June 2027), with formal enforcement actions beginning in year two. This doesn't mean you can wait — the compliance obligations are in effect on June 1, and the AG can investigate at any time. The grace period applies to enforcement discretion, not legal obligations.
Affirmative defense: the Act provides an affirmative defense for deployers who discover and cure algorithmic discrimination or compliance failures in good faith. If you identify a bias issue in your AI system and respond by documenting it, taking corrective action, and notifying the AG, the affirmative defense is available. This incentivizes proactive compliance over willful ignorance.
For law firms advising clients: the June 2026 deadline creates immediate advisory work. Every company using AI for employment, lending, insurance, or healthcare decisions in Colorado needs a compliance assessment. This is a real practice development opportunity — the regulatory work is urgent, the client base is broad, and most companies haven't started.
The Bottom Line: The Colorado AI Act (SB 24-205) takes effect June 1, 2026, and requires risk assessments, consumer notification, appeal mechanisms, and bias monitoring for any AI system making consequential decisions about Colorado residents. Law firms are covered as both deployers (AI intake tools) and potential advisors (client compliance work). Complete your risk assessments, update intake disclosures, and build appeal mechanisms before the deadline. The AG's phased enforcement approach gives breathing room on penalties but not on obligations.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
