AI tools used in legal practice can embed and amplify bias, and Mobley v. Workday (N.D. Cal.) is the wake-up call. That case allowed discrimination claims to proceed against the vendor of AI-powered hiring tools, signaling that companies that build or deploy biased AI systems can face liability even when the bias is in the algorithm, not the intent.

For law firms, the implications extend beyond employment decisions. Any AI tool that influences case assessment, client intake, risk scoring, or litigation prediction carries bias risk. The Colorado AI Act (effective June 2026) will require impact assessments for high-risk AI systems, and legal tools that affect client outcomes are likely within scope. Firms that don't test for bias now will scramble when the regulatory requirements arrive.

Mobley v. Workday: The Case That Changed the Calculus

Derek Mobley filed a discrimination complaint against Workday, Inc., alleging that the company's AI-powered hiring and screening tools systematically discriminated against him based on race, age, and disability. The Northern District of California allowed the claims to proceed, finding that Workday could be liable as an "agent" of the employers using its platform.

The ruling matters for law firms on two levels. First, as a legal precedent: AI tool vendors and deployers can face discrimination liability when their tools produce disparate outcomes, even without discriminatory intent. This applies to any firm deploying AI tools that influence decisions affecting people — from case assessment to client intake.

Second, as a practice area signal: Employment discrimination claims involving AI are accelerating. The EEOC has issued guidance on AI and employment discrimination. State legislatures are passing AI bias laws. Firms that understand AI bias have a first-mover advantage in this emerging practice area.

The core lesson: "The algorithm did it" isn't a defense. If you deploy an AI tool and it produces biased outcomes, you're responsible. That applies to your clients, and it applies to your firm.

How Bias Enters Legal AI Tools

AI tools learn from training data. If the training data reflects historical bias, the tool reproduces it, often at scale and with false confidence.

Litigation prediction tools trained on historical case outcomes may undervalue cases from jurisdictions with historically lower damages awards — which often correlate with demographics. A tool that predicts lower damages for cases in majority-minority jurisdictions isn't making a legal judgment; it's repeating a pattern that reflects systemic inequality.
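
To make the mechanism concrete, here is a deliberately tiny sketch, with invented figures and invented jurisdictions, of how a predictor fit on historical awards reproduces whatever disparity the history contains:

```python
# Minimal illustration with invented data: a "model" that predicts the
# historical mean award for a venue reproduces the venue's historical bias.
from statistics import mean

historical_awards = {
    # jurisdiction -> past damages awards (hypothetical figures)
    "County A": [900_000, 850_000, 1_100_000],
    "County B": [300_000, 275_000, 340_000],  # historically suppressed awards
}

# The simplest possible predictor: the historical mean for the venue.
predicted = {venue: mean(awards) for venue, awards in historical_awards.items()}

for venue, value in predicted.items():
    print(f"{venue}: predicted damages ~ ${value:,.0f}")

# Identical claims now receive different "objective" valuations by venue.
# If venue correlates with demographics, the disparity rides along.
```

No real litigation tool is this simple, but the failure mode is the same: the prediction is a compressed replay of the history it was trained on.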

Contract analysis tools trained predominantly on agreements from large corporate transactions may flag provisions as "non-standard" when they're actually standard in different market segments. Terms common in minority-owned business contracts or community development deals might be incorrectly identified as risky.

Legal research tools can exhibit citation bias — prioritizing cases from certain jurisdictions, courts, or time periods based on their prevalence in training data. This can systematically underrepresent relevant authority from underserved jurisdictions.

Client intake AI that scores potential clients based on historical firm data may reproduce whatever biases existed in the firm's historical client selection. If the firm historically underserved certain demographics, the AI will rate those demographics lower.

The insidious part: bias in AI looks like objectivity. The tool doesn't say "I'm biased." It produces a score, a recommendation, or an analysis with the same confidence regardless of whether the output is fair or discriminatory.

The Colorado AI Act and Coming Regulatory Requirements

The Colorado AI Act (SB 24-205, effective June 30, 2026) is the first comprehensive state AI bias law in the United States. It requires deployers of "high-risk AI systems" to:

Conduct impact assessments that evaluate the AI system's potential for algorithmic discrimination based on age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected characteristics.

Implement risk management policies including documentation of the AI system's purpose, intended uses, known limitations, and the types of data used.

Notify consumers when a high-risk AI system makes or substantially contributes to a consequential decision affecting them.

Maintain records of impact assessments, risk management documentation, and any known instances of algorithmic discrimination.

For law firms, the key question is whether legal AI tools constitute "high-risk AI systems." Under the Colorado Act, high-risk AI systems include those that make or substantially contribute to consequential decisions in areas including employment, financial services, insurance, and legal services. Legal AI tools used for case assessment, risk scoring, or client intake likely qualify.

Other states are following. Illinois, Connecticut, and several others have proposed or enacted AI bias legislation. The White House Blueprint for an AI Bill of Rights, while non-binding, signals the direction of federal policy. Firms operating in multiple states need to prepare for a patchwork of requirements.

How to Test Your AI Tools for Bias

Testing for bias doesn't require a data science degree. It requires structured comparison.

Test 1: Demographic variation. Run the same legal scenario through the AI tool with variations in party names, locations, and other characteristics that shouldn't affect legal analysis. Does a contract analysis change when the company name suggests a different demographic? Does case assessment shift when the plaintiff's name changes? Does risk scoring vary by jurisdiction in ways that correlate with demographics?
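
A minimal sketch of what that comparison looks like in practice, assuming your tool exposes some callable scoring endpoint. The `score_case` function and the name and venue lists below are hypothetical placeholders, not any vendor's actual API:

```python
# Demographic-variation (counterfactual pair) test harness.
# `score_case` is a hypothetical stand-in for the vendor tool's API.
from itertools import product

BASE_FACTS = "Plaintiff {name} alleges wrongful termination in {venue}."

NAME_VARIANTS = ["Emily Walsh", "Lakisha Washington", "Jamal Robinson"]
VENUE_VARIANTS = ["Travis County, TX", "Hidalgo County, TX"]

def score_case(facts: str) -> float:
    """Placeholder: call the real tool here and return its numeric score."""
    raise NotImplementedError

def demographic_variation_test() -> float:
    scores = {}
    for name, venue in product(NAME_VARIANTS, VENUE_VARIANTS):
        scores[(name, venue)] = score_case(BASE_FACTS.format(name=name, venue=venue))
    # Names and venues are legally irrelevant here; a material spread across
    # otherwise identical facts is a bias signal worth documenting.
    return max(scores.values()) - min(scores.values())
```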

Test 2: Historical comparison. Compare the AI tool's recommendations against outcomes in cases with similar facts but different demographic profiles. If the tool consistently recommends lower settlements for cases involving minority plaintiffs, that's a bias signal.
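
One way to screen that comparison, borrowing the EEOC's "four-fifths" adverse-impact heuristic as a rough threshold. The figures below are invented for illustration:

```python
# Historical-comparison screen: compare the tool's mean recommendation
# across matched case groups. Invented figures for illustration.
from statistics import mean

recommended_settlements = {
    # demographic group of matched historical cases -> tool's recommendations
    "group_a": [520_000, 480_000, 510_000],
    "group_b": [390_000, 360_000, 410_000],
}

means = {group: mean(vals) for group, vals in recommended_settlements.items()}
ratio = min(means.values()) / max(means.values())

print(f"group means: {means}")
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths screen: a ratio below 0.8 warrants investigation
    print("Potential bias signal: investigate and document.")
```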

Test 3: Edge case testing. AI bias often surfaces at decision boundaries. Test cases where the facts are borderline — cases that could go either way. If demographic factors tip the AI's recommendation in one direction, that reveals embedded bias.
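
A sketch of a flip-rate check on borderline facts. The `recommend` function and the fact templates are hypothetical placeholders for your tool and your matters:

```python
# Edge-case (decision-boundary) test: on borderline facts, count how often
# a demographic swap flips the recommendation. `recommend` is hypothetical.
BORDERLINE_CASES = [
    "Plaintiff {name} resigned after one documented incident of conflict.",
    "Plaintiff {name} has mixed reviews and a single internal complaint.",
]

def recommend(facts: str) -> str:
    """Placeholder: returns e.g. 'settle' or 'litigate' from the real tool."""
    raise NotImplementedError

def flip_rate(variant_a: str, variant_b: str) -> float:
    flips = sum(
        recommend(t.format(name=variant_a)) != recommend(t.format(name=variant_b))
        for t in BORDERLINE_CASES
    )
    # A flip rate well above zero on legally identical borderline facts means
    # the demographic variable is doing work it should not be doing.
    return flips / len(BORDERLINE_CASES)
```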

Test 4: Vendor data transparency. Ask your AI vendors: What data was the model trained on? What demographic representation exists in the training data? What bias testing has the vendor performed? What were the results? Vendors that can't or won't answer these questions haven't tested for bias.

Document everything. The testing results, the methodology, the dates, and the remediation steps. This documentation is your defense if bias is ever alleged — and under the Colorado AI Act, it may be legally required.
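
There is no prescribed format yet. As one illustration, a structured record like the following captures the elements named above; the field names are my own, not a statutory template:

```python
# One possible shape for a bias-test record; not a statutory template.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasTestRecord:
    tool: str                  # vendor and product tested
    test_type: str             # e.g. "demographic variation"
    methodology: str           # what was varied, how many runs
    run_date: date
    findings: str              # observed spreads, ratios, flip rates
    bias_detected: bool
    remediation: list = field(default_factory=list)

record = BiasTestRecord(
    tool="ExampleVendor CaseScore",  # hypothetical product name
    test_type="demographic variation",
    methodology="Six counterfactual name/venue pairs on identical facts",
    run_date=date(2025, 11, 3),
    findings="14-point score spread across name variants",
    bias_detected=True,
    remediation=["Notified vendor", "Suspended use pending remediation"],
)
```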

What Firms Should Do Before June 2026

The Colorado AI Act's effective date is a hard deadline, but smart firms are acting now.

Immediate actions:

Inventory every AI tool used in the firm and classify each as high-risk or low-risk based on whether it influences decisions affecting clients, employees, or third parties. Any tool used for case assessment, risk scoring, client intake, hiring, or performance evaluation is high-risk. (A minimal classification sketch follows this list of actions.)

Conduct baseline bias testing on high-risk tools using the framework above. Document the results and any remediation steps taken.

Review vendor contracts for bias-related representations, testing obligations, and indemnification provisions. If your vendor hasn't tested for bias, your contract should require it.
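
As promised above, a minimal sketch of the inventory-and-classification step. The tool names are invented, and the high-risk categories simply track the uses named earlier, not the statute's exact definitions:

```python
# Inventory sketch: classify each tool by whether any of its uses fall in
# the high-risk categories named above. Tool names are invented.
HIGH_RISK_USES = {
    "case assessment", "risk scoring", "client intake",
    "hiring", "performance evaluation",
}

inventory = [
    ("ResearchAssist", {"legal research"}),
    ("IntakeRank", {"client intake"}),
    ("DraftCheck", {"contract review", "risk scoring"}),
]

for tool, uses in inventory:
    risk = "HIGH" if uses & HIGH_RISK_USES else "low"
    print(f"{tool}: {risk} risk ({', '.join(sorted(uses))})")
```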

Policy development:

Add bias testing to your annual AI audit framework.

Develop a protocol for responding to identified bias, including whether to discontinue use of a biased tool pending remediation.

Create documentation templates for impact assessments that comply with the Colorado AI Act.

Practice opportunity:

Firms that build AI bias expertise now position themselves to advise clients when the regulatory requirements hit. Employment law, tech transactions, and regulatory compliance practices should be developing AI bias capabilities. The firms that understand AI bias will be the ones clients call when they need to comply with the Colorado AI Act.

The Bottom Line: Legal AI tools can embed bias in case assessment, risk scoring, and client decisions. Mobley v. Workday showed that deployers can face liability, the Colorado AI Act (June 2026) will require impact assessments, and firms that test now gain both compliance protection and a competitive practice-area advantage.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.