AI-assisted jury selection just got its ethical framework. ABA Formal Opinion 517, issued in 2026, specifically addresses AI use in jury selection — making it the first practice-specific AI ethics opinion from the ABA. It doesn't ban AI in voir dire. It sets boundaries, and firms that understand those boundaries have a significant advantage over those still guessing.

The technology is already here. Jury Analyst and similar tools use machine learning to analyze juror questionnaire responses, identify bias patterns, and predict juror leanings based on demographic and psychographic data. The question isn't whether AI changes jury selection — it's whether you're using it within the ethical lines the ABA just drew.


ABA Opinion 517: What It Actually Says About AI in Jury Selection

Opinion 517 establishes three key principles. First, attorneys can use AI tools to research prospective jurors using publicly available information — social media, public records, news articles — without disclosure to the court or opposing counsel. Second, AI-generated juror profiles and predictions are attorney work product, but the underlying data sources must be verifiable. Third — and this is where firms get tripped up — AI tools cannot be used to communicate with or surveil prospective jurors in real time during voir dire without court approval. The opinion draws a clear line between research (allowed) and surveillance (restricted). Firms that stay on the research side of that line have a powerful tool. Firms that cross it face sanctions and potential mistrial.

How AI Jury Selection Tools Actually Work

Jury Analyst and comparable platforms aggregate public data on prospective jurors — social media posts, voter registration records, property records, litigation history, and demographic data — then apply machine learning models to predict juror attitudes on key case issues. The models are trained on post-trial juror surveys, verdict data, and academic research on juror decision-making. The output isn't a simple "good juror / bad juror" rating. It's a multi-factor analysis: predicted attitude toward corporations, sensitivity to emotional testimony, leadership tendency in deliberations, and likelihood of holding out against a majority verdict. Trial attorneys use this output to prioritize their limited peremptory challenges and to focus voir dire questioning on the jurors most likely to be unfavorable.
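To make the multi-factor output concrete, here is a minimal sketch of how a weighted juror-risk aggregation might work. The factor names, weights, and 0-to-1 scoring scale are illustrative assumptions — vendors like Jury Analyst do not publish their actual models, and real platforms use far more sophisticated ML than a weighted sum.

```python
from dataclasses import dataclass

# Hypothetical factor weights -- real platforms do not publish their models;
# these names and numbers are illustrative assumptions only.
FACTORS = {
    "corporate_attitude": 0.35,     # predicted attitude toward corporate parties
    "emotional_sensitivity": 0.25,  # responsiveness to emotional testimony
    "leadership_tendency": 0.25,    # likelihood of leading deliberations
    "holdout_likelihood": 0.15,     # likelihood of holding out against a majority
}

@dataclass
class JurorProfile:
    juror_id: str
    scores: dict  # each factor scored 0.0 (favorable) to 1.0 (unfavorable)

def risk_score(profile: JurorProfile) -> float:
    """Weighted aggregate: higher means higher predicted risk for our side.
    Missing factors default to a neutral 0.5."""
    return sum(w * profile.scores.get(f, 0.5) for f, w in FACTORS.items())

def prioritize_strikes(profiles: list, n_peremptories: int) -> list:
    """Rank jurors by predicted risk to prioritize limited peremptory challenges."""
    ranked = sorted(profiles, key=risk_score, reverse=True)
    return [p.juror_id for p in ranked[:n_peremptories]]
```

With two hypothetical panelists — one scoring 0.9 on corporate hostility and 0.8 on leadership, the other near-neutral across the board — `prioritize_strikes` surfaces the high-risk juror first, which mirrors how attorneys use these tools to allocate scarce strikes rather than to make the final call.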

The Ethical Boundaries You Can't Cross

Opinion 517 is permissive on research but strict on three things. No real-time social media monitoring of jurors during trial. Checking a juror's public Facebook profile before selection is fine; monitoring their posts during deliberation is not. No AI-assisted communication with jurors, including indirect methods like targeted social media advertising aimed at the jury pool. No use of AI predictions as a substitute for attorney judgment — AI informs peremptory challenges, but the attorney must exercise independent judgment and document the non-discriminatory basis for each strike. That last point matters for Batson challenges: if opposing counsel argues your strikes were discriminatory, "the AI told me to" isn't a defense. You need race-neutral, gender-neutral reasoning that you — not the algorithm — can articulate.
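The documentation requirement above lends itself to a simple record-keeping discipline: log the attorney's own rationale for each strike, separate from anything the model produced. The sketch below is a hypothetical illustration — the `StrikeRecord` structure and the keyword screen are my assumptions, and a crude substring check is no substitute for attorney review; it only shows the shape of a record that keeps AI input and attorney judgment distinct.

```python
from dataclasses import dataclass, field
from datetime import date

# Batson / J.E.B. protected classifications -- a keyword screen is a crude
# illustration; real compliance review requires attorney judgment.
PROHIBITED_BASES = ("race", "gender", "ethnicity")

@dataclass
class StrikeRecord:
    juror_id: str
    attorney_rationale: str   # must be articulable by the attorney, not the model
    ai_flagged: bool = False  # AI input is noted, but is never the stated basis
    recorded_on: date = field(default_factory=date.today)

def validate_strike(record: StrikeRecord) -> bool:
    """Reject a record with no independent rationale, or one whose stated
    rationale facially invokes a protected classification."""
    text = record.attorney_rationale.strip().lower()
    if not text:
        return False  # "the AI told me to" leaves this field empty -- not a defense
    return not any(term in text for term in PROHIBITED_BASES)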

Practical Workflow: AI-Assisted Voir Dire Preparation

Here's how the top trial firms are integrating AI into jury selection. Step one: run the juror list through AI analysis 48-72 hours before voir dire. This generates profiles with predicted attitudes, flagged social media content, and litigation history. Step two: attorneys review the AI profiles and develop targeted voir dire questions for jurors flagged as potentially unfavorable. Step three: during voir dire, use the AI-generated data to guide follow-up questions — not to replace the attorney's instincts, but to supplement them with data points the attorney couldn't have gathered manually. Step four: document peremptory challenge reasoning independent of AI predictions, ensuring Batson compliance. Firms using this workflow report that it improves their ability to identify stealth jurors — panelists who conceal biases during voir dire.
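The four steps above amount to a deadline checklist keyed to the voir dire date. This sketch is an assumption about how a firm might operationalize the timeline — the `WORKFLOW` entries and hour offsets simply restate the steps and the 48-72 hour window described above, not any vendor's or firm's actual process.

```python
from datetime import datetime, timedelta

# Hour offsets before voir dire; the 72/48-hour marks follow the prep
# window described above. Structure and task wording are illustrative.
WORKFLOW = [
    (72, "Run juror list through AI analysis; generate profiles"),
    (48, "Attorney review of AI profiles; draft targeted voir dire questions"),
    (0,  "Voir dire: use AI data to guide follow-up questions"),
    (0,  "Document peremptory rationale independent of AI predictions"),
]

def prep_schedule(voir_dire_start: datetime) -> list:
    """Return (deadline, task) pairs, working back from the voir dire start."""
    return [(voir_dire_start - timedelta(hours=h), task) for h, task in WORKFLOW]
```

Given a voir dire start of 9:00 a.m. on March 2, the first deadline lands at 9:00 a.m. on February 27 — the 72-hour mark — with the two in-court tasks pinned to the voir dire start itself.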

Cost, Access, and the Competitive Reality

Jury selection AI isn't cheap — Jury Analyst and similar platforms run $2,000-10,000 per trial depending on jury pool size and analysis depth. For high-stakes litigation where a favorable jury can mean a difference of millions in the verdict, that's a rounding error. For smaller cases, the math is harder. The competitive reality is this: plaintiff firms in mass tort and personal injury are adopting AI jury selection faster than defense firms, largely because their contingency fee model makes the per-trial investment easier to justify. Defense firms serving large corporate clients are catching up as in-house counsel increasingly ask whether their trial teams are using available technology. The gap between AI-assisted and non-AI-assisted jury selection will widen as the tools improve — and firms using AI-assisted selection report measurably better outcomes from the juries they seat.

The Bottom Line: ABA Opinion 517 gave AI jury selection a green light with guardrails. Research is fine, surveillance isn't. AI predictions inform your strategy but don't replace your judgment. At $2,000-10,000 per trial, AI-assisted jury selection is already standard for high-stakes litigation. If you're trying cases without it, you're bringing instinct to a data fight.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.