In April 2026, Anthropic's Claude Mythos Preview isn't a product you can buy — it's a proof point about what the underlying model architecture can do. Anthropic demonstrated that a Claude-based system can autonomously find and exploit thousands of zero-day vulnerabilities across major operating systems and browsers, with performance that surpasses all but the most skilled human security researchers. That capability isn't available to the public — Mythos Preview is restricted to Project Glasswing partners. But the model architecture behind it is what runs Claude's current consumer and API tiers.

That's the frame litigators should be using. Not "is Claude approved by my bar association?" but "what does this reasoning capability mean for my hardest litigation tasks — and how does that stack up against specialized tools that require dedicated procurement?"

I tested Claude for 30 days on active litigation tasks: discovery review, deposition prep, motion drafting, and case timeline construction. It was strongest on comprehension of complex technical exhibits and on circuit-specific research. The cost was $20/month on the Pro plan.

Harvey and CoCounsel serve real workflow needs for firms that have already built procurement relationships around them. The question isn't whether they're legitimate — they are. It's whether you need what they add on top of the base model capability.


Why Litigators Need Offensive AI, Not Defensive AI

Most legal AI conversations start with risk mitigation: what can the bar prohibit, what might the court sanction, what data should never touch a third-party server. That defensive posture has merit. It also means litigators are running a year behind on the offensive case for AI — the question of how reasoning capability translates to winning more cases or doing better work on the ones you already have.

The offensive case starts with where attorney time actually goes in litigation. First-pass document review on a medium-sized production consumes associate hours that don't produce strategic insight — they produce a log. Deposition prep on a technical witness requires reading an expert report that runs dozens of pages, synthesizing what the expert is actually claiming, and building question lines. Brief research on circuit-specific issues requires pulling authorities, distinguishing cases, and organizing the argument. All three of those tasks — review, synthesis, research organization — are exactly what Claude's architecture handles well.

Offensive AI means treating Claude as a force multiplier on the cognitive work that precedes strategy, not as a replacement for strategy itself. The 30-year partner objection — "AI doesn't understand legal judgment" — is correct and irrelevant. No one is suggesting Claude files the motion. The question is whether Claude can compress the work that informs the motion from 10 hours to 2 hours, leaving the attorney more time to think about the judgment call.


Claude Mythos in Context: What Autonomous Reasoning Actually Demonstrated

On April 8, 2026, Anthropic announced that a Claude-based system had autonomously found and written working exploits for thousands of zero-day vulnerabilities across major operating systems and browsers. The performance benchmark: surpassing all but the most skilled human security researchers, with no human direction between iterations.

This matters for litigators not because the security capability is relevant to legal work — it isn't — but because of what it demonstrates about the underlying reasoning architecture. The same model that can reason through thousands of code paths without losing context can reason through a 200-page technical expert report without losing context. The same architecture that builds internally consistent exploit chains builds internally consistent argument chains.

The distinction to hold: Mythos Preview is a restricted capability demonstration, not the product you access at Claude.ai or through the API. What you're accessing in the $20/month Pro plan is the production model — a different offering built on the same underlying architecture. The Mythos demo tells you where the architecture ceiling is. The production model is below that ceiling but significantly above where most legal AI tools were two years ago.

For a litigator, the practical implication is that Claude's comprehension on genuinely complex materials — technical exhibits, scientific testimony, financial modeling — is strong enough to be operationally useful, not just directionally useful.


Brief-Writing With Claude: A 30-Day Litigation Test

Over 30 days of active use on litigation tasks, the pattern that emerged was consistent: Claude performs best when given a clear task with complete materials, and worst when asked to make inferences about facts not in its context window.

On discovery review, I fed Claude batches of deposition transcripts and asked it to identify factual inconsistencies with prior written discovery responses. The output quality was high enough to accelerate the review process substantially — Claude flagged discrepancies I then verified, rather than requiring me to find them first. For a solo or small-firm litigator on a matter where hiring a review team isn't viable, this changes the economics of taking complex cases.
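If you want to run that same pass through the API rather than the Claude.ai interface, the shape of it is simple. Here is a minimal sketch assuming the official anthropic Python SDK; the file paths, model string, and flagging instructions are placeholders, not the exact prompts from my test.

```python
# Minimal sketch of the discovery-review pass described above.
# Assumes the official anthropic Python SDK and an ANTHROPIC_API_KEY in the
# environment; file paths and the model string are placeholders.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Prior written discovery responses act as the fixed reference material.
prior_responses = Path("discovery/prior_responses.txt").read_text()

SYSTEM_PROMPT = (
    "You are assisting a litigator with first-pass discovery review. "
    "Compare each deposition transcript excerpt against the prior written "
    "discovery responses provided. Flag every factual inconsistency with a "
    "transcript citation (page:line), the conflicting prior response, and a "
    "one-sentence description of the conflict. Do not speculate beyond the text."
)

def review_transcript(transcript_text: str) -> str:
    """Send one transcript batch to Claude and return the flagged inconsistencies."""
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever model tier you subscribe to
        max_tokens=2000,
        system=SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": (
                f"PRIOR DISCOVERY RESPONSES:\n{prior_responses}\n\n"
                f"DEPOSITION TRANSCRIPT EXCERPT:\n{transcript_text}"
            ),
        }],
    )
    return message.content[0].text

# Review each transcript in the batch; every flag still gets attorney verification.
for transcript_path in sorted(Path("discovery/transcripts").glob("*.txt")):
    print(f"--- {transcript_path.name} ---")
    print(review_transcript(transcript_path.read_text()))
```

The model accelerates finding; the attorney still does the confirming before anything lands in the review log.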

On brief research, I provided circuit court decisions and asked Claude to map the circuit-specific treatment of a legal standard, identify the strongest cases on each side, and draft a summary of the argument landscape. The research summary was accurate on the cases I provided. The gaps were predictable: Claude's training has a cutoff date, so anything filed in the last six months wasn't in its knowledge base unless I provided it directly. That's a workflow requirement, not a capability failure — you provide current sources, Claude synthesizes them.

On deposition prep, I gave Claude an expert report and asked it to generate question lines that would expose the methodological assumptions the expert was making. The output was operationally useful as a starting point, though the final question set required attorney judgment about which lines were worth pursuing given the specific judge and jury profile.

Claude Pro costs $20/month. Claude Team Standard runs $20/seat billed annually ($25/seat monthly). Claude Max starts at $100/month for high-volume users. Harvey and Lexis+ AI are quote-only for enterprise deployment. CoCounsel's third-party-reported tiers range from $75 to $500/user/month (per costbench.com). The pricing differential is significant enough that for any firm without an existing procurement relationship with a specialized vendor, the Claude-first evaluation is the right starting point.
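Annualized from those reported figures, Claude Pro is $240 per attorney per year, while CoCounsel's reported range works out to roughly $900 to $6,000 per user per year before any implementation or training time.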


Specialized Legal AI vs. Claude: Understanding the Cost-Capability Tradeoff

Specialized legal AI tools — Harvey, CoCounsel, Lexis+ AI — add real value over the raw model. That value is primarily workflow integration, not raw capability. Harvey builds intake-to-output pipelines that sit inside your existing systems. CoCounsel integrates with Westlaw, Thomson Reuters' legal research database. Lexis+ AI pulls from the Lexis citation database directly. Those integrations solve a real problem: the workflow friction of moving between a research tool and a drafting environment.

What they don't provide is fundamentally superior reasoning from the underlying model. Harvey's GPT-based architecture and Claude's architecture are different, and neither is categorically better for every task. For complex open-ended analysis — technical exhibit comprehension, novel legal argument construction — Claude's Constitutional AI training and long-context performance are well suited to that kind of work.

The cost-capability tradeoff looks like this: Claude Pro at $20/month gives you strong reasoning capability with no workflow integration. Specialized legal AI gives you workflow integration with managed vendor accountability at enterprise price points. For a firm with 50+ attorneys where workflow integration across practice groups is the bottleneck, specialized tools may be worth the premium. For solos and small firms where the bottleneck is cognitive work on individual matters, Claude at $20/month is the right starting point.


How to Build a Mythos-Informed Litigation Stack Without Enterprise Lock-In

A functional litigation AI stack at the solo or small-firm level doesn't require enterprise procurement. It requires three things: a Claude subscription configured correctly, a Westlaw or Lexis account for citation verification, and a set of reusable system prompts for your practice-specific tasks.

Configuration for litigation work: subscribe to Claude Pro, turn off any setting that allows conversations to be used for model training on an account that touches client communications, and route anything that needs contractual data protections through the API, which is governed by Anthropic's commercial terms rather than the consumer terms. Florida Bar Ethics Opinion 24-1 (January 2024) requires confidentiality safeguards; those configurations are the kind of reasonable precautions it contemplates.

System prompts worth building: a discovery review prompt that specifies what inconsistencies you're looking for and how to flag them; a brief research prompt that specifies the jurisdiction, the legal standard, and the specific question; a deposition prep prompt that takes an expert report and generates methodological challenge lines.
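Here is what those reusable prompts can look like, kept as a small module in the matter folder and either pasted into a Claude.ai project or passed as the system parameter through the API. The wording is illustrative, not tested boilerplate; the point is that each template pins down task, scope, and output format so a new session only needs the matter-specific materials.

```python
# Illustrative prompt templates; the wording is a starting point, not tested boilerplate.
# Each template pins down task, scope, and output format so individual sessions
# only need the matter-specific materials.

DISCOVERY_REVIEW_PROMPT = """\
You are assisting a litigator with first-pass discovery review.
Compare the deposition transcript against the prior written discovery responses.
Flag only factual inconsistencies. For each flag, give the transcript citation
(page:line), the conflicting response, and one sentence describing the conflict.
Do not characterize credibility or speculate beyond the provided text."""

BRIEF_RESEARCH_PROMPT = """\
You are assisting with brief research in the {jurisdiction}.
The legal standard at issue: {standard}.
Using only the authorities provided in this conversation, map how the standard
has been applied, identify the strongest cases for each side, and note any
splits or unresolved questions. Cite every proposition to a provided authority."""

DEPOSITION_PREP_PROMPT = """\
You are assisting with deposition preparation for an opposing expert.
From the expert report provided, list each methodological assumption the expert
relies on, why it matters to the opinion, and a line of questions designed to
test it. Order the question lines from most to least consequential."""

# Example: fill in the matter-specific details before use.
print(BRIEF_RESEARCH_PROMPT.format(
    jurisdiction="Eleventh Circuit",
    standard="spoliation sanctions under Rule 37(e)",
))
```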

Westlaw or Lexis stays in the stack as the citation backbone. Claude is the reasoning layer on top of the sources you provide it. That combination — verified authorities plus AI synthesis — is more defensible than either alone and cheaper than any enterprise legal AI platform.


My take: For solos and small firms without an enterprise procurement budget, Claude at $20/month does the cognitive heavy lifting now — you build the workflow yourself, which takes about a week. For larger firms with integration requirements, compliance needs, or vendor-accountability mandates, Harvey and CoCounsel offer genuine value that goes beyond the raw model. The tradeoff is about cost and flexibility, not about the character of the underlying capability.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes, follow me on LinkedIn or email me directly.