E-discovery is where AI proved itself in law first — and it's still where the ROI is most dramatic. Technology-assisted review (TAR) reduced document review costs by 60-80% compared to manual review when it launched a decade ago. Now AI-powered continuous active learning has pushed that further, and platforms like Relativity, Everlaw, and DISCO are embedding large language models directly into the review workflow.
The firms still running linear review — hiring contract attorneys to click through documents one at a time — are burning money. A modern AI-assisted discovery workflow processes 1 million documents in the time it used to take to review 50,000. The technology isn't new anymore. The competitive gap is between firms that have optimized their AI review protocols and firms that have yet to build one.
The 5-Stage AI Discovery Workflow
Stage 1: Collection. Data sources are identified and preserved. AI helps here with automated custodian identification — tools like Relativity Collect and Exterro pull from Microsoft 365, Slack, Google Workspace, and mobile devices. The AI layer identifies custodians likely to have relevant data based on organizational charts and communication patterns.
Stage 2: Processing. Raw data gets de-duplicated, de-NISTed (removing system files), and converted to reviewable formats. Near-duplicate detection clusters similar documents together. Relativity Processing and Nuix handle this at scale — processing rates of 50-100 GB/hour are standard.
Stage 3: AI-Powered Review. This is the game-changer. Continuous active learning (CAL) models learn from reviewer decisions in real-time, pushing the most likely relevant documents to the top. After 1,000-2,000 coding decisions, the AI is predicting relevance with 90%+ accuracy. Reviewers focus on borderline documents; the AI handles the clear calls.
Stage 4: Quality Control. Statistical sampling validates the AI's predictions. Elusion testing confirms the non-relevant set doesn't contain missed documents. Senior attorneys review the AI's privilege designations. This step is non-negotiable — courts require defensible validation of TAR results.
Stage 5: Production. Automated redaction, Bates stamping, and format conversion. Tools like Relativity and Everlaw handle production in native, TIFF, or PDF format with full metadata preservation.
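To make the Stage 4 validation concrete, here's a minimal elusion-test sketch in Python. It isn't any platform's built-in feature; the sample size, confidence level, and function names are illustrative assumptions. The idea is simply to draw a random sample from the documents the model coded non-relevant, have a human check them, and report an upper bound on the rate of missed relevant documents.

```python
import random
from scipy.stats import beta

def elusion_test(discard_pile, is_relevant, sample_size=400, confidence=0.95):
    """Estimate how many relevant documents the model may have missed.

    `discard_pile` is the list of documents the model coded non-relevant;
    `is_relevant(doc)` stands in for a human reviewer's call on a sampled doc.
    """
    sample = random.sample(discard_pile, min(sample_size, len(discard_pile)))
    missed = sum(1 for doc in sample if is_relevant(doc))
    n = len(sample)

    # Exact (Clopper-Pearson) one-sided upper bound on the elusion rate.
    upper = 1.0 if missed == n else beta.ppf(confidence, missed + 1, n - missed)

    return {
        "sampled": n,
        "missed_relevant_in_sample": missed,
        "elusion_rate": missed / n,
        "elusion_upper_bound": upper,
        # Worst-case missed documents projected across the full discard pile.
        "projected_missed_max": int(upper * len(discard_pile)),
    }
```

A defensible protocol pairs a test like this with an overall recall estimate, and the acceptable elusion rate should be agreed with the case team (and, where applicable, in the ESI protocol) before review begins.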
Relativity vs Everlaw vs DISCO: Platform Comparison
Relativity (RelativityOne) is the market leader with roughly 70% of AmLaw 200 firms using it. Strengths: deepest feature set, strongest TAR implementation (Relativity aiR uses transformer models), massive ecosystem of add-ons. Weaknesses: complex pricing, a steep learning curve, and a need for dedicated administrators. Typical cost: $18-25 per GB/month for RelativityOne cloud.
Everlaw is the modern challenger. Its interface is dramatically better than Relativity's, making it easier to train contract reviewers. Its AI features include predictive coding, automatic clause detection, and a new LLM-powered search. Strengths: best user experience in e-discovery, strong collaboration features for co-counsel. Weakness: fewer third-party integrations than Relativity. Typical cost: $20-30 per GB/month.
DISCO filed for bankruptcy in 2024 and was acquired, but the platform remains functional and has loyal users. Its AI was competitive, and its pricing was often lower than Relativity or Everlaw. If you're on DISCO, evaluate migration timelines but don't panic — the platform still works.
For most mid-size firms, Everlaw is the best new investment. Better UX means faster reviewer training, lower error rates, and less time spent on platform administration.
TAR 2.0 and Continuous Active Learning: How It Actually Works
Forget TAR 1.0 (the seed-set approach where you trained a model on a subset and applied it once). TAR 2.0 / Continuous Active Learning (CAL) is the standard now, and it's fundamentally different.
With CAL, the AI model updates after every batch of reviewer decisions. It continuously re-ranks the remaining documents, always pushing the documents most likely to be relevant to the reviewer next. This means:
1. You don't need a seed set. Start reviewing from any point and the model learns.
2. Recall improves continuously. The more you review, the better the model gets at finding relevant documents.
3. You can stop when the gain rate drops. Once the AI is finding fewer than 1 relevant document per 100 reviewed, you've likely captured the vast majority of relevant materials.
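For readers who want to see the mechanics, here's a rough sketch of a CAL loop in Python. It is a toy, not any vendor's implementation: the TF-IDF features, logistic regression model, batch size, and stopping threshold are all assumptions chosen for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(documents, review_fn, batch_size=100, stop_gain_rate=0.01, seed_batch=100):
    """Toy continuous active learning loop.

    `documents` is a list of raw text strings; `review_fn(index) -> bool`
    stands in for a human reviewer's relevance call on one document.
    """
    vectors = TfidfVectorizer(max_features=50_000).fit_transform(documents)
    unreviewed = list(range(len(documents)))
    labels = {}  # index -> reviewer decision (True = relevant)

    # Bootstrap: review an arbitrary first batch (no curated seed set required).
    for idx in unreviewed[:seed_batch]:
        labels[idx] = review_fn(idx)
    unreviewed = [i for i in unreviewed if i not in labels]

    while unreviewed:
        train_idx = list(labels)
        y = np.array([labels[i] for i in train_idx])
        if y.all() or not y.any():
            # No signal yet (only one class coded): keep reviewing in order.
            batch = unreviewed[:batch_size]
        else:
            # Refit on every decision so far, then re-rank the remaining pool.
            model = LogisticRegression(max_iter=1000).fit(vectors[train_idx], y)
            scores = model.predict_proba(vectors[unreviewed])[:, 1]
            # Relevance feedback: push the most-likely-relevant documents next.
            batch = [unreviewed[i] for i in np.argsort(-scores)[:batch_size]]

        found = 0
        for idx in batch:
            labels[idx] = review_fn(idx)
            found += labels[idx]
        unreviewed = [i for i in unreviewed if i not in labels]

        # Stop when fewer than ~1 relevant doc per 100 reviewed is being found.
        if len(labels) > 2 * seed_batch and found / len(batch) < stop_gain_rate:
            break

    return labels
```

The platforms handle the modeling for you; the part worth internalizing is the loop itself: review a batch, refit, re-rank what's left, and watch the gain rate as the stopping signal.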
Relativity aiR takes this further with transformer-based models that understand document meaning, not just keyword overlap. You can describe what you're looking for in natural language — "communications about the merger timeline between executives" — and the AI surfaces relevant documents even if they don't contain those exact words.
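Relativity hasn't published aiR's internals, but the underlying idea of meaning-based retrieval is easy to demonstrate with an open-source embedding model. This is generic semantic search, not Relativity's API; the model name and sample documents are made up for the example.

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-embedding model works for the sketch; this one is small and public.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Re: closing schedule - board wants signatures before the end of Q3",
    "Lunch order for the deposition team",
    "CEO to CFO: can we accelerate the integration milestones after announcement?",
]
query = "communications about the merger timeline between executives"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks documents by meaning, not keyword overlap.
scores = util.cos_sim(query_emb, doc_emb)[0]
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

With a reasonable embedding model, the deal-related messages should score well above the lunch order even though neither contains the query's exact words.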
Courts have consistently upheld TAR/CAL as proportional and defensible. Rio Tinto v. Vale (2015) and In re Biomet (2013) confirmed that producing parties may rely on TAR in place of exhaustive manual review.
Privilege Review: The AI Challenge That's Getting Solved
Privilege review has been the hardest discovery task to automate. Missing a privileged document in production is a potential waiver — the stakes are too high for pure automation.
But AI is making privilege review faster and more accurate. Relativity's privilege identification flags documents containing attorney names, legal department email domains, and privilege-indicator language. It doesn't make the final call — a human does — but it prioritizes the documents most likely to be privileged.
Everlaw's privilege prediction uses machine learning trained on your prior privilege designations in the same matter. After coding 500-1,000 documents, it predicts privilege status on the remaining set with 85-90% accuracy on clear calls.
The best protocol: AI flags potential privilege → junior attorney reviews AI-flagged documents → senior attorney reviews all "privileged" designations and a sample of "not privileged" AI predictions. This catches the edge cases (work product doctrine, joint defense privilege, crime-fraud exception) that AI consistently struggles with.
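As a sketch of how that protocol can be wired up, the routing below splits documents into a junior-review queue and a senior QC sample. The threshold, sample rate, and function names are hypothetical; the privilege designations produced by the junior pass would feed the senior queue as review progresses.

```python
import random

def build_privilege_queues(docs, priv_score, flag_threshold=0.3, qc_sample_rate=0.05):
    """Route documents into the two-tier privilege review described above.

    `priv_score(doc)` stands in for whatever privilege-likelihood signal the
    platform exposes; the threshold and sample rate are placeholders a case
    team would set per matter.
    """
    flagged = [d for d in docs if priv_score(d) >= flag_threshold]      # junior attorneys review these
    not_flagged = [d for d in docs if priv_score(d) < flag_threshold]

    # Senior attorneys review every document the junior pass ultimately marks
    # privileged, plus a random sample of what the AI did NOT flag, to catch
    # work-product, joint-defense, and crime-fraud edge cases.
    sample_size = min(len(not_flagged), max(1, int(len(not_flagged) * qc_sample_rate)))
    qc_sample = random.sample(not_flagged, sample_size) if not_flagged else []

    return {"junior_review": flagged, "senior_qc_sample": qc_sample}
```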
A clawback order under FRE 502(d) provides a safety net — if a privileged document slips through, you can claw it back without waiver. Get a 502(d) order entered in every case where you're using AI-assisted review.
Cost Impact: What AI Review Actually Saves
Traditional manual review: $1.50-2.50 per document all-in, with contract attorneys at $35-50/hour reviewing 40-60 documents/hour, plus agency markup, project management, and second-pass QC on top of the raw reviewer rate. For a 500,000-document review, that's $750,000-1,250,000.
AI-assisted review (CAL workflow): Review 30,000-50,000 documents to train the model, then apply predictions to the remaining set with QC sampling. Effective cost: $0.15-0.40 per document. For the same 500,000-document review: $75,000-200,000.
That's a 75-90% cost reduction on document review alone. Add processing and hosting fees, and the total e-discovery cost drops 50-70%.
Platform costs are the smaller number. RelativityOne at $20/GB/month for a 100GB dataset = $2,000/month. Reviewer costs are where the savings hit hardest — you need 3-5 reviewers instead of 20-30, and they finish in weeks instead of months.
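If you want to sanity-check the arithmetic, the figures above drop into a few lines of Python. The rates are the illustrative numbers from this article, not vendor quotes.

```python
docs = 500_000

# Per-document rates from the figures above; actual rates vary by market and vendor.
manual_low, manual_high = docs * 1.50, docs * 2.50
cal_low, cal_high = docs * 0.15, docs * 0.40

hosting_monthly = 100 * 20   # 100 GB hosted at ~$20/GB/month (RelativityOne example)

print(f"Manual review:  ${manual_low:,.0f} - ${manual_high:,.0f}")
print(f"CAL review:     ${cal_low:,.0f} - ${cal_high:,.0f}")
print(f"Hosting:        ${hosting_monthly:,.0f}/month")

# Midpoint-to-midpoint savings on the review line item.
midpoint_savings = 1 - ((cal_low + cal_high) / 2) / ((manual_low + manual_high) / 2)
print(f"Review savings: {midpoint_savings:.0%}")   # ~86%
```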
For managing partners: AI-assisted discovery isn't a technology decision — it's a competitiveness decision. Clients are demanding it. Opposing counsel is using it. Courts expect it. The question isn't whether to adopt it; it's whether your current workflow is optimized.
The Bottom Line: Everlaw for mid-size firms prioritizing usability and modern AI features. Relativity (RelativityOne) for large firms and complex litigation needing the deepest feature set. Both platforms deliver 75-90% cost reduction on document review compared to manual approaches.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
