The ABA's own magazine (March-April 2026 issue) said what everyone in legal tech was thinking: "too often 'agentic' is just marketing dressing on a pumped-up chatbot." That's the American Bar Association — not a competitor, not a skeptic blog — calling out the hype in the industry's most widely read publication.

The timing makes it sharper. Bloomberg Law polling shows only 5% of attorneys have actually used an AI agent. Not 5% have heard of agents. Five percent have used one. Meanwhile, every legal tech vendor on the planet has slapped "agentic" on their product page. The gap between marketing claims and market reality has never been wider.


What the ABA actually said about agentic AI

The ABA Journal's March-April 2026 coverage didn't dismiss agentic AI — it drew a line between real agents and fake ones. The core argument: vendors are rebranding existing chatbot features as "agentic" to ride the hype cycle, and most lawyers can't tell the difference.

The article pointed to a pattern that's been obvious to anyone paying attention. A legal AI tool that takes a prompt, generates a response, and waits for the next prompt is a chatbot. Calling it an "agent" because you added a workflow template doesn't change what it is.

This matters because firms are making purchasing decisions based on vendor claims about agentic capabilities. If your firm pays for an "agentic AI platform" that's really a chatbot with better UX, you're not just wasting budget — you're falling behind firms that deployed actual multi-step AI agents.

The ABA's credibility makes this different from industry commentary. When the profession's own flagship association signals that most agentic claims are overblown, managing partners should listen.

The 5% adoption stat and what it reveals

Bloomberg Law's polling data is the most telling number in legal AI right now: 5% of attorneys have used an AI agent. Contrast that with Clio's finding that 79% of lawyers use AI daily for basic tasks.

That 74-point gap tells the whole story. The legal profession has broadly adopted AI for simple tasks — drafting emails, summarizing documents, basic research questions. But almost nobody has crossed the threshold into agentic workflows: AI that autonomously executes multi-step tasks, makes decisions, and delivers completed work product.

Why the gap? Three reasons:

Cost. Real agentic platforms (Harvey, CoCounsel Deep Research, Lexis+ Protege) require enterprise subscriptions. Solo practitioners and small firms — the majority of the profession — can't justify the spend.

Complexity. Building custom agents on Harvey's Agent Builder requires understanding your workflows well enough to codify them. Most firms haven't done that mapping work.

Trust. Lawyers are trained to verify everything. Handing a multi-step task to an AI agent and trusting it to make good decisions at each step goes against every instinct in legal training. Until firms build confidence through controlled deployments, the trust barrier stays high.

How to tell real agents from chatbots wearing costumes

The ABA's warning is only useful if you can apply it. Here's a practical framework for evaluating whether a legal AI tool is genuinely agentic:

Test 1: Multi-step autonomy. Give the tool a complex task that requires multiple steps. Does it break the task down, execute each step, and deliver a completed result? Or does it answer your first prompt and wait? Real agents don't need you to press enter between each step.

Test 2: Adaptive reasoning. Halfway through a task, introduce new information that changes the analysis. Does the tool adjust its approach? Or does it continue on its original path? Real agents re-evaluate when conditions change.

Test 3: Tool usage. Does the AI call external tools — search databases, pull documents, run calculations, validate citations — as part of its workflow? Or does it only generate text? Real agents use tools. Chatbots generate responses.

Test 4: Audit trail. Can you see every reasoning step, decision point, and tool call the AI made? Real agentic platforms provide detailed logs. Chatbots give you input and output with a black box in between.

Test 5: Scale response. Give it a task involving 100 documents instead of one. Do performance and quality hold? Real agents are built for volume. Chatbots degrade as complexity increases.

If a vendor's "agentic AI" fails three or more of these tests, you're looking at a chatbot with a marketing upgrade.
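For firms evaluating several vendors at once, the five tests above can be tracked as a simple scorecard. The sketch below is illustrative only — the test names and the "three or more failures" rule mirror the framework in this article, while the function and data shapes are assumptions, not any platform's API:

```python
# Hypothetical scorecard for the five-test framework described above.
# The "fail three or more tests -> chatbot" rule comes from the article;
# everything else (names, data shapes) is illustrative.

TESTS = (
    "multi_step_autonomy",
    "adaptive_reasoning",
    "tool_usage",
    "audit_trail",
    "scale_response",
)

def classify(results: dict[str, bool]) -> str:
    """Classify a tool from per-test pass/fail results.

    A test missing from `results` counts as a failure, since a vendor
    that can't demonstrate a capability shouldn't get credit for it.
    """
    failures = sum(1 for test in TESTS if not results.get(test, False))
    return "chatbot" if failures >= 3 else "agent"

# Example: a tool that only answers single prompts and generates text.
chatbot_like = {
    "multi_step_autonomy": False,
    "adaptive_reasoning": False,
    "tool_usage": False,
    "audit_trail": True,
    "scale_response": False,
}
print(classify(chatbot_like))  # chatbot
```

The "count missing tests as failures" choice puts the burden of proof on the vendor, which matches the advice later in this piece: if they can't demonstrate it on your data, don't credit the claim.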

Which platforms pass the ABA's test

Applying the framework above to the major legal AI platforms:

Harvey Agent Builder — passes all five tests. Multi-step autonomy across 700,000+ daily tasks, adaptive reasoning in custom agents, tool integration, detailed audit trails, and scale to 25,000 custom agents across 1,300 organizations. Verdict: real agent.

CoCounsel Deep Research — passes all five. Multi-agent chains using multiple models (OpenAI + Google + Anthropic), tool usage across the Westlaw database, adaptive research strategies, and detailed reasoning logs. Verdict: real agent.

Lexis+ Protege — passes four of five. 300+ pre-built workflows with GPT-5 and Claude access, Lexis+ integration, and audit capabilities. Customization is more limited than Harvey's Agent Builder, but the workflows are genuinely multi-step. Verdict: real agent with guardrails.

DISCO Cecilia — passes all five for document review specifically. Autonomous multi-step review, adaptive relevance analysis, native e-discovery tool integration, audit trails, and built for million-document scale. Verdict: real agent for discovery.

Most other "agentic" legal AI tools — fail two or more tests. If the vendor can't demonstrate autonomous multi-step execution on your actual data, it's a chatbot. The ABA was talking about these products.

What managing partners should do with this information

The ABA's reality check is a gift. It gives managing partners cover to be skeptical and a framework to demand proof.

Here's the action plan:

Audit your current AI tools. Run the five-test framework against every AI product your firm uses or is evaluating. How many are genuinely agentic vs. rebranded chatbots? You might find you're paying for capabilities you don't actually have.

Demand demos on your data. Any vendor claiming agentic capabilities should demonstrate multi-step autonomous execution on your firm's actual documents — not canned demo datasets. If they can't or won't, that tells you everything.

Separate your AI budget into two buckets. Chatbot-tier tools (email drafting, basic summarization, simple research) are commodity features — don't overpay. Agentic platforms (Harvey, CoCounsel, Protege, Cecilia) are infrastructure investments — evaluate them as such.

Join the 5%. If your firm hasn't deployed a real AI agent yet, you're in the 95% majority. That's comfortable today. It won't be comfortable in 18 months when competitors who invested early have compounding advantages in speed, cost, and institutional AI knowledge.

The ABA didn't say agentic AI is fake. It said most claims about it are fake. The distinction matters: the real thing is genuinely transformative for firms that adopt it with open eyes.

The Bottom Line: The ABA confirmed what smart firms already suspected — most 'agentic' claims are marketing hype, but the 5% of firms using real AI agents are building advantages that compound every quarter.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.