A&O Shearman didn't just buy an AI license and send a firm-wide email. They partnered with Harvey to deploy agentic AI across four high-value practice areas — antitrust, cybersecurity, fund formation, and loan review — with a structured implementation that's become the playbook for BigLaw AI adoption. Meanwhile, Linklaters built a 20-person dedicated AI team to drive adoption across the firm.
These aren't experiments. They're production deployments at two of the world's largest law firms, handling real client work. Here's what they figured out — and what your firm can steal from their approach.
A&O Shearman's four-practice deployment with Harvey
A&O Shearman chose four practice areas for their Harvey deployment, and the selection wasn't random. Each area shares key characteristics: high document volume, repeatable analysis patterns, and significant time cost under traditional workflows.
Antitrust screening. Agents review transaction documents and flag potential competition law issues across multiple jurisdictions. Before AI, this required teams of associates to manually check each deal against regulatory thresholds in the EU, UK, US, and other jurisdictions. Harvey agents run the jurisdictional analyses in parallel.
Cybersecurity compliance. Agents map contractual obligations against regulatory frameworks — GDPR, state privacy laws, SEC disclosure rules, NYDFS requirements. The complexity isn't any single regulation; it's tracking obligations across dozens of overlapping frameworks simultaneously.
Fund formation. Agents review limited partnership agreements, compare terms against market standards, and flag deviations. For a firm that closes hundreds of fund deals annually, the institutional knowledge encoded in these agents — what's market, what's not, what triggers negotiation — is enormous.
Loan review. Agents analyze credit agreements, identify non-standard covenants, and benchmark terms against the firm's database of prior deals. This is Harvey's sweet spot: high-volume document analysis with clear comparison criteria.
The pattern across all four: AI handles the volume analysis, lawyers handle the judgment calls. A&O Shearman didn't try to replace lawyers. They restructured how associates spend their time.
Linklaters' 20-person AI team: the organizational model
Linklaters took a different but complementary approach: they built a dedicated 20-person AI team that sits between the technology function and the practice groups.
This team isn't IT. It's not a bunch of engineers writing code. It's a mix of practice-group lawyers who understand the work and technologists who understand the tools. Their job: identify high-value workflows, build and validate AI agents, train lawyers on the tools, and monitor quality.
The organizational insight is critical. AI adoption fails when it's driven by IT alone (they don't understand the legal workflows) or by practice groups alone (they don't understand the technology's capabilities and limitations). Linklaters' hybrid team bridges both sides.
For firms considering their own AI deployment, the staffing question matters more than the technology question. Harvey, CoCounsel, and Protege are all capable platforms. The differentiator is whether your firm has people who can translate legal workflows into AI agent specifications — and monitor the results.
Linklaters reportedly invested millions in AI infrastructure before seeing ROI. That's a capital commitment most firms can't or won't make. But the model scales down: even a 3-person team (one lawyer, one technologist, one project manager) can drive meaningful adoption at a mid-size firm.
Implementation lessons from BigLaw's early adopters
Across A&O Shearman, Linklaters, and other early-adopting firms, five implementation lessons have emerged:
Lesson 1: Start with one workflow, not a firm-wide rollout. A&O Shearman didn't deploy Harvey everywhere at once. They picked four practice areas with clear use cases. Each deployment was a controlled experiment that had to prove ROI before expanding.
Lesson 2: Partner sponsorship is non-negotiable. AI deployments that start in IT die in IT. Every successful deployment had a senior partner champion who understood the business case, authorized the budget, and pushed adoption through the practice group.
Lesson 3: Measure time savings in hours, not percentages. "30% more efficient" means nothing to a managing partner. "This agent saves 40 associate hours per deal on due diligence" is a number they can multiply by the billing rate and see the ROI.
Lesson 4: Build feedback loops from day one. The first version of any AI agent is the worst version. Lawyers need a fast, low-friction way to flag errors, suggest improvements, and refine the agent's parameters. A&O Shearman's agents improve with every deal because lawyers feed corrections back into the system.
Lesson 5: Governance before deployment. Clio data shows 53% of firms have no AI policy. BigLaw firms learned early that deploying AI agents without written policies — supervision protocols, client disclosure rules, data handling procedures — creates unacceptable risk. Build the governance framework first.
The economics: what BigLaw's deployment actually costs
Nobody publishes exact numbers, but here's what the market signals suggest for a large-firm Harvey deployment:
Platform licensing: $150-300+ per user per month for Harvey enterprise access. For a 500-lawyer firm deploying to 200 users, that's $360,000-720,000 annually in platform costs alone.
Internal team: A dedicated AI team, even one smaller than Linklaters' 20 people, costs $500,000 to $2M+ annually in salaries, depending on size and seniority. A&O Shearman and Linklaters both invested heavily in internal capability.
Agent development: Building and validating custom agents takes lawyer time. Budget 200-400 hours to develop, test, and refine a production-ready agent for a complex workflow. At partner billing rates, that's significant internal investment.
Training and change management: Getting 200 lawyers to actually use the tools requires training programs, office hours, champion networks, and ongoing support. Budget $50,000-150,000 for the rollout phase.
Total first-year investment for a large firm: $1-3M+. The ROI case works when you're displacing enough associate hours, reducing contract attorney costs, or winning new work based on speed and capabilities.
The math is different for mid-size firms. CoCounsel and Protege bundle with research platforms, reducing the incremental cost. A 100-lawyer firm might deploy meaningful agentic AI capabilities for $200,000-500,000 in the first year.
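As a back-of-envelope check on the large-firm numbers above, the cost components and the break-even question ("how many associate hours must the agents displace to pay for themselves?") can be sketched out. The midpoint figures come from the ranges in this section; the $600/hour blended associate rate is an illustrative assumption, not a reported number.

```python
# Back-of-envelope ROI model using the ranges cited above.
# The blended billing rate is an illustrative assumption.

def first_year_cost(platform: int, team: int, training: int, agents: int = 0) -> int:
    """Sum the first-year cost components (USD)."""
    return platform + team + training + agents

def breakeven_hours(total_cost: float, blended_rate: float) -> float:
    """Associate hours that must be displaced to cover the cost."""
    return total_cost / blended_rate

# Midpoints of the ranges in this section: platform $360K-720K,
# internal team $500K-2M, training $50K-150K.
cost = first_year_cost(platform=540_000, team=1_250_000, training=100_000)
print(f"First-year cost: ${cost:,.0f}")             # $1,890,000

# Assumed blended associate billing rate of $600/hour.
hours = breakeven_hours(cost, blended_rate=600)
print(f"Break-even: {hours:,.0f} associate hours")  # 3,150 hours
```

Under these assumptions, an agent saving 40 associate hours per deal breaks even after roughly 80 deals, which is why high-volume workflows like loan review and due diligence are the natural entry points.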
What mid-size firms can steal from BigLaw's playbook
You don't need A&O Shearman's budget to apply their lessons. Here's the scaled-down playbook:
Pick one high-volume workflow. Contract review and due diligence are the most proven use cases. Choose the workflow where you spend the most associate hours on repeatable analysis — that's your highest-ROI entry point.
Assign a workflow owner. You don't need 20 people. You need one senior associate or junior partner who understands the workflow deeply and will own the AI agent's development and quality. Give them 20% of their time for this.
Use the platform that matches your stack. If you're on Westlaw, start with CoCounsel. If you're on Lexis+, start with Protege. If you want custom agents, evaluate Harvey. Don't overcomplicate the platform decision — your existing research tools are the natural starting point.
Set a 90-day pilot. Deploy the AI agent on a defined set of matters. Measure time savings per matter. Collect lawyer feedback weekly. Refine the agent based on what breaks. At 90 days, you'll have the data to justify expansion or to cut your losses.
Write the governance docs first. Before any agent touches client data, have a written AI policy, supervision protocol, and client disclosure template. It takes one afternoon with a committee. Not having it creates real liability exposure.
The firms that wait for agentic AI to be "mature" or "proven" are watching A&O Shearman, Linklaters, and their competitors build compounding advantages. The technology is ready. The question is whether your firm is.
The Bottom Line: A&O Shearman proved agentic AI works at BigLaw scale across four practice areas — the playbook (start narrow, assign ownership, build governance, measure hours saved) scales down to any firm size.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
