85% of corporate legal departments now maintain dedicated AI oversight, yet fewer than 20% have a formal AI Center of Excellence. That gap is where most departments stall — they've adopted tools without building the organizational structure to govern, scale, and measure them.
An AI Center of Excellence isn't a committee that meets quarterly to discuss best practices. It's an operational unit with budget authority, defined roles, and measurable outcomes. The legal departments pulling ahead in 2026 built theirs 12-18 months ago. The ones starting now can still catch up — but only if they skip the typical corporate playbook and build for speed.
What an AI Center of Excellence Actually Does in a Legal Department
An AI CoE coordinates the development, governance, and use of artificial intelligence across the legal function. In practice, it serves four roles:

- Strategy — deciding which AI tools to adopt, which use cases to prioritize, and how AI fits into the department's three-year plan.
- Governance — setting policies on data privacy, model quality, ethical use, and regulatory compliance for every AI tool the department touches.
- Improvement — measuring AI performance, identifying new use cases, and optimizing existing deployments.
- Support — training attorneys on AI tools, troubleshooting adoption issues, and serving as the internal help desk for AI questions.

Without a CoE, these functions get scattered across whoever volunteered or got voluntold. The result is inconsistent policies, duplicated tool subscriptions, and AI initiatives that launch with fanfare and die quietly six months later.
Two Models: Hub-and-Spoke vs. Federated
Most legal AI CoEs follow one of two structural models.

Hub-and-spoke works best for departments with fewer than 30 attorneys. A central team of 2-4 people defines standards, selects tools, manages vendor relationships, and provides training. Practice groups and business units consume AI services through this central hub. Governance stays tight, adoption stays consistent, and you avoid the problem of six different teams buying six different AI research tools.

Federated works for larger departments (30+ attorneys) or organizations where legal operates across multiple business units. The central CoE sets governance frameworks, security standards, and approved vendor lists. Divisional AI leads within each practice group or business unit customize implementation for their specific workflows. Governance remains centralized; execution is distributed.

The trap to avoid: starting with a federated model when you should start with hub-and-spoke. Decentralization before you have strong central governance creates chaos. Build the hub first. Federate when you have the governance muscle to maintain standards across distributed teams.
Team Structure and Budget: What It Actually Takes
A legal AI CoE doesn't require a massive headcount. The minimum viable team for a mid-size department (10-30 attorneys) looks like this:

- AI CoE Lead (1 FTE) — typically a senior legal ops professional or a tech-forward attorney. Owns strategy and vendor relationships, and reports to the GC.
- Data/Tech Analyst (1 FTE or shared with IT) — handles integrations, data quality, security reviews, and technical vendor evaluation.
- Training and Adoption Lead (0.5 FTE) — often a paralegal or legal ops coordinator who runs training programs and monitors adoption metrics.
- Governance Committee (not FTEs — 3-5 stakeholders who meet monthly) — the GC, a senior litigator, a transactional attorney, someone from compliance, and someone from IT security.

Annual budget range for a mid-size CoE: $300,000-$600,000, including 1.5-2.5 FTEs ($200,000-$350,000 fully loaded), AI tool subscriptions ($80,000-$200,000), and training/change management ($20,000-$50,000). That's less than what most departments spend on a single AmLaw 100 firm relationship.
Governance Framework: The Non-Negotiable Policies
Every legal AI CoE needs six governance policies from day one.

1. Approved Tool List — which AI tools attorneys can use, which are under evaluation, and which are prohibited. Update quarterly.
2. Data Classification Policy — what client data, privileged information, and confidential material can and cannot be entered into AI systems. This is the policy that prevents your worst-case scenario.
3. Output Verification Standard — requirements for human review of AI-generated work product. Define what "reviewed" means — not just glanced at, but substantively verified for accuracy, completeness, and legal correctness.
4. Vendor Security Requirements — minimum standards for AI vendors, including SOC 2 Type II compliance, data residency, encryption standards, and data retention/deletion policies.
5. Ethical Use Guidelines — boundaries on AI use cases, bias monitoring requirements, and escalation procedures when AI output raises ethical concerns.
6. Incident Response Plan — what happens when an AI tool produces incorrect output that reaches a client, court, or opposing party. Who gets notified, how you document it, and what remediation steps follow.

These six policies take 4-6 weeks to draft and approve. Don't wait for perfection — publish v1, iterate quarterly, and enforce from day one.
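The Approved Tool List lends itself to a machine-readable form that IT can enforce at the network or SSO layer. A minimal sketch in Python, where the tool names, status labels, and default-deny behavior are illustrative assumptions, not a prescribed registry:

```python
# Hypothetical approved-tool registry for Policy 1. Tool names and
# statuses below are placeholders, not recommendations.
APPROVED_TOOLS = {
    "ContractReviewTool": "approved",
    "ResearchAssistantX": "evaluation",
    "GenericChatbot": "prohibited",
}

def check_tool(name: str) -> str:
    """Return a tool's governance status.

    Unknown tools default to 'prohibited', so anything not yet
    reviewed by the CoE requires explicit approval before use.
    """
    return APPROVED_TOOLS.get(name, "prohibited")

print(check_tool("ContractReviewTool"))  # approved
print(check_tool("SomeNewAITool"))       # prohibited: not in registry
```

The design choice worth copying is the default-deny fallback: a tool the CoE hasn't evaluated is treated as prohibited until someone asks, which is exactly the escalation path the policy is meant to create.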
Metrics That Prove the CoE Is Working
The CoE that can't quantify its value loses budget in the next cycle. Track these metrics from launch:

- Adoption rate — percentage of attorneys actively using approved AI tools monthly. Target: 70% within 6 months.
- Time savings per use case — measure average minutes for key tasks (contract review, research memo, document summarization) before and after AI deployment.
- Cost avoidance — outside counsel spend displaced by AI-enabled in-house work. Track this quarterly and report it in dollars.
- Governance compliance — percentage of AI usage that follows approved policies. Audit monthly. 100% is unrealistic; 90%+ is the target.
- Tool utilization — are you using 80%+ of the features you're paying for, or are expensive platforms being used as glorified search engines? Track feature adoption by tool.
- Incident rate — number of AI-related errors, near-misses, or policy violations per quarter. This should trend down over time.

Report these metrics to the GC monthly and to the C-suite quarterly. The departments that get expanded AI budgets are the ones that show the board a dashboard with hard numbers, not a slide deck with anecdotes.
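The core calculations behind that dashboard are simple enough to sketch. The snapshot below is a minimal illustration in Python; the field names, sample figures, and target thresholds (70% adoption, 90% compliance, from the list above) are assumptions for the sketch, not a standard reporting schema:

```python
from dataclasses import dataclass

@dataclass
class CoEMetricsSnapshot:
    """One month of illustrative CoE metrics. All values are sample data."""
    active_users: int      # attorneys who used an approved AI tool this month
    total_attorneys: int
    minutes_before: float  # avg minutes per key task, pre-AI baseline
    minutes_after: float   # avg minutes per key task, with AI
    compliant_uses: int    # logged AI uses that followed approved policies
    total_uses: int

    @property
    def adoption_rate(self) -> float:
        return self.active_users / self.total_attorneys

    @property
    def time_savings_pct(self) -> float:
        return 1 - self.minutes_after / self.minutes_before

    @property
    def compliance_rate(self) -> float:
        return self.compliant_uses / self.total_uses

    def meets_targets(self) -> dict:
        # Thresholds taken from the targets stated in the article.
        return {
            "adoption": self.adoption_rate >= 0.70,
            "compliance": self.compliance_rate >= 0.90,
        }

m = CoEMetricsSnapshot(active_users=18, total_attorneys=24,
                       minutes_before=90, minutes_after=35,
                       compliant_uses=230, total_uses=250)
print(f"adoption {m.adoption_rate:.0%}, "
      f"time saved {m.time_savings_pct:.0%}, "
      f"compliance {m.compliance_rate:.0%}")
```

Even a toy version like this forces the discipline the section argues for: every metric has a numerator, a denominator, and a target, which is what separates a dashboard from an anecdote.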
The Bottom Line: An AI Center of Excellence isn't overhead — it's the infrastructure that turns scattered AI experiments into a governed, measurable capability. Start with a hub-and-spoke model, 1.5-2.5 FTEs, and a $300K-$600K annual budget. Establish the six non-negotiable governance policies in the first six weeks. Measure adoption, time savings, cost avoidance, and compliance from day one. The legal departments that built their CoE in 2024-2025 are now operating AI at scale. The ones that start in 2026 can still catch up — but only with a structure that prioritizes execution speed over committee consensus.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
