GPT-5.5 API pricing for law firms lands at $5/M input + $30/M output for standard, $30/M input + $180/M output for Pro per OpenAI's API pricing page as of April 23, 2026. Cached input drops to $0.50/M (90% off). Batch API runs at 50% off. Those are the sticker prices. The real question for law firm procurement: what does it cost per matter, per attorney, per month against realistic legal workloads — and how does that compare to ChatGPT Business at $20-25/user/month or Microsoft 365 Copilot at $30/user/month? This spoke models the math against typical legal usage profiles, with practical procurement recommendations by firm size.


The published pricing structure and what it means in practice

Per OpenAI's API pricing page and the GPT-5.5 launch announcement, the pricing structure as of April 23, 2026:

- GPT-5.5 standard: $5/M input tokens, $30/M output tokens. Cached input drops to $0.50/M (90% off). Batch API runs at 50% off both input and output. Above 272K tokens in a single context, input pricing doubles and output runs at 1.5x.
- GPT-5.5 Pro: $30/M input tokens, $180/M output tokens. Pro is the high-effort variant: it consumes more compute per query and produces more thoroughly validated outputs. Cached input and batch discounts apply at the same percentages.
- ChatGPT Plus: $20/user/month flat per ChatGPT pricing. Gets GPT-5.5 standard with usage caps.
- ChatGPT Pro: $200/user/month flat. Gets GPT-5.5 Pro plus other Pro features.
- ChatGPT Business: $25/user/month monthly or $20/user/month annual with a 2-user minimum per OpenAI Business pricing. Gets GPT-5.5 standard with admin controls and explicit data-handling commitments.
- ChatGPT Enterprise: quote-only per OpenAI Business pricing. Custom contract paper, private hosting, org-wide controls.
- Microsoft 365 Copilot: $30/user/month per Microsoft enterprise pricing. Embeds GPT-5.5 inside Word, Outlook, and Teams.

The practical implication: API access, ChatGPT Business, and Microsoft Copilot are three separate procurement tracks with overlapping access to the same model. Different SLAs, different audit trails, different version-update cadence. Most firms running multiple tracks simultaneously do so for different use cases (API for legal-tech engineering, Business for attorney chat, Copilot for inline document work).

Three reference workloads with token counts and per-query costs:

Routine legal research query (70/30 input/output split): 7,000 input tokens (the prior conversation context, the documents pasted in, the question) and 3,000 output tokens (the answer, the citations, the reasoning trace). At GPT-5.5 standard rates: $5/M × 7K = $0.035 input + $30/M × 3K = $0.090 output = $0.125 per query. On 5,000 queries/month firm-wide: $625/month. On 50,000 queries: $6,250/month.

Memo drafting query (30/70 input/output split, output-heavy): 3,000 input tokens, 7,000 output tokens. At standard rates: $0.015 input + $0.210 output = $0.225 per query. On 5,000 queries: $1,125/month. On 50,000 queries: $11,250/month.

1M-context discovery review (per the 1M context for litigation discovery spoke): 430,000 input tokens (full 600-document production), 5,000 output tokens (structured summary). At standard rates: $2.150 input + $0.150 output = $2.30 per query. Subsequent queries against the same production at the cached input rate ($0.50/M): $0.215 input + $0.150 output = $0.365 per query. Weekly associate spend on a single discovery production: $9-$13 across 30 queries. (These figures use the linear input rate; a 430K context crosses the 272K long-context threshold covered below, so actual spend on this workload runs higher.)
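The per-query arithmetic above can be sketched as a small calculator. Rates are the published standard and cached figures; as in the figures above, the long-context premium beyond 272K tokens is deliberately left out here:

```python
# Per-query cost at GPT-5.5 standard rates ($5/M input, $30/M output),
# the cached-input rate ($0.50/M), and the 6x Pro multiplier ($30/$180).
# Simplification: ignores the long-context premium above 272K tokens.

INPUT_RATE = 5.00 / 1_000_000      # $/token, standard input
OUTPUT_RATE = 30.00 / 1_000_000    # $/token, standard output
CACHED_INPUT_RATE = 0.50 / 1_000_000
PRO_MULTIPLIER = 6                 # Pro cache/batch discounts apply at the same percentages

def query_cost(input_tokens, output_tokens, cached=False, pro=False):
    """Dollar cost of a single query."""
    in_rate = CACHED_INPUT_RATE if cached else INPUT_RATE
    cost = input_tokens * in_rate + output_tokens * OUTPUT_RATE
    return cost * (PRO_MULTIPLIER if pro else 1)

print(round(query_cost(7_000, 3_000), 3))                 # routine research: $0.125
print(round(query_cost(3_000, 7_000), 3))                 # memo drafting: $0.225
print(round(query_cost(430_000, 5_000), 3))               # discovery first pass: $2.30
print(round(query_cost(430_000, 5_000, cached=True), 3))  # cached re-query: $0.365
print(round(query_cost(7_000, 3_000, pro=True), 3))       # routine query on Pro: $0.75
```

Multiplying any of these by monthly query volume reproduces the firm-wide figures quoted in this section.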

GPT-5.5 Pro at $30/$180 multiplies per-query costs by six. Routine query: $0.75. Memo drafting: $1.35. Discovery review: $13.80. Pro pricing is meaningful enough that firms should track Pro usage separately from standard.

The second-order economics: the cached input rate ($0.50/M, 90% off) is the biggest savings lever on repetitive workflows. Firms that build tooling to take advantage of caching capture meaningful cost reduction; firms that don't pay full input rates on every query.

Firm-size cost comparison: API vs ChatGPT Business vs Microsoft Copilot

25-attorney mid-market firm running 200 queries per attorney per month (5,000 firm-wide):

- API access (GPT-5.5 standard): ~$625/month at the typical 70/30 split.
- ChatGPT Business at $20/user/month annual: $500/month, plus admin controls and explicit data-handling commitments.
- ChatGPT Business at $25/user/month monthly: $625/month for monthly billing flexibility.
- Microsoft 365 Copilot at $30/user/month: $750/month, plus inline document integration in Word, Outlook, and Teams.

For most 25-attorney firms, ChatGPT Business at the $20/user/month annual rate covers the use case at the lowest cost. Note the direction of the consumption math: at the routine 70/30 mix ($0.125/query), the break-even against a $20 seat is roughly 160 queries per attorney per month, so the API is cheaper only at lower volumes; at the modeled 200 queries the flat seat wins, and the gap widens as volume grows. Microsoft Copilot at $30/user/month justifies the premium when firms are already deeply invested in M365 and the inline integration matters operationally.
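The API-vs-seat break-even can be computed directly from the per-query figures above. A simplification: it assumes every query follows the routine 70/30 mix and ignores any Business usage caps:

```python
# Break-even between API consumption pricing and a flat per-seat
# subscription, using the routine-query cost ($0.125) and the
# ChatGPT Business annual rate ($20/user/month) from this section.
# Simplification: every query is assumed to be the 70/30 routine mix.

ROUTINE_QUERY_COST = 0.125   # $/query at GPT-5.5 standard rates
BUSINESS_SEAT = 20.00        # $/user/month, annual billing

break_even = BUSINESS_SEAT / ROUTINE_QUERY_COST
print(break_even)  # queries/attorney/month where the two tracks cost the same

# At the modeled 200 queries/attorney/month, consumption exceeds the seat:
api_per_attorney = 200 * ROUTINE_QUERY_COST
print(api_per_attorney)  # $/attorney/month on the API at that volume
```

The same two-line calculation scales to any firm size; only the per-attorney volume and the workload mix change the answer.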

100-attorney upper-mid-market firm running the same per-attorney volume (20,000 queries firm-wide):

- API access: ~$2,500/month standard, plus engineering investment.
- ChatGPT Business annual: $2,000/month.
- Microsoft Copilot: $3,000/month.

At this scale, the procurement math depends on workflow patterns. Litigation-heavy practices benefit from API access plus custom tooling (per the Codex CLI for legal-tech engineering spoke) — total cost including engineering investment lands around $4,500-$6,000/month firm-wide. Transactional practices typically prefer ChatGPT Business plus Microsoft Copilot for inline document work.

500-attorney AmLaw firm: Multi-track procurement is standard. API for legal-tech engineering, ChatGPT Enterprise (quote-only) for firm-wide chat, Microsoft Copilot for inline integration. Total spend lands in the $30,000-$60,000/month range across the firm depending on usage profile.

The hidden costs: Pro upgrades, cache misses, context overruns

Three pricing dynamics that bite firms running consumption-based deployment:

Pro upgrades on personal accounts. Associates who hit usage caps on ChatGPT Plus ($20/month) and upgrade themselves to Pro ($200/month for the $30/$180 model) burn firm reimbursement at six-times-standard rates without anyone in procurement tracking it. Mitigation: AI use policy specifies which workloads warrant Pro vs standard. Per the Pro vs standard upgrade spoke, most legal workloads don't warrant Pro.

Cache misses on repetitive workflows. GPT-5.5's cached input rate ($0.50/M, 90% off standard $5/M) only applies when the same prompt prefix is reused within a short window. Firms that don't structure their tooling to take advantage of caching pay full rates on every query, even when the context is repetitive. For matter-specific workflows where the same case file gets queried repeatedly, building cache-friendly prompt structure recovers 60-80% of input costs. The savings on a heavily-used matter can be hundreds of dollars per month.
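The cache-hit economics can be illustrated with a hypothetical matter file queried repeatedly against the same prompt prefix. The 400K-token case file and 30 queries/week are assumed numbers, and this is an idealized upper bound where every re-query after the first hits the cache; real workflows land lower, in the 60-80% recovery range cited above:

```python
# Input-cost comparison for a repeatedly-queried matter file:
# no caching vs an idealized cache where only the first query
# pays the full input rate. Case-file size and query volume are
# hypothetical illustration numbers, not figures from the pricing page.

INPUT_RATE = 5.00 / 1_000_000    # $/token, standard input
CACHED_RATE = 0.50 / 1_000_000   # $/token, cached input (90% off)

case_file_tokens = 400_000       # hypothetical matter file
queries_per_month = 120          # ~30 queries/week

uncached = case_file_tokens * INPUT_RATE * queries_per_month
cached = (case_file_tokens * INPUT_RATE                            # first query, full rate
          + case_file_tokens * CACHED_RATE * (queries_per_month - 1))  # rest hit the cache

print(round(uncached, 2))               # monthly input spend, no caching
print(round(cached, 2))                 # monthly input spend, ideal caching
print(round(1 - cached / uncached, 2))  # fraction of input cost recovered
```

Even discounted for realistic cache-miss rates, the gap between the two totals is the "hundreds of dollars per month" on a heavily-used matter.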

Context overruns at 272K threshold. Per OpenAI's API pricing, contexts above 272K tokens incur 2x input pricing and 1.5x output pricing. A naive 1M-context load above the threshold costs more than the linear extrapolation suggests. For megadoc workflows (per the 1M context for litigation discovery spoke), firms should structure loads to stay below 272K when feasible or budget the overrun premium when not.
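The published line ("above 272K, input 2x, output 1.5x") doesn't specify whether the premium reprices the whole context or only the tokens above the threshold. The sketch below assumes the conservative case for budgeting: the entire context is repriced once it crosses 272K. That assumption is mine, not OpenAI's documented behavior:

```python
# Budgeting sketch for the 272K long-context premium. Assumption
# (not confirmed by the pricing page): once input exceeds the
# threshold, the 2x input / 1.5x output multipliers apply to the
# whole request, the conservative case for budget planning.

INPUT_RATE = 5.00 / 1_000_000
OUTPUT_RATE = 30.00 / 1_000_000
THRESHOLD = 272_000

def long_context_cost(input_tokens, output_tokens):
    """Per-query cost with the assumed whole-context overrun premium."""
    in_mult, out_mult = (2.0, 1.5) if input_tokens > THRESHOLD else (1.0, 1.0)
    return (input_tokens * INPUT_RATE * in_mult
            + output_tokens * OUTPUT_RATE * out_mult)

# The 430K discovery load from earlier, premium-adjusted vs naive linear:
print(round(long_context_cost(430_000, 5_000), 3))            # with the premium
print(round(430_000 * INPUT_RATE + 5_000 * OUTPUT_RATE, 3))   # linear extrapolation
```

The spread between the two figures is the overrun premium a naive budget misses; splitting the same production into sub-272K loads avoids it entirely.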

The second-order budget management: monthly billing review by practice area catches usage drift before it compounds. Firms running consumption-based pricing should have someone — legal-ops lead, IT director, or finance — looking at the bill weekly during the first 90 days post-launch, monthly thereafter.

Procurement recommendations by firm size

Solo and small firms (1-10 attorneys): ChatGPT Business at $20/user/month annual is the default. For 5 attorneys, that's $100/month with admin controls and data-handling commitments. At the routine mix, raw API consumption would undercut the flat seat only below roughly 160 queries per attorney per month, and either way, don't try to manage API access at solo scale; the operational overhead swamps any savings.

Mid-market firms (10-100 attorneys): Mixed deployment is the right pattern. ChatGPT Business for general attorney chat use ($20/user/month annual), API access for legal-tech engineering (per the Codex CLI for legal-tech engineering spoke), Microsoft Copilot for firms heavily invested in M365. Total spend lands around $30-$50/attorney/month all-in.

BigLaw and AmLaw 100: Multi-track procurement. ChatGPT Enterprise (quote-only) for firm-wide chat, OpenAI API for legal-tech engineering teams, Microsoft Copilot for inline document work. Total spend depends heavily on usage profile. Firms with active Anthropic deals (per the Anthropic eating the legal stack analysis) typically run Claude Opus 4.7 alongside GPT-5.5 — the GPT-5.5 vs Claude Opus 4.7 comparison spoke covers the cross-vendor procurement decision.

The procurement floor across all sizes: don't deploy on consumer Plus or Pro tiers for privileged client work. The Heppner ruling (SDNY Feb 17, 2026) confirmed consumer-AI exchanges aren't privileged. Business or Enterprise tier minimum on either OpenAI or Anthropic for any privileged matter.

The Bottom Line: GPT-5.5 API pricing at $5/M input + $30/M output is competitive with ChatGPT Business for most legal use cases at standard rates. The Pro variant at $30/$180 is six times standard and warrants explicit policy controls. Cached input ($0.50/M) is the biggest savings lever on repetitive workflows; firms that don't structure tooling for caching pay full rates unnecessarily. For most mid-market firms, ChatGPT Business at $20/user/month annual is the default; the API makes sense at low per-attorney query volumes or for legal-tech engineering deployment, not as a cheaper substitute for seats at high volume.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.