Yes, lawyers can use ChatGPT -- but the conditions matter more than the permission. ABA Formal Opinion 512 governs AI use by attorneys and establishes clear guardrails: understand the tool's limitations, verify every output, protect client confidentiality, and disclose when required. The free and Plus tiers of ChatGPT present specific confidentiality risks because OpenAI may use conversations to train its models. Enterprise and Team tiers offer data protections that address most confidentiality concerns.

The practical answer is that ChatGPT is one tool on a spectrum. It's useful for brainstorming, drafting non-sensitive documents, and general research -- but it's not a legal research platform, and treating it like one is how attorneys end up sanctioned. The firms getting this right use ChatGPT for what it's good at and purpose-built legal AI tools for what they're good at.


What ABA 512 Says About ChatGPT and Similar Tools

ABA Formal Opinion 512 (2024) doesn't name ChatGPT specifically, but its framework applies directly. The opinion establishes four obligations for attorneys using any generative AI tool:

Competence: You must understand that ChatGPT can hallucinate -- it generates plausible-sounding text that may contain fabricated facts, non-existent cases, and incorrect legal conclusions. This isn't a bug; it's how the technology works. An attorney who doesn't understand this limitation is already failing the competence standard.

Confidentiality: Client information entered into AI tools may be stored, processed, or used for training. ABA 512 requires attorneys to evaluate the confidentiality protections of any AI platform before entering client data. This is where ChatGPT's tier structure becomes critical.

Verification: Every AI output must be independently verified before use. No exceptions. The opinion doesn't distinguish between expensive legal AI tools and free chatbots -- the verification obligation is universal.

Communication: Clients must be informed about AI use in appropriate contexts. This doesn't mean notifying a client every time you ask ChatGPT to rephrase a sentence, but it does mean informing clients when AI materially contributes to their legal work.

The Data Privacy Problem: Free vs. Enterprise

This is the issue most attorneys get wrong. ChatGPT's data handling varies dramatically by tier:

ChatGPT Free and Plus ($20/month): By default, OpenAI's terms allow the company to use your conversations to train and improve its models (users can opt out in the data controls settings, but the default permits training). If you enter client names, case details, strategy discussions, or privileged information, that data may be incorporated into OpenAI's training data. This is a confidentiality violation under Model Rule 1.6. Full stop.

ChatGPT Team ($25-30/user/month): OpenAI states it does not train on Team workspace data. This addresses the most acute confidentiality concern, but attorneys should still review the data processing terms and ensure they align with their ethical obligations.

ChatGPT Enterprise (custom pricing): Enterprise offers the strongest data protections -- no training on business data, SOC 2 compliance, data encryption at rest, and administrative controls. For firms that want to use ChatGPT for client-related work, Enterprise is the minimum acceptable tier from an ethics standpoint.

The bottom line: if you're using ChatGPT Free or Plus for anything involving client information, you're violating your confidentiality obligations. Upgrade to Team or Enterprise, or don't enter client data.

What You Can and Can't Do with ChatGPT

Safe uses (with verification):

- Brainstorming legal arguments and identifying research angles
- Drafting non-confidential correspondence and templates
- Summarizing publicly available documents
- Explaining complex concepts in plain language for client communications
- Generating first drafts of non-sensitive internal documents
- Organizing research notes and outlining arguments

Risky uses (require Enterprise tier + verification):

- Drafting documents that reference client matters
- Analyzing case-specific facts
- Preparing initial drafts of court filings
- Reviewing contracts with client-specific terms

Do not use ChatGPT for:

- Final legal research -- use purpose-built legal AI tools with citation verification
- Generating citations or case references without independent verification on Westlaw/Lexis
- Any task involving privileged attorney-client communications (unless Enterprise tier with appropriate safeguards)
- Anything you wouldn't show opposing counsel, because under Heppner, your ChatGPT conversations may be discoverable

ChatGPT is a general-purpose language model. It wasn't designed for legal research, and it shows. Stanford's 2025 study tested Lexis+ AI and Westlaw AI but not ChatGPT; still, independent testing consistently shows consumer chatbots hallucinate legal citations at rates exceeding 30%.

Purpose-built legal AI tools like Lexis+ AI, CoCounsel (Thomson Reuters), and Clearbrief operate differently. They're connected to verified legal databases, they can retrieve actual case law rather than generating text that looks like case law, and some (like Clearbrief) are architecturally incapable of hallucinating citations because they only reference verified sources.

That doesn't mean legal AI tools are perfect. Stanford found that the legal AI tools it tested still hallucinated in 17-33% of queries. But the gap between legal-specific AI and general-purpose chatbots is significant enough that tool selection itself is a competence issue. Using ChatGPT for legal research when Lexis+ AI is available is like using Google instead of Westlaw -- technically possible, but hard to justify as competent practice.

Setting Up ChatGPT Use at Your Firm

If you're a managing partner deciding whether to allow ChatGPT, here's the framework:

1. Choose the right tier. Enterprise for firms handling sensitive client matters. Team at minimum for any firm. Free and Plus should be blocked for work-related use.

2. Write a firm AI policy. Define approved uses, prohibited uses, verification requirements, and confidentiality protocols. ABA 512 requires that supervising attorneys ensure competent AI use -- a written policy is how you prove you did that.

3. Train everyone. Associates, paralegals, and staff need to understand what ChatGPT can and can't do. The biggest risk isn't the technology -- it's a team member who doesn't know the limitations pasting client data into the free tier.

4. Mandate verification. Make it firm policy that no AI output goes into a filing or client deliverable without verification against primary sources. Build this into your review workflow, not just your policy manual.

5. Pair with legal AI tools. ChatGPT is good at some things. Legal research isn't one of them. Equip your team with purpose-built tools for legal-specific tasks and use ChatGPT for the general-purpose work it handles well.

The Bottom Line: Lawyers can use ChatGPT legally and ethically, but only with the right tier (Enterprise or Team for client data), mandatory verification of all outputs, and a clear understanding that it's a general-purpose tool, not a legal research platform.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.