Yes, it's ethical to use AI in court filings — but only if you verify every output, understand the tool's limitations, and comply with applicable disclosure requirements. ABA Formal Opinion 512 (2024) confirmed that existing ethics rules apply to AI tools, and the duty of competence means you can't file AI-generated work you haven't independently verified.
The ethics framework isn't complicated. The same rules that require you to verify a junior associate's research apply to AI output. The tool changes; the obligation doesn't. What trips attorneys up is the speed — AI generates a 20-page brief in minutes, and the temptation to skip thorough verification is where sanctions, malpractice, and bar complaints originate.
ABA Formal Opinion 512: The Governing Framework
ABA Formal Opinion 512 (July 2024) is the definitive ethics guidance on AI in legal practice. It doesn't create new rules — it maps existing Model Rules to AI tools and makes the obligations explicit.
Rule 1.1 (Competence): You must understand the AI tool's capabilities and limitations before using it. That means knowing it can hallucinate citations, generate plausible-sounding but incorrect legal analysis, and produce outdated information. "I didn't know it could do that" isn't a defense.
Rule 1.6 (Confidentiality): Client information entered into AI tools must remain confidential. Entering client data into a consumer AI tool that uses inputs for model training is a confidentiality breach. Enterprise tools with data protection agreements are the baseline.
Rule 5.3 (Supervision): The attorney who files the document is responsible for its contents, regardless of who — or what — drafted it. AI output gets the same scrutiny as work from a first-year associate. More, actually, because AI hallucinates with complete confidence.
Verification: The Non-Negotiable Requirement
Every court that's sanctioned attorneys for AI-generated filings has identified the same failure: the attorney didn't verify the output. Steven Schwartz in *Mata v. Avianca* (S.D.N.Y. 2023) filed a brief with six fabricated case citations generated by ChatGPT. He didn't check any of them in Westlaw. The sanctions weren't for using AI — they were for filing unverified work product.
Verification means checking every citation against the primary source. Not just confirming the case exists, but confirming the holding matches how it's cited. AI hallucinations aren't always obvious — a real case name paired with a fabricated holding is harder to catch than a completely invented citation.
The verification standard isn't "spot check." It's "comprehensive review." If you wouldn't file a brief from a new associate without reading every citation, you don't file AI-assisted work without the same diligence. The time AI saves in drafting should be reinvested in verification, not pocketed as pure efficiency.
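One way to make that comprehensive review systematic is a simple checklist: pull every citation out of the draft and tick each one off only after reading it in Westlaw or Lexis. The sketch below is illustrative only, not a substitute for reading the cases. It assumes a plain-text draft named draft_motion.txt (a hypothetical filename) and uses a deliberately rough pattern that catches common federal reporter citations.

```python
import re
from pathlib import Path

# Deliberately rough pattern for "volume Reporter page" citations,
# e.g. "678 F. Supp. 3d 443" or "410 U.S. 113". A real workflow would
# use a dedicated citation parser; this only builds a human checklist.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                                          # volume
    r"(?:U\.S\.|S\.\s?Ct\.|F\.\s?Supp\.\s?(?:2d|3d)?|F\.\s?(?:2d|3d|4th)?)"  # common federal reporters
    r"\s+\d{1,5}\b"                                                          # first page
)

def build_verification_checklist(draft_path: str) -> list[str]:
    """Extract citation-like strings from a draft so each one can be
    read in Westlaw or Lexis before the document is filed."""
    text = Path(draft_path).read_text(encoding="utf-8")
    # De-duplicate while preserving the order citations appear in the draft.
    return list(dict.fromkeys(m.group(0) for m in CITATION_PATTERN.finditer(text)))

if __name__ == "__main__":
    for i, cite in enumerate(build_verification_checklist("draft_motion.txt"), start=1):
        print(f"[ ] {i}. {cite} -- confirm the case exists AND the holding matches how it's cited")
```

The point isn't automation. It's making sure no citation escapes human eyes before the document goes out the door.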
Disclosure Requirements by Jurisdiction
Whether you must disclose AI use depends entirely on where you're filing. There's no uniform national rule — the requirements vary by circuit, district, and sometimes individual judge.
Federal courts with standing orders requiring disclosure: Over 30 federal judges have issued standing orders requiring parties to disclose AI use in filings. Judge Brantley Starr (N.D. Tex.) was first, requiring certification that no AI-generated text was used without human verification. The Northern District of Texas, Eastern District of Texas, and several New York districts have the most comprehensive requirements.
State courts: California, Florida, New Jersey, and several other states have issued guidance or rules on AI disclosure. The trend is toward more disclosure, not less.
Where no rule exists: Even without a mandatory disclosure requirement, Rule 3.3 (candor to the tribunal) may require disclosure if AI use is material to the proceeding. If a judge asks whether AI was used and you don't disclose, that's a separate ethics violation.
What "Ethical Use" Looks Like in Practice
The attorneys using AI ethically in court filings follow a consistent workflow:
Step 1: Use enterprise-grade tools. CoCounsel, Lexis+ AI, or Claude/ChatGPT with enterprise licenses that protect client data. Never paste client information into free-tier consumer AI.
Step 2: Draft, don't delegate. Use AI for first-draft research memos, initial argument outlines, and citation gathering. The attorney shapes the legal strategy, identifies the arguments, and makes the judgment calls. AI accelerates the mechanical work.
Step 3: Verify independently. Every citation gets checked in Westlaw or Lexis. Every legal proposition gets confirmed against the primary source. Every factual claim gets verified against the record.
Step 4: Disclose where required. Check local rules, judge-specific standing orders, and state bar guidance. When in doubt, disclose. No attorney has been sanctioned for disclosing AI use; the sanctions cases involve attorneys who filed unverified output and then failed to come clean when the court asked.
Step 5: Document the process. Keep records of which tools were used, what prompts were given, and what verification steps were taken. This creates a defensible record if AI use is ever questioned.
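Opinion 512 doesn't prescribe a record-keeping format, but an append-only log that captures the tool, the prompts, and the verification steps for each filing is one lightweight way to handle Step 5. Here's a minimal sketch assuming a JSON Lines file; the field names, matter, tool, and reviewer values are all hypothetical placeholders.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_use_log.jsonl")  # hypothetical location: one JSON record per line

def log_ai_use(matter: str, tool: str, purpose: str,
               prompts: list[str], verification_steps: list[str],
               reviewer: str) -> None:
    """Append one record describing how AI was used on a filing and how
    the output was verified, so the process can be reconstructed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter": matter,
        "tool": tool,                          # e.g. "CoCounsel", "Lexis+ AI"
        "purpose": purpose,                    # e.g. "first-draft research memo"
        "prompts": prompts,
        "verification_steps": verification_steps,
        "reviewing_attorney": reviewer,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative entry only -- every value below is a placeholder.
log_ai_use(
    matter="Smith v. Jones, No. 24-cv-01234",
    tool="CoCounsel",
    purpose="Initial argument outline and citation gathering",
    prompts=["Outline the strongest arguments for a motion to dismiss under Rule 12(b)(6)."],
    verification_steps=[
        "Checked every citation in Westlaw against the primary source",
        "Confirmed each holding matches how it is cited in the draft",
    ],
    reviewer="A. Attorney",
)
```

Whatever the format, the goal is the same: if a judge or the bar ever asks how AI was used, you answer from a contemporaneous record rather than from memory.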
Where the Ethics Line Gets Blurry
The clear cases are easy. Using AI to draft a motion and verifying every citation? Ethical. Filing unverified AI output? Sanctionable. The gray areas are where most attorneys actually operate.
Research assistance vs. ghostwriting. Using AI to find relevant cases is no different from using Westlaw's AI-powered search. Using AI to write entire sections of a brief raises questions about whether the attorney exercised independent judgment. The ethical line is whether the attorney directed the analysis and made the strategic decisions, or simply accepted the AI's output.
Client consent. ABA Opinion 512 suggests that AI use should be disclosed to clients when it's material to the representation. If you're using AI to draft a $50M merger agreement, the client should know. If you're using AI to summarize deposition transcripts for internal purposes, the disclosure obligation is weaker.
Billing implications. Charging 5 hours for work that AI completed in 20 minutes raises Rule 1.5 (reasonable fees) concerns. The emerging consensus: bill for the value delivered, not the time AI saved. If AI-assisted research produces the same quality as 5 hours of manual research, a reasonable fee can reflect that output quality, but as AI becomes standard practice, clients will increasingly expect those time savings to be passed through.
The Bottom Line: Using AI in court filings is ethical if you verify every output, protect client confidentiality, comply with disclosure rules, and exercise the same independent judgment you'd apply to any work product — the tool is irrelevant, the professional obligations are absolute.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
