The difference between a lawyer who gets mediocre AI output and one who gets exceptional work product isn't the tool. It's the prompt. Most lawyers type 'write me a motion to dismiss' and wonder why the result sounds like a law school exam answer.

Prompt engineering for legal writing isn't about learning magic words. It's about giving the AI the same context you'd give a first-year associate -- jurisdiction, standard of review, key facts, desired tone, and the specific argument structure you want. Here's how to do it right.


The JIRAC Framework: Five Elements of a Strong Legal Prompt

Forget generic prompt templates. Legal writing needs a specific framework. We call it JIRAC: Jurisdiction, Issue, Role, Audience, Constraints. Every legal prompt should include: the jurisdiction and court (federal, state, which district), the specific legal issue, the role you want the AI to play (senior litigation associate, appellate specialist, transactional lawyer), who will read the output (judge, opposing counsel, client), and constraints (word count, citation format, tone). A prompt that includes all five elements produces dramatically better output than 'write a brief about personal jurisdiction.' The JIRAC framework takes 30 seconds to set up and saves 30 minutes of editing.
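For lawyers (or their technically inclined staff) who want to standardize this, the framework is easy to encode as a reusable template. A minimal sketch in Python -- the class name, field names, and example values are illustrative, not part of any real library:

```python
from dataclasses import dataclass

@dataclass
class JIRACPrompt:
    """Builds a legal-writing prompt from the five JIRAC elements.
    All names here are illustrative -- this is not a library API."""
    jurisdiction: str   # court and governing law
    issue: str          # the specific legal question
    role: str           # persona the AI should adopt
    audience: str       # who will read the output
    constraints: str    # length, citation format, tone

    def render(self, task: str) -> str:
        # Assemble the five elements ahead of the actual task so the
        # model sees its role and constraints before the instruction.
        return (
            f"You are a {self.role} writing for {self.audience}.\n"
            f"Jurisdiction: {self.jurisdiction}\n"
            f"Issue: {self.issue}\n"
            f"Constraints: {self.constraints}\n\n"
            f"Task: {task}"
        )

prompt = JIRACPrompt(
    jurisdiction="S.D.N.Y., applying Second Circuit law",
    issue="Specific personal jurisdiction over a foreign parent company",
    role="senior litigation associate",
    audience="a federal district judge",
    constraints="Under 1,500 words, Bluebook citations, formal tone",
).render("Draft the argument section of a motion to dismiss under Rule 12(b)(2).")
print(prompt)
```

The point is not the code itself but the discipline: a fill-in-the-blanks template makes it impossible to forget one of the five elements on a busy day.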

Claude vs ChatGPT: Different Tools for Different Writing Tasks

Claude is better for legal analysis, nuanced argumentation, and maintaining consistent tone across long documents. It handles complex reasoning chains well and produces output that reads like it was written by a careful lawyer. ChatGPT is better for creative brainstorming, generating multiple argument variations quickly, and shorter-form writing like demand letters and client emails. For research memos, use Claude. For brainstorming 10 angles of attack on a motion, use ChatGPT. For contract drafting, Claude's attention to consistency across clauses is superior. Neither tool should be trusted for citations -- verify every single one.

Five Prompts to Keep in Your Toolkit

Prompt 1: The Standard Setter. 'Before drafting, analyze this example of excellent legal writing from my firm and identify the style patterns, then apply them.' Feed it a brief you're proud of.

Prompt 2: The Devil's Advocate. 'Now argue the opposing position as aggressively as possible. What are the three strongest counterarguments?'

Prompt 3: The Simplifier. 'Rewrite this section so a CEO with no legal training understands it in one read.'

Prompt 4: The Citation Checker. 'List every legal citation in this draft. For each, state the full case name, court, year, and the specific proposition it supports.' Then verify independently.

Prompt 5: The Editor. 'Cut this by 30% without losing any legal arguments. Tighten the prose. Remove every unnecessary word.'
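These five prompts work best when they are saved somewhere reusable rather than retyped from memory. One way to do that, sketched in Python (the dictionary keys, the {draft} and {example} placeholders, and the helper function are all illustrative assumptions):

```python
# Reusable prompt templates keyed by purpose; {draft} and {example}
# are placeholders you fill with your own documents at run time.
LEGAL_PROMPTS = {
    "standard_setter": (
        "Before drafting, analyze this example of excellent legal writing "
        "from my firm and identify the style patterns, then apply them:\n\n{example}"
    ),
    "devils_advocate": (
        "Now argue the opposing position as aggressively as possible. "
        "What are the three strongest counterarguments?"
    ),
    "simplifier": (
        "Rewrite this section so a CEO with no legal training "
        "understands it in one read:\n\n{draft}"
    ),
    "citation_checker": (
        "List every legal citation in this draft. For each, state the full "
        "case name, court, year, and the specific proposition it supports:\n\n{draft}"
    ),
    "editor": (
        "Cut this by 30% without losing any legal arguments. Tighten the "
        "prose. Remove every unnecessary word:\n\n{draft}"
    ),
}

def build_prompt(name: str, **parts: str) -> str:
    """Fill a named template with the caller's draft or example text."""
    return LEGAL_PROMPTS[name].format(**parts)

print(build_prompt("simplifier", draft="The indemnitor shall hold harmless..."))
```

A plain text file or a snippets feature in your word processor accomplishes the same thing; what matters is that the wording stays consistent so you learn what each prompt reliably produces.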

Advanced Techniques: Chain-of-Thought and Iterative Refinement

Don't ask for the final product in one shot. Break complex documents into stages. First prompt: 'Outline the argument structure for a motion to compel, covering these four discovery disputes.' Review the outline. Second prompt: 'Draft Section II based on this outline, applying the proportionality framework from Rule 26(b)(1).' Third prompt: 'Now add the factual support from these deposition excerpts.' This iterative approach produces dramatically better output because you catch structural problems before investing in full drafts. The lawyers getting the best AI output treat it like a conversation, not a vending machine.
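The staged workflow above maps directly onto how chat-style model APIs actually work: each turn is appended to a shared message history, so later stages build on earlier answers instead of starting from scratch. A minimal sketch, assuming that message-list shape; the `send` function here is a stub standing in for whatever model client you use, and the bracketed placeholders are illustrative:

```python
# Iterative drafting as a running conversation. `send` is a stub --
# swap in your actual model client; it receives the full history
# so each stage can build on the previous one.
def send(history: list[dict]) -> str:
    """Stub for a chat-model call; returns a placeholder reply."""
    return f"[model reply to: {history[-1]['content'][:40]}...]"

history: list[dict] = []

def stage(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = send(history)
    history.append({"role": "assistant", "content": reply})
    return reply  # review this before moving to the next stage

outline = stage("Outline the argument structure for a motion to compel, "
                "covering these four discovery disputes: [list disputes].")
section = stage("Draft Section II based on this outline, applying the "
                "proportionality framework from Rule 26(b)(1).")
final = stage("Now add the factual support from these deposition excerpts: "
              "[paste excerpts].")
```

The review step between stages is the whole point: you catch a bad outline in 30 seconds instead of discovering it buried inside a ten-page draft.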

What AI Can't Do

AI can't do original legal research reliably. Both Claude and ChatGPT hallucinate citations. They invent case names, fabricate holdings, and create convincing-sounding cites to cases that don't exist. Always verify in Westlaw, Lexis, or Fastcase. AI also can't apply judgment about case strategy -- it doesn't know your client's risk tolerance, your relationship with the judge, or the dynamics with opposing counsel. And it can't catch factual errors in your source material. AI is a force multiplier for lawyers who already know what good looks like. It's a liability generator for those who don't.

The Bottom Line: Better prompts produce better legal writing. Period. Invest 30 minutes learning the JIRAC framework and iterative drafting approach, and you'll get AI output that's genuinely useful instead of generic. The tool isn't the bottleneck -- your instructions are.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.