The FTC's action against DoNotPay is the first federal enforcement case targeting an 'AI lawyer' product. DoNotPay marketed itself as the 'world's first robot lawyer' but never tested whether its AI actually performed at the level of a human attorney. The $193,000 settlement and consent order tell every legal tech company exactly where the line is on AI capability claims.
Background
DoNotPay launched in 2015 as a chatbot to help people fight parking tickets. It grew into a broader consumer legal services platform, eventually claiming it could handle everything from canceling subscriptions to fighting landlords to generating legal documents. The company raised millions in venture capital and positioned itself as a disruptive force in legal services.
The marketing was aggressive. DoNotPay called itself the 'world's first robot lawyer' and claimed its AI could produce 'perfectly valid legal documents' and fight legal battles on behalf of consumers. CEO Joshua Browder publicly challenged lawyers, at one point offering to use AI to argue a case in traffic court through an earpiece (the stunt was called off after state bar threats).
The FTC's Bureau of Consumer Protection investigated and found a gap between the marketing and reality. DoNotPay never hired attorneys to review its AI's legal output. It never tested whether the documents it generated were actually legally valid. It never verified that its AI performed at anything close to the level of a human lawyer. The claims were unsubstantiated.
What Happened
The FTC charged DoNotPay with deceptive trade practices under Section 5 of the FTC Act. The core allegation: the company marketed AI legal services using claims it couldn't back up. Calling your product a 'robot lawyer' that generates 'perfectly valid legal documents' is a testable claim, and DoNotPay never tested it.
The investigation revealed that DoNotPay's AI-generated legal documents contained errors and might not have been legally valid in the jurisdictions where consumers used them. Consumers relied on these documents for real legal matters, from demand letters to court filings, without knowing the AI's output had never been verified by a licensed attorney.
The Commission voted 5-0 to approve a consent order. DoNotPay agreed to pay $193,000 in monetary relief, notify affected consumers about the limitations of its AI-generated legal content, and stop advertising that its service performs like a real lawyer unless it has competent and reliable evidence to support those claims.
The Ruling
The consent order, finalized on January 16, 2025 (under FTC File No. 232-3042), imposed three key requirements. First, the $193,000 payment for consumer relief. Second, a prohibition on claiming the service performs like a licensed attorney without 'competent and reliable evidence' to back it up. Third, a requirement to notify consumers who had used the service about its limitations.
The FTC's reasoning was straightforward: if you tell consumers your AI does what a lawyer does, you need evidence that it actually does. This isn't a new legal standard. The FTC has always required companies to substantiate their advertising claims. The DoNotPay case simply applied that existing framework to AI legal services.
The unanimous 5-0 vote signaled that AI capability claims are a bipartisan enforcement priority at the FTC. This wasn't a close call or a split decision. The Commission agreed across ideological lines that unsubstantiated AI claims are deceptive.
Outcome: DoNotPay agreed to pay $193,000 in monetary relief, notify affected consumers, and cease advertising that its service performs like a real lawyer unless it can substantiate the claim. The consent order was finalized by a unanimous 5-0 FTC vote.
Why This Case Matters
This case draws a clear line for the entire legal tech industry. You can build AI tools for legal tasks. You can market them aggressively. But you can't claim your AI performs like a lawyer unless you've tested it and can prove it. The FTC doesn't care about your intentions or your technology's potential. It cares about what you told consumers and whether it's true.
The implications extend beyond legal tech. Every company marketing AI as a substitute for professional services is on notice: AI medical advice, AI financial planning, AI tax preparation. If you claim your AI performs at a professional level, the FTC expects evidence. The DoNotPay precedent gives the FTC a template for enforcement across industries.
For the legal profession specifically, this case validates a long-standing concern: consumer-facing AI legal tools that operate without attorney oversight create real risks. Consumers don't know what they don't know. An AI-generated demand letter that uses the wrong legal standard or misses a jurisdictional requirement can cost someone their rights. The FTC stepped in because the market wasn't self-correcting.
Lessons for Attorneys
For attorneys advising legal tech startups: this case is your compliance blueprint. Every marketing claim about AI capabilities needs substantiation. 'Our AI generates legal documents' is fine. 'Our AI generates perfectly valid legal documents' requires proof that the documents are, in fact, perfectly valid. Get clients to test their AI outputs with licensed attorneys before making performance claims.
For attorneys worried about unauthorized practice of law (UPL): the FTC action doesn't directly address UPL, which is a state-level issue. But it creates federal enforcement pressure that complements state bar efforts. A legal tech company facing both an FTC action and state UPL complaints is in serious trouble. The federal and state enforcement frameworks reinforce each other.
For managing partners evaluating legal tech tools for their firms: the DoNotPay case is a reminder that marketing claims and actual performance aren't the same thing. Before adopting any AI tool that claims to do legal work, ask for evidence. What testing has been done? Who reviewed the outputs? What's the error rate? If the vendor can't answer those questions, the tool isn't ready for your practice.
The Bottom Line
FTC v. DoNotPay established that AI legal service companies must substantiate their capability claims with evidence. The $193,000 settlement and unanimous consent order send a message to every legal tech company: don't call your AI a lawyer unless you can prove it performs like one.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.