Walters v. OpenAI is the first court ruling on whether AI-generated falsehoods count as defamation. A Georgia court granted summary judgment for OpenAI after ChatGPT falsely told a journalist that radio host Mark Walters had been accused of embezzlement. The ruling suggests AI companies may be able to shield themselves from defamation liability with clear disclaimers about their technology's limitations.
Background
Mark Walters is a radio host and Second Amendment advocate. In 2023, a journalist named Fred Riehl was using ChatGPT to research a legal case involving the Second Amendment Foundation (SAF). When Riehl asked ChatGPT to summarize the case, the model fabricated a detailed narrative claiming Walters had been accused of defrauding and embezzling funds from the SAF.
None of it was true. Walters had no connection to any embezzlement accusation. ChatGPT invented the entire story, complete with specific dollar amounts and organizational details that gave the fabrication an air of credibility. This is a textbook AI hallucination: confident, detailed, and completely false.
Walters filed suit against OpenAI in the Superior Court of Gwinnett County, Georgia (Case No. 23-A-04860-2). It was the first defamation lawsuit over AI-generated false statements to reach a judicial decision, making it a test case for an entirely new category of liability.
What Happened
OpenAI moved for summary judgment, arguing that ChatGPT's output doesn't qualify as a 'statement of fact' for defamation purposes. The company pointed to its disclaimers, terms of service, and widespread public knowledge that AI chatbots can produce inaccurate information. OpenAI argued that no reasonable person would treat ChatGPT's output as a reliable factual source.
Walters countered that the output was presented as authoritative text, not flagged as potentially unreliable, and that a journalist actually received it while doing research. The falsehood wasn't some obviously absurd hallucination. It was a plausible-sounding accusation of financial crime against a real person, delivered with a specificity that suggested a factual basis.
The court sided with OpenAI on May 19, 2025. The judge found that ChatGPT's output wasn't reasonably understood as a factual assertion given the technology's known limitations, that OpenAI had implemented adequate safeguards, and that Walters, as a limited-purpose public figure, failed to demonstrate actual malice.
The Ruling
The court's ruling rested on three pillars. First, ChatGPT's outputs aren't 'statements of fact' in the defamation sense, because the technology's fallibility is publicly known: users generally understand that AI chatbots can produce unreliable content. Second, OpenAI's disclaimers and safeguards further undermined any claim that the output was presented as factual.
Third, Walters qualified as a limited-purpose public figure due to his public advocacy on Second Amendment issues. That raised his burden to the 'actual malice' standard: he had to show that OpenAI knew the output was false or acted with reckless disregard for its truth. The court found no evidence of actual malice. OpenAI didn't craft the false statement; its model generated it through a probabilistic process.
The combination created a high bar for AI defamation claims: the technology's known unreliability plus disclaimers plus the actual malice standard for public figures made the claim unviable.
Outcome: Summary judgment for OpenAI on all three grounds.
Why This Case Matters
This ruling gives AI companies a roadmap for avoiding defamation liability: disclaim, disclaim, disclaim. As long as AI providers are transparent about their technology's limitations and the public generally understands that AI makes things up, the outputs may not be treated as factual assertions. That's a significant shield for the entire AI industry.
But the ruling also has a dark side. AI hallucinations about real people cause real harm. Walters had a fabricated embezzlement accusation floating around in ChatGPT's outputs. The court essentially said that because everyone knows AI lies, nobody should believe what it says, so the lies don't count as defamation. That logic gets uncomfortable when millions of people use ChatGPT as a research tool.
For private individuals (who don't face the actual malice standard), the calculus may be different. A private person defamed by AI wouldn't need to show actual malice, just negligence. Future cases involving private plaintiffs will test whether the 'known limitations' defense holds up when the plaintiff isn't a public figure.
Lessons for Attorneys
Attorneys representing people defamed by AI face an uphill battle after this ruling. The 'known limitations' defense is powerful. To overcome it, you'd need to show that the AI output was presented in a context where a reasonable person would treat it as factual, that disclaimers were inadequate or hidden, or that the plaintiff is a private figure with a lower burden of proof.
For attorneys advising AI companies, this case is a gift, but don't get complacent. The ruling relies heavily on disclaimers and public awareness of AI limitations. As AI becomes more integrated into daily life and outputs are presented with fewer caveats, that defense erodes. Companies should maintain prominent disclaimers, avoid presenting AI outputs as verified facts, and document their safety measures.
For managing partners: this case highlights the reputational risk of AI hallucinations about real people. If your firm uses AI tools that generate content about opposing counsel, judges, or parties, verify every factual claim. The legal standard for defamation may protect AI companies, but the professional responsibility standards for attorneys are stricter. A fabricated claim about a person in a court filing is sanctionable regardless of whether it's legally defamatory.
The Bottom Line
Walters v. OpenAI established that AI-generated falsehoods may not constitute defamation when the technology's limitations are publicly known and properly disclaimed. It's a major win for AI companies, but attorneys should watch for future cases involving private individuals and contexts where AI outputs are presented as authoritative.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.