The FTC's investigation into OpenAI is the first major federal regulatory probe of a generative AI company. Launched in July 2023 with a sweeping 20-page Civil Investigative Demand, it examines whether OpenAI's data practices and AI hallucinations constitute unfair or deceptive practices. This investigation will shape U.S. AI regulation for years.
Background
In July 2023, the Federal Trade Commission issued a Civil Investigative Demand (CID) to OpenAI, essentially a federal subpoena for information. The 20-page document was unusually detailed, covering virtually every aspect of OpenAI's business: training data sources, data security practices, user data handling, and the company's response to known problems with AI-generated false information.
The investigation was triggered by multiple concerns. First, the March 2023 ChatGPT bug that exposed some users' chat histories and payment information to other users. Second, mounting reports of ChatGPT generating false information about real people, creating what the FTC called 'reputational harm.' Third, broader questions about how OpenAI collected the massive amounts of data needed to train its models.
The CID demanded that OpenAI detail every source of training data, describe its data security infrastructure, explain what happened during the March 2023 data exposure incident, and document what it knew about its models' tendency to generate false information about real people. The scope signaled that the FTC viewed generative AI as a consumer protection issue, not just a technology curiosity.
What Happened
OpenAI responded to the CID, but the investigation has remained open. The FTC has not issued a final complaint or consent order; even so, the investigation itself has already shaped the industry. The mere existence of a federal probe pushed OpenAI and its competitors to upgrade data security, improve privacy policies, and invest in reducing hallucinations.
The FTC's legal theory is significant. The commission asserts jurisdiction under Section 5 of the FTC Act, which prohibits 'unfair or deceptive acts or practices.' The FTC is testing two theories: first, that OpenAI's data collection practices may be unfair to consumers whose data was scraped without meaningful consent; second, that AI hallucinations generating false information about real people may constitute an unfair practice causing reputational harm.
The hallucination theory is novel. The FTC has never previously treated false statements generated by software as a consumer protection violation. If the FTC ultimately finds that AI companies are liable for hallucinations about real people, it would create a regulatory framework that goes beyond what defamation law provides (as Walters v. OpenAI showed, defamation claims face high barriers).
The Ruling
There's no final ruling yet. The investigation remains ongoing as of early 2026. But the CID itself, issued July 13, 2023, establishes the FTC's jurisdictional claim and legal theories. The FTC asserts that Section 5 of the FTC Act gives it authority over AI companies' data practices, and that AI-generated false statements about real people may constitute unfair practices causing consumer harm.
The investigation's scope covers four areas: training data provenance (where OpenAI got its data and whether it had the right to use it), data security (how OpenAI protects user data, including the March 2023 breach), hallucinations (what OpenAI knows about its models generating false information and what it's doing about it), and consumer disclosures (whether OpenAI adequately informs users about these risks).
The FTC's approach mirrors its investigations of social media companies: establish jurisdiction, demand comprehensive disclosures, identify specific violations, then either negotiate a consent order or file a complaint. The investigation creates leverage even without a final action, as OpenAI must cooperate fully while managing the reputational and business risks of an open federal probe.
Outcome: The investigation is ongoing as of early 2026. The FTC has required OpenAI to detail all training data sources, describe its data security practices, and explain the March 2023 incident where some users could see other users' chat histories and payment information.
Why This Case Matters
This investigation defines U.S. federal regulatory posture toward generative AI. While Congress debates AI legislation, the FTC is using existing consumer protection law to regulate AI companies now. That means the rules are being written through enforcement actions, not legislation, which makes tracking the FTC's moves essential for any attorney advising AI companies.
The hallucination theory has industry-wide implications. If the FTC concludes that AI companies bear responsibility for false information their models generate about real people, every company offering a generative AI product faces potential liability. That would create pressure to invest heavily in hallucination reduction, implement better safety filters, and provide more prominent disclaimers.
The data collection aspect connects to the copyright cases. The FTC is examining how OpenAI acquired its training data through a consumer protection lens, while the NYT and other plaintiffs challenge the same practices through copyright law. These parallel legal tracks create compounding risk: AI companies face both regulatory enforcement and private litigation over the same underlying data practices.
Lessons for Attorneys
For attorneys advising AI companies: the FTC investigation is a roadmap of regulatory risk. Audit your client's data collection practices, data security infrastructure, hallucination rates, and consumer disclosures. The FTC's CID tells you exactly what the regulator wants to see. Build compliance around those four pillars before the FTC comes knocking.
For attorneys representing individuals harmed by AI hallucinations: the FTC investigation opens a regulatory path that sidesteps the defamation challenges exposed in Walters v. OpenAI. If defamation law doesn't provide relief because of the 'known limitations' defense, the FTC's unfair practices theory might. Track the investigation's outcome and consider filing FTC complaints on behalf of affected clients.
For managing partners: the FTC investigation signals that U.S. AI regulation is happening through enforcement, not waiting for legislation. Firms that build AI regulatory practices now will capture significant advisory work as the FTC's theories crystallize into precedent. The intersection of consumer protection, data privacy, and AI capability claims is becoming a distinct practice area.
The Bottom Line
The FTC's ongoing investigation into OpenAI is building the framework for U.S. federal AI regulation through enforcement. It covers data collection, security, hallucinations, and consumer disclosures. Every attorney advising AI companies needs to treat the FTC's four-pillar inquiry as a compliance checklist.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.