Italy became the first Western country to block ChatGPT and later imposed the first GDPR fine on a generative AI company: EUR 15 million against OpenAI. This case set the tone for European AI regulation and forced every AI company operating in the EU to rethink its data practices. Attorneys advising companies with European users need to understand what happened here.
Background
On March 30, 2023, the Italian Data Protection Authority (Garante per la Protezione dei Dati Personali) issued an emergency order temporarily banning ChatGPT in Italy. The Garante cited multiple GDPR violations: OpenAI had no lawful basis for processing the personal data of Italian users, it failed to implement age verification to prevent minors from accessing the service, and it didn't properly report a March 2023 data breach.
The data breach was significant. A bug in ChatGPT's system briefly allowed some users to see other users' chat histories and payment information. OpenAI disclosed the breach but, according to the Garante, didn't notify the Italian authority within the 72-hour window required by GDPR Article 33.
Italy's ban made global headlines. ChatGPT went dark for Italian users for approximately one month. OpenAI responded by implementing age verification gates, updating its privacy policy for European users, and providing options for users to opt out of having their data used for training. The service was reinstated, but the Garante's investigation continued.
What Happened
The Garante spent over a year investigating OpenAI's data practices before reaching its conclusion. The authority examined how OpenAI collected training data, how it handled user interactions, what personal data was processed, and whether users had adequate control over their information.
In December 2024, the Garante imposed a EUR 15 million fine, making it the first GDPR enforcement action against a generative AI company. The fine addressed the original violations: processing personal data without a lawful basis, inadequate age verification, and the data breach notification failure. The Garante also ordered OpenAI to conduct a six-month public awareness campaign in Italy about how ChatGPT processes personal data.
OpenAI called the fine "disproportionate" and announced it would appeal. The company argued it had addressed the Garante's concerns promptly and implemented substantial privacy protections for European users. The appeal is pending, but the fine stands in the meantime.
The Ruling
The Garante's December 2024 decision rested on three core GDPR violations. First, OpenAI processed personal data included in its training corpus without a lawful basis under GDPR Article 6. The personal information of millions of Europeans was scraped from the internet and used to train models without consent, legitimate interest assessment, or any other recognized legal basis.
Second, OpenAI failed to implement effective age verification mechanisms. GDPR Article 8 requires parental consent for processing data of children under certain ages (typically 13-16, depending on the member state). ChatGPT had no meaningful barrier preventing minors from using the service and having their data processed.
Third, the March 2023 data breach notification was inadequate under GDPR Article 33's 72-hour reporting requirement. The EUR 15 million fine (roughly $15.6 million at the time) reflected the severity and scope of the violations, though it's a fraction of the maximum possible GDPR penalty (4% of global annual turnover).
Outcome: The Garante imposed a EUR 15 million fine in December 2024 — the first generative AI fine under GDPR. OpenAI was also ordered to conduct a six-month public awareness campaign about how ChatGPT processes personal data. OpenAI called the fine "disproportionate" and is appealing.
Why This Case Matters
This case set the regulatory template for AI in Europe. After Italy acted, data protection authorities in France, Germany, Spain, Poland, and other EU member states launched their own investigations into generative AI companies. The Garante demonstrated that existing privacy law (GDPR) can be enforced against AI companies, and other regulators followed.
The EUR 15 million fine is modest compared to GDPR's maximum penalties, but the precedent is what matters. The Garante established that scraping personal data from the internet to train AI models requires a lawful basis under GDPR. That's a fundamental challenge to how every large language model is built, since they all train on internet data containing personal information.
For U.S. law firms with European clients or operations, this case is a compliance warning. Any AI tool that processes personal data of EU residents must comply with GDPR. That includes not just AI companies themselves but also law firms using AI tools with client data that touches European individuals. The chain of GDPR compliance doesn't stop at the AI provider.
Lessons for Attorneys
Attorneys advising AI companies need to map their GDPR exposure now. If your client's AI product is available to EU users or trains on data containing EU residents' personal information, the Italian DPA ruling creates direct compliance obligations. Lawful basis for data processing, age verification, data breach notification procedures, and user opt-out mechanisms are all mandatory.
For firms with European offices or clients: review which AI tools your attorneys use and whether those tools comply with GDPR. If an attorney uploads a document containing a European client's personal data to a consumer AI tool, that processing needs a lawful basis. The firm, not just the AI provider, bears GDPR responsibility as a data controller.
For managing partners watching the regulatory landscape: European AI regulation is moving faster than U.S. regulation. The EU AI Act is layering additional requirements on top of GDPR. Firms that develop GDPR and AI Act compliance expertise now will capture advisory work as every AI company selling into Europe needs legal guidance. This is a growing practice area, not a one-time regulatory event.
The Bottom Line
Italy's EUR 15 million GDPR fine against OpenAI was the first regulatory penalty targeting generative AI and forced every AI company operating in Europe to reassess data practices. Attorneys advising companies with EU exposure need to treat AI compliance as a core GDPR obligation, not an afterthought.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.