Every law firm using AI on European client data is a data controller under GDPR (your AI vendor is the processor) — and most firms are getting the compliance basics wrong. Article 22 gives individuals the right not to be subject to purely automated decisions with legal effects, and when your AI tool flags a contract clause as high-risk or recommends a litigation strategy, you may be in Article 22 territory whether you realize it or not.
The EU AI Act adds a second compliance layer: it entered into force in August 2024, and its obligations apply in stages over the following years. For law firms, the combination means you need GDPR compliance for the data and AI Act compliance for the system — and your AI vendor's assurances about both need to be in writing, specific, and auditable.
Article 22 and Automated Decision-Making in Legal Practice
GDPR Article 22(1) is deceptively simple: individuals have the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects. Article 22(2) lists three exceptions: explicit consent, necessity for entering into or performing a contract, or authorization by Union or Member State law.
For law firms, the trigger question is: does your AI tool make or substantially influence decisions that affect people's legal rights? If your AI scores litigation risk and that score determines whether you take a case — that's arguably Article 22 territory. If AI ranks contract clauses by risk and your associates skip the ones AI flagged as low-risk — that's a human-in-the-loop question that regulators will scrutinize.
The Article 29 Working Party's guidelines on automated decision-making (WP251rev.01, endorsed by the EDPB) clarify that merely having a human "rubber stamp" AI output doesn't satisfy the human intervention requirement. The human must have actual authority and competence to change the decision, access to all relevant data, and time to conduct a meaningful review. Managing partners: if your associates are overriding AI recommendations less than 5% of the time, regulators won't buy the "human oversight" defense.
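One way to make the oversight claim auditable is to log every AI recommendation next to the human decision and monitor the override rate. A minimal sketch in Python; the record fields and the 5% threshold are illustrative (they echo the point above), not regulatory requirements:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    matter_id: str
    ai_recommendation: str  # e.g. "low_risk" / "high_risk"
    human_decision: str     # what the reviewing lawyer actually concluded

def override_rate(records: list[ReviewRecord]) -> float:
    """Fraction of AI recommendations the human reviewer changed."""
    if not records:
        return 0.0
    overrides = sum(1 for r in records if r.human_decision != r.ai_recommendation)
    return overrides / len(records)

# An override rate near zero is the "rubber stamp" pattern the guidelines warn about.
RUBBER_STAMP_THRESHOLD = 0.05  # illustrative threshold, not a regulatory number

def oversight_health_check(records: list[ReviewRecord]) -> str:
    rate = override_rate(records)
    if rate < RUBBER_STAMP_THRESHOLD:
        return f"WARNING: override rate {rate:.1%} -- review may not be meaningful"
    return f"OK: override rate {rate:.1%}"
```

A log like this cuts both ways: it can prove meaningful review, or prove its absence, so pair it with training that tells reviewers disagreement is expected.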
Data Protection Impact Assessments for Legal AI
Article 35 requires DPIAs when processing is likely to result in high risk to individuals. AI-powered legal tools almost always qualify because they involve systematic evaluation of personal aspects, large-scale processing, or innovative use of new technologies — all criteria listed in the WP29 DPIA guidelines (WP248rev.01).
Your DPIA for legal AI needs to cover: the nature and scope of processing (what client data enters the AI), the purpose (legal research, document review, risk scoring), necessity and proportionality (why AI instead of manual review), risks to data subjects (hallucination, bias, unauthorized access), and mitigation measures (encryption, access controls, human oversight).
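If you want the assessment to live in version control rather than a forgotten Word file, it helps to treat the DPIA as structured data. A minimal sketch whose fields mirror the elements above; the class and method names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class LegalAIDPIA:
    system_name: str
    nature_and_scope: str         # what client data enters the AI
    purpose: str                  # legal research, document review, risk scoring
    necessity_justification: str  # why AI instead of manual review
    risks: list[str] = field(default_factory=list)        # hallucination, bias, unauthorized access
    mitigations: list[str] = field(default_factory=list)  # encryption, access controls, human oversight

    def incomplete_elements(self) -> list[str]:
        """List any Article 35 element still blank, so the DPIA can't silently ship half-done."""
        missing = [name for name, value in vars(self).items()
                   if isinstance(value, str) and not value.strip()]
        missing += [name for name in ("risks", "mitigations") if not getattr(self, name)]
        return missing
```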
Most firms treat DPIAs as checkbox exercises. That's a mistake that costs real money. The Irish DPC fined Meta €1.2 billion in 2023 for transfers to the US whose SCC-based safeguards were found inadequate after Schrems II. A DPIA done properly takes 20-40 hours for a complex AI system. It forces you to map data flows you didn't know existed — like the fact that your AI contract review tool sends snippets to a US-based API for processing, which triggers Chapter V transfer requirements. Do the DPIA before deployment, not after the regulator asks.
Cross-Border Data Transfers and AI Processing
Most AI tools process data outside the EEA, which triggers GDPR Chapter V transfer requirements. After Schrems II invalidated Privacy Shield in 2020, firms relied on Standard Contractual Clauses (SCCs) plus Transfer Impact Assessments (TIAs). The EU-US Data Privacy Framework (DPF), adopted in July 2023, restored an adequacy mechanism for certified US companies — but its long-term survival is uncertain (Max Schrems has vowed to challenge it, just as he did its two predecessors).
For law firms using US-based AI vendors: check whether your vendor appears on the official DPF participant list (Microsoft, Google, and OpenAI are certified; many smaller vendors aren't). If they're not certified, you need SCCs plus supplementary measures — which for AI processing typically means encryption in transit and at rest, pseudonymization before data hits the AI model, and contractual commitments to resist and challenge government access requests.
The practical trap: even if your primary AI vendor is DPF-certified, their sub-processors might not be. Your contract review AI might use a cloud provider that uses a CDN that caches data in a non-adequate country. Map the entire processing chain, not just the first hop. Article 28 requires your processor agreement to cover all sub-processors with equivalent protections.
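Mapping the chain is tractable if you record it as data and check each hop mechanically. A minimal sketch; the hop names, jurisdiction codes, and the simplified adequacy set are illustrative, and the authoritative source is the Commission's adequacy list:

```python
from dataclasses import dataclass

# Simplified stand-in for the Commission's adequacy decisions -- check the real list.
ADEQUATE_JURISDICTIONS = {"EEA", "UK", "CH", "JP", "KR", "CA"}

@dataclass
class ProcessingHop:
    name: str
    jurisdiction: str  # where this hop actually processes or caches data
    dpf_certified: bool
    sccs_in_place: bool

def unprotected_hops(chain: list[ProcessingHop]) -> list[str]:
    """Every hop needs adequacy, DPF certification, or SCCs -- not just the first one."""
    return [hop.name for hop in chain
            if hop.jurisdiction not in ADEQUATE_JURISDICTIONS
            and not hop.dpf_certified
            and not hop.sccs_in_place]

chain = [
    ProcessingHop("contract-review AI vendor", "US", dpf_certified=True, sccs_in_place=False),
    ProcessingHop("cloud provider", "US", dpf_certified=True, sccs_in_place=True),
    ProcessingHop("CDN edge cache", "SG", dpf_certified=False, sccs_in_place=False),
]
print(unprotected_hops(chain))  # ['CDN edge cache']
```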
Building a GDPR-Compliant AI Workflow for Legal Practice
Start with data minimization (Article 5(1)(c)): don't feed your AI more personal data than the task requires. If you're using AI for contract analysis, strip party names and replace with identifiers before processing. If you're doing legal research, you rarely need actual client data at all — use anonymized fact patterns.
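A minimal sketch of that pre-processing step, assuming you maintain a party-name map per matter (real pipelines usually add NER-based detection to catch names nobody listed):

```python
import re

def pseudonymize(text: str, parties: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Swap known party names for neutral tokens before text leaves the firm.

    Returns the redacted text plus the reverse map needed to re-identify
    the AI's output locally. The reverse map never leaves the firm.
    """
    # Replace longest names first so "Acme GmbH" wins over a bare "Acme".
    for name, token in sorted(parties.items(), key=lambda kv: -len(kv[0])):
        text = re.sub(re.escape(name), token, text, flags=re.IGNORECASE)
    reverse_map = {token: name for name, token in parties.items()}
    return text, reverse_map

clause = "Acme GmbH shall indemnify Müller Holding AG for all direct losses."
parties = {"Acme GmbH": "PARTY_A", "Müller Holding AG": "PARTY_B"}
redacted, reverse_map = pseudonymize(clause, parties)
print(redacted)  # PARTY_A shall indemnify PARTY_B for all direct losses.
```

Remember that pseudonymized data is still personal data under GDPR (Recital 26): this reduces risk and supports minimization, but it doesn't take the processing out of scope.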
Lawful basis matters. For client work, legitimate interest (Article 6(1)(f)) is your most likely basis, but you need a documented balancing test. Consent from the client works but creates withdrawal headaches. Contractual necessity (Article 6(1)(b)) applies if AI processing is genuinely necessary to deliver the legal service — but "we use AI because it's faster" isn't necessity.
Retention is where firms fail most often. AI tools that retain prompts, cache results, or build firm-specific models from past queries are accumulating personal data with no defined retention period. Your AI vendor agreement must specify retention limits, deletion procedures, and what happens to derived data (embeddings, model weights influenced by your data). Article 17 (right to erasure) applies to AI-processed data — if a client requests deletion, you need to purge their data from your AI workflows too, not just your document management system.
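One way to keep erasure honest is to enumerate every AI-adjacent store up front and drive deletion through a single routine. A minimal sketch; the interface is illustrative, and the hard part in practice is the vendor API call behind each delete:

```python
from typing import Protocol

class ErasableStore(Protocol):
    """Anywhere client data can persist: prompt logs, response caches,
    vector/embedding stores, fine-tuning datasets, the DMS itself."""
    name: str

    def delete_client_data(self, client_id: str) -> int:
        """Remove every record tied to client_id; return how many were deleted."""
        ...

def erase_client(client_id: str, stores: list[ErasableStore]) -> dict[str, int]:
    """Run Article 17 erasure across every registered store, not just the DMS.

    Keep the returned per-store counts as evidence that erasure actually ran.
    """
    return {store.name: store.delete_client_data(client_id) for store in stores}
```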
GDPR Enforcement Trends and What's Coming for Legal AI
DPA enforcement against AI is accelerating. Italy's Garante temporarily banned ChatGPT in March 2023, then allowed it back with conditions including age verification, clearer privacy disclosures, and opt-outs from model training. The French CNIL issued recommendations on AI and GDPR in 2024 covering training data, model deployment, and individual rights. Spain launched a dedicated AI regulatory sandbox.
The pattern is clear: regulators are moving from general GDPR enforcement to AI-specific enforcement. The EDPB's December 2024 opinion on AI models (Opinion 28/2024) clarified that legitimate interest can be a valid basis for training, but only with a rigorous balancing test and effective mitigating measures such as opt-outs.
For law firms, the highest-risk enforcement area is client data used without adequate safeguards. If a DPA investigates your firm and finds client personal data flowing to AI tools without DPIAs, without valid transfer mechanisms, and without data processing agreements, fines under Article 83 can reach the higher of €20 million or 4% of global annual turnover. The legal profession's reputation for confidentiality won't be a mitigating factor; it'll be an aggravating one.
The Bottom Line: GDPR compliance for legal AI isn't optional, and the "we're just lawyers using a tool" defense won't survive regulatory scrutiny. Every firm using AI on European client data needs DPIAs, compliant transfer mechanisms, proper processor agreements, and documented lawful bases. The firms that treat this as a one-time compliance project instead of ongoing governance will be the first enforcement targets when DPAs turn their attention to the legal sector — and that attention is coming.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
