81% of healthcare data policy violations in 2025 involved HIPAA-regulated data — predominantly through personal AI accounts and unsanctioned cloud apps, according to Netskope's Healthcare 2025 report. That's not a technology problem. It's a governance problem. Healthcare organizations are deploying AI tools without the compliance frameworks that HIPAA demands, and legal teams are left cleaning up violations that were entirely preventable.

On January 6, 2025, the HHS Office for Civil Rights proposed the first major update to the HIPAA Security Rule in 20 years, explicitly citing the rise in ransomware and the need for stronger cybersecurity. The changes hit AI-deploying healthcare organizations hardest: the proposed rule removes the distinction between 'required' and 'addressable' safeguards, meaning every security standard becomes mandatory. For managing partners advising healthcare clients, this is the regulatory event that makes AI compliance non-optional.


HIPAA and AI: The Framework That Already Applies

HIPAA doesn't treat AI differently from any other system. If an AI tool creates, receives, maintains, or transmits electronic protected health information on behalf of a covered entity or business associate, the full HIPAA framework applies: minimum necessary access, access controls, encryption, audit logging, risk analysis, and breach notification. The problem is that the consumer versions of most public AI tools (ChatGPT, Gemini, Claude, Copilot) don't sign Business Associate Agreements and don't meet HIPAA security standards. Using them with PHI is a potential HIPAA violation. Period.

This sounds obvious, but 81% of healthcare data policy violations involved HIPAA-regulated data, much of it routed through personal AI accounts and unsanctioned cloud apps. Healthcare workers are using consumer AI tools with patient data because the tools are convenient and their organizations haven't provided HIPAA-compliant alternatives. The legal exposure is straightforward: each unauthorized disclosure of PHI through a non-BAA AI platform is a potential breach requiring notification, investigation, and possible OCR enforcement.
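Audit logging is one of the safeguards that trips up AI deployments in practice, because most consumer tools leave no record of what PHI went where. Below is a minimal sketch of one way to wrap an approved AI backend so every request leaves an audit trail without writing PHI into the log. The wrapper name, log fields, and callable-based design are illustrative assumptions, not any vendor's API.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone
from typing import Callable

# Hypothetical audit log for AI requests that may touch PHI. Only hashes of the
# prompt and response are recorded, so the log itself does not accumulate ePHI.
audit_logger = logging.getLogger("phi_ai_audit")
logging.basicConfig(filename="ai_phi_audit.log", level=logging.INFO)


def call_ai_with_audit(user_id: str, prompt: str,
                       ai_call: Callable[[str], str]) -> str:
    """Invoke an approved AI backend (passed in as a callable) and append an audit record."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    response = ai_call(prompt)
    entry["response_sha256"] = hashlib.sha256(response.encode()).hexdigest()
    audit_logger.info(json.dumps(entry))
    return response
```

The point of the design is that the approved backend is injected rather than hard-coded, so the same audit path applies whichever BAA-covered platform the organization standardizes on.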

Business Associate Agreements for AI Vendors

Any AI vendor processing PHI must operate under a BAA that outlines permissible data use and safeguards. This requirement is non-negotiable under HIPAA, but the practical application to AI vendors raises specific issues that traditional BAAs don't address:

- Training data provisions. The BAA must explicitly prohibit the AI vendor from using PHI to train, fine-tune, or improve AI models, whether the client's specific data or aggregated datasets. HHS has warned that training AI models on patient data without appropriate safeguards could result in impermissible disclosures under the Privacy Rule, and generative AI tools have reproduced names and personal information of individuals from their training data in their outputs.
- De-identification standards. If the AI vendor claims to de-identify data before processing, the BAA should specify whether the vendor relies on expert determination under 45 C.F.R. § 164.514(b)(1) or safe harbor under § 164.514(b)(2), and require documentation that the chosen method was properly applied (a minimal redaction sketch follows this list).
- Incident response requirements. BAAs for AI vendors should include shorter notification timelines than the standard 60-day HIPAA breach notification window, given the speed at which AI-related data exposures can propagate.
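To make the de-identification point concrete, here is a minimal Python sketch of the kind of automated pre-flight redaction vendors often describe. The regex patterns, placeholder format, and function name are assumptions for illustration only: safe harbor under § 164.514(b)(2) requires removing all 18 identifier categories and having no actual knowledge that the remaining data is identifiable, which a pattern pass like this does not by itself satisfy. That gap is exactly why the BAA should require documentation of the method actually used.

```python
import re

# Illustrative patterns for a few direct identifiers; real safe harbor
# de-identification covers 18 categories and typically needs expert review.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
}


def redact_direct_identifiers(text: str) -> str:
    """Replace obvious direct identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


if __name__ == "__main__":
    note = "Pt seen 3/14/2025, MRN 00482913, callback 617-555-0142, jdoe@example.com."
    print(redact_direct_identifiers(note))
```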

The Proposed HIPAA Security Rule Update

OCR's January 2025 proposed update to the HIPAA Security Rule is the most significant regulatory change for healthcare AI compliance in two decades. The key changes for AI-deploying organizations include:

- Removal of the 'required' vs. 'addressable' distinction. Every safeguard becomes mandatory, eliminating the flexibility that organizations used to justify less rigorous AI security controls.
- Stricter encryption requirements for data at rest and in transit, including PHI processed by AI systems.
- Enhanced risk analysis requirements that must explicitly address AI-specific risks, including model inference attacks, prompt injection, and training data extraction (a sample risk-register entry is sketched below).
- Mandatory multi-factor authentication for systems accessing PHI, which affects how healthcare workers authenticate to AI platforms.

For healthcare organizations that deployed AI tools under the old 'addressable' framework, choosing alternative measures rather than implementing the full safeguards, the proposed rule eliminates that option. Every AI system processing PHI will need to meet the same security standards as core EHR systems.
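The proposed rule does not prescribe a documentation format for AI-specific risk analysis, so the structure below is only an assumption about what "explicitly addressing" these threats might look like in a risk register. The field names, scoring scale, and example controls are illustrative, not regulatory requirements.

```python
# Hypothetical risk-register entries covering the AI-specific threats named in
# the proposed Security Rule discussion: prompt injection, training data
# extraction, and model inference attacks.
AI_RISK_REGISTER = [
    {
        "threat": "Prompt injection via documents uploaded to an AI assistant",
        "asset": "Clinical summarization tool (processes ePHI)",
        "likelihood": "medium",
        "impact": "high",
        "current_controls": ["input sanitization", "human review of outputs"],
        "planned_controls": ["isolate retrieval sources", "red-team testing"],
    },
    {
        "threat": "Training data extraction / model memorization of PHI",
        "asset": "Vendor-hosted LLM operating under a BAA",
        "likelihood": "low",
        "impact": "high",
        "current_controls": ["BAA clause prohibiting training on PHI"],
        "planned_controls": ["contractual audit rights", "vendor attestations"],
    },
    {
        "threat": "Model inference attack re-identifying de-identified records",
        "asset": "Analytics model built on de-identified datasets",
        "likelihood": "low",
        "impact": "medium",
        "current_controls": ["expert-determination de-identification"],
        "planned_controls": ["query rate limits", "periodic re-identification testing"],
    },
]

if __name__ == "__main__":
    for entry in AI_RISK_REGISTER:
        if entry["impact"] == "high":
            print(f"HIGH IMPACT: {entry['threat']}")
```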

This isn't just a healthcare provider issue: law firms handling healthcare litigation face the same HIPAA constraints. Medical malpractice firms, healthcare regulatory practices, and firms defending providers in enforcement actions routinely process PHI, and using general-purpose AI tools with that data creates the same HIPAA exposure that healthcare organizations face. The practical challenge is acute: a healthcare litigation attorney wants to use AI to analyze medical records, summarize treatment histories, or identify patterns across cases, but those medical records contain PHI, and most legal AI platforms aren't HIPAA-compliant.

The solutions are limited: use a HIPAA-compliant AI platform with a signed BAA (OpenAI's healthcare offering launched with BAA capability), de-identify data before AI processing (which may eliminate the analytical value), or avoid AI entirely for PHI-containing work (which creates a competitive disadvantage). Managing partners at healthcare-focused firms need to make this investment decision: a HIPAA-compliant AI platform costs more than consumer tools, but the alternative is potential breach liability that dwarfs the technology spend.

Healthcare legal teams, both in-house and outside counsel, need a compliance roadmap that addresses AI-specific HIPAA risks:

1. Inventory all AI tools touching PHI. Conduct a comprehensive audit of which AI platforms are being used by attorneys, paralegals, and support staff with healthcare data. The 81% violation figure tells you that unsanctioned tools are already in use (a log-scanning sketch follows this list).
2. Implement approved AI tools with BAAs. Select enterprise AI platforms that sign BAAs, meet HIPAA security standards, and explicitly prohibit training on client data. Deploy them organization-wide and block access to non-approved alternatives for PHI-related work.
3. Update risk analysis for AI-specific threats. The proposed Security Rule requires explicit AI risk analysis. Address model inference attacks, prompt injection, training data extraction, and unauthorized PHI disclosure through AI outputs in your risk assessment.
4. Train staff on the HIPAA-AI intersection. Healthcare workers and legal professionals need specific training on what can and cannot be input into AI systems. Generic AI training doesn't address HIPAA-specific constraints.
5. Prepare for the proposed Security Rule. Even if the final rule differs from the January 2025 proposal, the direction is clear: stricter security standards with less flexibility. Begin implementing the proposed requirements now rather than waiting for the final rule.
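The inventory in step 1 is often bootstrapped from secure web gateway or proxy logs rather than from self-reporting. Here is a minimal sketch of that approach, assuming a CSV export with 'user' and 'domain' columns; the domain list, file name, and column names are hypothetical and should be replaced with your gateway's own categories and export format.

```python
import csv
from collections import defaultdict

# Hypothetical consumer AI domains to flag; extend or replace with your
# secure web gateway's GenAI category.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}


def find_unsanctioned_ai_use(proxy_log_path: str) -> dict[str, set[str]]:
    """Map each user to the unsanctioned AI domains they reached,
    assuming a CSV proxy export with 'user' and 'domain' columns."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            if domain in UNSANCTIONED_AI_DOMAINS:
                hits[row["user"]].add(domain)
    return hits


if __name__ == "__main__":
    for user, domains in find_unsanctioned_ai_use("proxy_export.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

A scan like this only surfaces network-visible use; it will not catch personal devices or home accounts, which is why step 2's approved-tool deployment and step 4's training matter as much as the audit itself.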

The Bottom Line: HIPAA applies to AI the same way it applies to every other system processing PHI, yet 81% of healthcare data policy violations involve HIPAA-regulated data, much of it flowing through personal AI accounts and unsanctioned tools. The proposed HIPAA Security Rule update eliminates the 'addressable' flexibility that organizations relied on, making every safeguard mandatory. Healthcare legal teams need HIPAA-compliant AI platforms with BAAs, explicit training data prohibitions, and AI-specific risk analyses. The cost of compliant AI tools is a fraction of the breach liability from non-compliant alternatives.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.