The Overlooked Cyber Risk in Healthcare AI: Protecting the Models Themselves

By Michael Gray, CTO, Thrive

The healthcare sector’s AI ambitions are accelerating, from augmented telehealth and diagnostics to streamlined billing, claims, and scheduling. But as health systems integrate AI deeper into their workflows, one critical question often gets overlooked: How secure are the models themselves?

For CIOs and CISOs, the risk goes far beyond traditional data breaches. AI models fine-tuned on protected health information (PHI), imaging, electronic health records (EHRs), or claims data can become open gateways to HIPAA-protected content.

According to a recent IBM report, 13% of organizations have already experienced breaches of AI models or applications, and 97% of those breached lacked proper AI access controls.

As AI becomes foundational to care delivery and operations, protecting these models must become a core tenet of enterprise risk management to ensure a stable security posture.

Why AI Models Introduce New Attack Surfaces

AI models differ from traditional applications. They don’t come ‘ready-made.’ Instead, they must be trained on patient data and institutional context before they become useful. The training process itself opens new vulnerabilities. Even when underlying datasets are well protected, the models built on top can leak or distort sensitive information in unexpected ways:

  • Data leakage. Models may inadvertently return fragments of PHI during queries.
  • Model inversion. Hackers use sophisticated prompting to reverse-engineer details from training data without directly accessing the underlying databases.
  • Prompt injection. Malicious instructions inserted into prompts can override safeguards and manipulate outputs.

These threats are as much about model integrity and reliability as about data exposure. Traditional cybersecurity builds walls to keep intruders out; securing AI also means building controls that keep sensitive data in.
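
To make the "keep sensitive data in" idea concrete, here is a minimal sketch of an output-side screen that checks model responses for obvious identifier patterns before they reach the user. The patterns, function names, and redaction behavior are illustrative assumptions, not a complete PHI-detection solution, which would require clinical NLP and far broader identifier coverage.

    import re

    # Hypothetical identifier patterns; real PHI detection needs far broader
    # coverage (names, dates, addresses, device IDs) than simple regexes.
    PHI_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def screen_model_output(text: str) -> tuple[str, list[str]]:
        """Redact obvious identifier patterns from a model response and report
        which pattern types were found, so the event can be logged and reviewed."""
        findings = []
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                findings.append(label)
                text = pattern.sub("[REDACTED]", text)
        return text, findings

    # Example: a response that echoes a fragment of training data
    safe_text, findings = screen_model_output("Patient MRN: 00412345, callback 617-555-0199.")
    if findings:
        print(f"Leakage blocked: {findings}")  # -> Leakage blocked: ['mrn', 'phone']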

Common Defense Gaps

Healthcare IT leaders are well-versed in data security. Yet even in these practiced environments, recurring gaps appear:

  • Lack of data-minimization policies. Without clear rules defining what patient data a model can access, how long it’s retained, and how it’s disposed of, PHI can persist in logs or hidden datasets long after its intended use.
  • Shared AI environments. Efficiency often drives departments to host multiple models within the same infrastructure, creating crossover risk where a single breach can compromise multiple models.
  • Weak access control and audits. Every interaction should follow the principle of least privilege, with checks and balances so no one party has unrestricted access or control.

Addressing these weaknesses is essential for any healthcare enterprise managing AI workloads.
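
To make the least-privilege point concrete, the sketch below shows a deny-by-default authorization check in front of model access, with every decision written to an audit trail. The roles, model names, and logging setup are assumptions for illustration; a production deployment would tie into the organization's identity provider and SIEM.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("model_access_audit")

    # Hypothetical mapping of which roles may take which actions on which models.
    # Anything not listed is denied by default (least privilege).
    MODEL_PERMISSIONS = {
        ("billing_analyst", "claims-coding-model"): {"infer"},
        ("radiology_ml_engineer", "imaging-triage-model"): {"infer", "evaluate"},
    }

    def authorize(role: str, model: str, action: str) -> bool:
        """Deny-by-default check; every decision is written to the audit trail."""
        allowed = action in MODEL_PERMISSIONS.get((role, model), set())
        audit_log.info(
            "%s role=%s model=%s action=%s decision=%s",
            datetime.now(timezone.utc).isoformat(), role, model, action,
            "ALLOW" if allowed else "DENY",
        )
        return allowed

    # A scheduling account has no business querying the imaging model:
    assert authorize("radiology_ml_engineer", "imaging-triage-model", "infer")
    assert not authorize("scheduler", "imaging-triage-model", "infer")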

Three Pillars of AI Security in the Healthcare Sector

Most organizations in the healthcare sector already possess the foundations needed for AI governance. The key is to apply familiar cybersecurity and compliance principles to this new context.

Here are three core pillars to build from today:

PHI Minimization and Data Policy
After deployment, PHI risks don’t disappear. Data can linger and ‘haunt’ models or logs unless its lifecycle is clearly governed.

To mitigate risk, organizations should:

  • Limit training datasets to only what’s necessary for the model’s task(s) and intended purpose (see the sketch after this list).
  • Isolate models from broader data sources.
  • Establish retention and destruction timelines that align with HIPAA requirements and OCR guidance.
  • Ensure all training data is de-identified and encrypted.
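
A minimal sketch of the minimization bullet above, assuming a hypothetical EHR export: only the fields the model's task genuinely requires are kept, so direct identifiers never enter the training set. Field names are invented for illustration, and this alone does not satisfy HIPAA de-identification, which covers 18 identifier categories and free-text content as well.

    # Hypothetical source record from an EHR export.
    raw_record = {
        "patient_name": "Jane Doe",
        "mrn": "00412345",
        "dob": "1984-03-07",
        "zip": "02139",
        "diagnosis_codes": ["E11.9", "I10"],
        "encounter_type": "outpatient",
        "clinical_note": "Follow-up for type 2 diabetes, well controlled.",
    }

    # Only the fields the model's task genuinely requires (data minimization).
    # Note: free-text fields still need their own de-identification pass.
    FIELDS_NEEDED_FOR_TASK = {"diagnosis_codes", "encounter_type", "clinical_note"}

    def minimize_record(record: dict) -> dict:
        """Keep only task-relevant fields; everything else never enters training."""
        return {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_TASK}

    training_example = minimize_record(raw_record)
    # -> {'diagnosis_codes': ['E11.9', 'I10'], 'encounter_type': 'outpatient',
    #     'clinical_note': 'Follow-up for type 2 diabetes, well controlled.'}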

Segmented AI Environments
We discussed the need to isolate model datasets to reduce crossover risk. That same concept of segmentation applies to your AI environments themselves.

Avoid monolithic models tied to centralized enterprise data. Instead, create smaller, purpose-built models by department or dataset. Segmentation limits the “blast radius” of incidents and simplifies oversight, because each environment can be audited and monitored independently.
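
One way to express that segmentation is as explicit, per-department environment configuration rather than a single shared deployment. The sketch below is a hypothetical registry; the model names, network segments, data sources, and key identifiers are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ModelEnvironment:
        """One isolated environment per model: its own data source, network
        segment, and encryption key, so a breach in one stays contained."""
        model_name: str
        department: str
        data_source: str        # the only dataset reachable from this environment
        network_segment: str    # separate VLAN/VPC per environment
        encryption_key_id: str  # per-environment key, not a shared enterprise key

    ENVIRONMENTS = [
        ModelEnvironment("claims-coding-model", "revenue_cycle",
                         "claims_dataset_v3", "vlan-210", "kms-key-claims"),
        ModelEnvironment("imaging-triage-model", "radiology",
                         "deidentified_imaging_set", "vlan-310", "kms-key-imaging"),
    ]

    # Each environment can be audited independently of the others.
    for env in ENVIRONMENTS:
        print(f"{env.department}: {env.model_name} -> {env.data_source} ({env.network_segment})")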

Continuous Monitoring and Validation
Security can’t stop at deployment. Enable automated, continuous logging and auditing for all model access and inference requests.

Monitor for:

  • Abnormal access patterns or query spikes.
  • Potential data leakage.
  • Model manipulation.
  • Credential and permission drift, especially after staffing changes or third-party projects end.

This ongoing cycle of monitoring, validation, and recalibration helps detect and contain potential breaches early.
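
As one concrete example of the "query spike" check, the sketch below flags an hour in which a caller's request count sits well above their own historical baseline. The window size and threshold are illustrative assumptions; in practice this signal would feed the organization's existing SIEM or monitoring stack rather than stand alone.

    from statistics import mean, stdev

    def is_query_spike(hourly_counts: list[int], current_count: int,
                       min_history: int = 24, sigma: float = 3.0) -> bool:
        """Flag the current hour's request count if it sits well above the
        caller's own historical baseline (a simple z-score-style check)."""
        if len(hourly_counts) < min_history:
            return False  # too little history to judge; rely on static limits instead
        baseline, spread = mean(hourly_counts), stdev(hourly_counts)
        return current_count > baseline + sigma * max(spread, 1.0)

    # Example: a service account that normally sends about 40 requests per hour.
    history = [38, 42, 35, 41, 44, 39, 37, 40, 43, 36, 41, 39,
               42, 38, 40, 37, 44, 41, 39, 43, 36, 40, 42, 38]
    print(is_query_spike(history, current_count=45))   # False: within normal range
    print(is_query_spike(history, current_count=900))  # True: investigate or throttle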

Beyond Compliance: Resilience and Trust

The bottom line is that compliance is a checkpoint, not the finish line. There are no magic checkboxes for lasting security.

To get there, healthcare IT teams should operate under the assumption that technologies, and with them the threat landscape, evolve constantly. Workflows, protections, and safeguards need to keep pace.

Embedding governance across the AI lifecycle, from data ingestion and training to deployment and decommissioning, ensures accountability by design.

AI models are now part of the healthcare digital infrastructure. Securing them on all fronts is fundamental to the reliability, compliance, and resilience of modern health systems and the continued trust that patients and partners place in them.