A New Era, Few Guardrails: Strategies for Healthcare Leaders to Mitigate AI Risks Today

By Andrew Mahler, JD, CIPP/US, AIGP, CHC, CHPC, CHRC, Vice President of Privacy, Compliance, and Audit Services, Clearwater Security
LinkedIn: Andrew Mahler
LinkedIn: Clearwater

The Perils of Unchecked AI Adoption in Healthcare

Imagine a large health system implementing an advanced AI-powered imaging tool designed to assist radiologists in identifying abnormalities in chest CT scans. The AI vendor’s marketing materials include data demonstrating faster turnaround times and reduced error rates, promising enhanced efficiency and accuracy. Initially, the tool performs well, aiding in the detection of routine conditions such as pneumonia. However, over time, radiologists begin to observe troubling inconsistencies. The AI reliably flags common issues but repeatedly overlooks subtle early-stage pulmonary nodules, particularly in patients with complex medical histories, including older adults or those with pre-existing pulmonary conditions.

Continuing with our thought experiment, assume a subsequent internal audit uncovers the root cause: the model’s training data underrepresented diverse patient populations, excluding key demographics like elderly individuals. As a result, the AI struggles to recognize nuanced indicators outside its “norm.” Despite the tool functioning within its design parameters, trust in the tool and providers’ deference to its “advanced” capabilities allowed these gaps to persist undetected, ultimately compromising patient safety and exposing the system to potential liability.

Although this scenario is a thought experiment, it reflects real-world challenges faced by healthcare providers and leaders amid the rapid integration of artificial intelligence. Organizations, often eager to leverage new tools and technology for operational improvements, can easily assume that these technologies will deliver untapped efficiencies across patient cohorts and clinical settings. Yet, as this example illustrates, such assumptions can introduce unintended risks, underscoring the critical need for proactive governance in an era where innovation outpaces regulatory frameworks.

The Rapid Surge of AI Adoption Amid Regulatory Lag

Healthcare organizations have been experimenting with machine learning and predictive analytics for years, but the advent of generative AI tools like ChatGPT catapulted AI into the mainstream, making it tangible and integral to daily operations. From automating administrative tasks to supporting clinical decision-making, AI is now embedded in electronic health records (EHRs), diagnostic imaging, and personalized treatment planning.

The accelerated interest in – and use of – AI has not yet been matched by corresponding regulatory requirements, leaving compliance professionals in the precarious position of navigating uncharted territory without adequate training, guidance, or even settled best practices. The result is a governance and training vacuum: as AI adoption skyrockets, organizations are left with underdeveloped (or non-existent) structures for oversight and validation, amplifying risks related to data privacy, algorithmic bias, and clinical efficacy.

The Fragmented and Politicized Nature of AI Regulation

The U.S. regulatory landscape for AI in healthcare remains fragmented, influenced heavily by political shifts and competing priorities. During the Biden administration, Executive Order (EO) 14110 (issued October 2023) emphasized the safe, secure, and trustworthy development of AI, directing agencies like the Department of Health and Human Services (HHS) to establish AI assurance policies, safety programs, and strategies to combat discrimination in AI-driven healthcare tools. This was complemented by efforts to align AI with federal nondiscrimination laws and promote ethical innovation.

In contrast, the Trump administration’s EO on “Removing Barriers to American Leadership in Artificial Intelligence,” issued in January 2025, revokes or modifies aspects of prior orders, prioritizing deregulation to foster AI innovation and economic competitiveness. This shift is evident in the White House’s “Winning the AI Race: America’s AI Action Plan,” released in July 2025, which focuses on reducing regulatory hurdles while encouraging public-private partnerships. At the federal level, Congress has yet to enact comprehensive AI legislation, leaving a void filled by agency-specific actions. For instance, the Food and Drug Administration (FDA) regulates AI as Software as a Medical Device (SaMD) under its 2025 updates, emphasizing premarket review and postmarket surveillance for high-risk applications. The Federal Trade Commission (FTC) addresses AI through antitrust and consumer protection lenses, while the Centers for Medicare & Medicaid Services (CMS) integrates AI considerations into reimbursement policies.

State-level initiatives add further complexity, creating a patchwork of requirements. California, Colorado, and Utah were early adopters, with laws mandating transparency in AI decision-making and bias mitigation. By mid-2025, additional states, including Nebraska, Arizona, Maryland, and Texas, had enacted legislation targeting AI in healthcare utilization management and prior authorization, requiring human oversight for claim denials and disclosures for AI-driven interactions. For example, Nebraska’s LB 77, effective in 2026, mandates clinical review of AI-assisted insurance decisions to prevent automated rejections that could delay care. Similar bills in nearly two dozen other states underscore growing concerns over AI’s role in patient-facing processes.

While the U.S. contends with this patchwork approach, the European Union has established a more unified and stringent framework through the EU AI Act, which took effect August 1, 2024 with phased implementation, banning unacceptable-risk AI systems and imposing obligations on general-purpose AI models. High-risk AI applications in healthcare, such as diagnostic and predictive tools, face rigorous requirements including conformity assessments, transparency, human oversight, and alignment with GDPR for data protection. Globally, AI regulations in healthcare are proliferating, and common principles of risk management and ethics are emerging across jurisdictions: the UK’s AI Safety Institute promotes similar standards, China’s rules emphasize state oversight of generative AI, and international efforts like the WHO’s ethics guidelines and UNESCO’s recommendations advocate for equitable, human-centered AI deployment.

This “Wild West” environment – together with these international developments – highlights the need for U.S. healthcare leaders to weigh cross-border and cross-state implications, anticipate potential conflicts, and adopt emerging best practices.

Challenges in Defining AI Within Healthcare Contexts

Defining “AI” in healthcare is deceptively complex, varying by context and application. For a small clinic, AI might manifest as an inexpensive natural language processing tool for documentation assistance. In contrast, an academic medical center could deploy sophisticated predictive analytics platforms trained on vast longitudinal datasets for population health management.

This definitional ambiguity complicates governance, as each federal agency applies its own standards: the FDA focuses on AI’s intended use and risk classification in medical devices; the FTC emphasizes consumer harms like deceptive practices; and CMS prioritizes AI’s impact on value-based care and equity. Without a unified taxonomy, compliance teams may overlook essential safeguards, such as informed consent protocols, rigorous validation testing, or periodic equity audits, heightening the risk of misapplication, regulatory non-compliance, and broader business harm.

Limitations of HIPAA in Addressing AI-Specific Risks

The Health Insurance Portability and Accountability Act (HIPAA) provides foundational protections for protected health information (PHI), enforcing privacy and security rules. However, HIPAA falls short in regulating core AI elements, including model training methodologies, algorithmic transparency, and fairness assessments.

This regulatory gap is particularly acute given AI’s dynamic nature. Many systems rely on expansive datasets that may inadvertently retain or repurpose PHI, evolve through continuous learning, or introduce biases from underrepresented training data. Without explicit guidance or standards, healthcare entities risk violations related to data minimization, accountability, and equitable outcomes.

The Intersection of Privacy and Security in AI Systems

AI blurs traditional boundaries between privacy and security, turning privacy concerns into security vulnerabilities and security vulnerabilities into ethical concerns. In addition, models that retain sensitive training data or adapt based on real-time inputs can become opaque “black boxes,” whose decision pathways are difficult to monitor, let alone audit.

Some hidden risks can include:

  • Traceability Issues: inability to reconstruct AI-driven decisions, complicating audits and legal defenses.
  • Data Provenance: uncertainty about training data sources, potentially exposing organizations to breaches or misuse claims.
  • Equity Impacts: biased outputs disproportionately affecting vulnerable (or specific) populations, inviting scrutiny under civil rights laws.

Addressing these requires a holistic approach, integrating technical safeguards with ethical reviews.
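
For readers who want a concrete picture of what such a technical safeguard might look like, the following is a minimal sketch of a per-decision audit record that supports traceability and data provenance. It is written in Python purely for illustration; the class, field, and file names are hypothetical, and a production system would write to a secured, access-controlled store rather than a local file.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One audit-log entry per AI-assisted decision (illustrative schema)."""
    model_name: str               # e.g., "chest-ct-nodule-detector"
    model_version: str            # vendor or internal version identifier
    training_data_reference: str  # provenance pointer, not the data itself
    input_summary: str            # de-identified description of the input
    model_output: str             # what the model flagged or recommended
    human_reviewer: str           # who reviewed or overrode the output
    final_action: str             # what was actually done for the patient
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line so decisions can be reconstructed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Even a record this simple gives auditors, counsel, and equity reviewers a trail linking each output to a model version, its claimed training provenance, and the human who acted on it.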

Why “Compliance Minimums” Fall Short in AI Governance

Meeting baseline regulatory requirements today does not guarantee future resilience, as AI systems are prone to “drift”—gradual performance degradation due to changing data patterns or unannounced vendor updates. What often begins as a supportive tool for documentation can quickly morph into an autonomous decision-maker, introducing unforeseen risks.

As healthcare leaders, advocates, and strategists, we cannot afford to await comprehensive mandates or requirements. Proactive measures are essential to safeguard patients, mitigate risks, and align AI with our organizational missions.

Ten Actionable Steps to Reduce AI Risks Today

To navigate this uncertain terrain, healthcare organizations should consider implementing a multifaceted strategy combining education, transparency, and oversight. Below are ten practical steps, grounded in established frameworks, such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework:

  1. Educate Your Team: train clinicians, administrative staff, and IT teams on AI capabilities, limitations, and alignment with security, compliance, and ethical standards. Incorporate case studies to illustrate real-world pitfalls.
  2. Promote Patient Transparency: ensure your patients understand how AI is used in their care. Consider whether informed consent is needed, and clearly explain AI tools’ functions, data usage, and potential risks. Develop patient-facing materials compliant with state disclosure laws.
  3. Map Your Systems: inventory all AI systems in use, documenting their locations, functionalities, data flows, and accountable parties. Use this inventory to identify high-risk applications (see the sketch following this list).
  4. Vet Your Vendors: require vendors to disclose algorithmic logic, privacy protocols, bias mitigation strategies, and validation methods. Consider embedding these requirements into business associate agreements (BAAs) and conduct regular audits/assessments.
  5. Risk-Stratify AI Applications: classify tools based on potential harm to patients (e.g., low-risk administrative vs. high-risk diagnostic), assigning proportional levels of oversight in line with guidance such as the NIST AI Risk Management Framework.
  6. Implement Layered Safeguards: consider combining organizational policies, pre-approval workflows, and continuous monitoring. Integrate tool-specific checks, such as real-world performance evaluations, to detect drift early.
  7. Align AI with Strategic Objectives: ensure AI initiatives support clinical goals such as reducing provider burnout, enhancing outcomes, or addressing care disparities. Prioritize evidence-based tools with demonstrated ROI.
  8. Check for Drift: establish routine assessments using benchmarks for accuracy, fairness, and reliability, and leverage recommended safety programs to track equity metrics over time (a simple drift check appears in the sketch following this list).
  9. Foster Cross-Functional Governance: assemble oversight committees comprising compliance, clinical, legal, security, and ethics experts. Avoid silos by integrating AI reviews into enterprise risk management processes.
  10. Adopt Proven Frameworks: draw from resources such as the NIST AI Risk Management Framework and the EU AI Act to build internal policies. If in-house expertise is lacking, engage specialists versed in AI compliance to tailor implementation.
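
To make steps 3, 5, and 8 more concrete, the sketch below shows one way an organization might record an inventory entry for an AI tool, assign a coarse risk tier, and flag metric drift against an agreed baseline. It is a minimal illustration, assuming a simple in-house register and placeholder thresholds; the class, function, and metric names are hypothetical and are not drawn from the NIST AI Risk Management Framework or any statute.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in the AI inventory (step 3); all fields are illustrative."""
    name: str
    function: str            # e.g., "flag abnormalities on chest CT"
    data_flows: str          # where PHI enters and leaves the tool
    owner: str               # accountable clinical or operational leader
    patient_facing: bool
    informs_clinical_care: bool

def risk_tier(entry: AISystemEntry) -> str:
    """Coarse stratification (step 5): greater potential patient harm, higher tier."""
    if entry.informs_clinical_care:
        return "high"    # diagnostic or treatment-influencing tools
    if entry.patient_facing:
        return "medium"  # chatbots, portals, patient communications
    return "low"         # back-office and administrative automation

def drift_alert(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Flag metrics (step 8) that have degraded beyond an agreed tolerance.

    Both dictionaries map metric names (accuracy, sensitivity by subgroup,
    etc.) to scores between 0 and 1; the 5% tolerance is a placeholder, not
    a recommendation.
    """
    return [
        metric
        for metric, base in baseline.items()
        if base - current.get(metric, 0.0) > tolerance
    ]

# Example: the imaging tool from the opening scenario
imaging_tool = AISystemEntry(
    name="ChestCT-Assist",
    function="flag abnormalities on chest CT",
    data_flows="PACS -> vendor cloud -> radiologist worklist",
    owner="Chair of Radiology",
    patient_facing=False,
    informs_clinical_care=True,
)
print(risk_tier(imaging_tool))                           # "high"
print(drift_alert({"sensitivity_age_65_plus": 0.92},
                  {"sensitivity_age_65_plus": 0.81}))    # ["sensitivity_age_65_plus"]
```

In practice, the metrics, tiers, and tolerance would come out of the organization’s own risk assessments and clinical validation work, and any drift alert would route to the cross-functional governance committee described in step 9.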

Conclusion: Proactive Leadership in an Uncertain Future

As AI transforms healthcare, leaders must establish and implement robust boundaries now, rather than reacting to regulatory enforcement or adverse events. By prioritizing ethical, transparent, and risk-aware deployment, organizations can harness AI’s benefits while protecting patients and minimizing risks. In the current gray zone of regulation, foresight and collaboration are key—consulting with experts in AI regulatory compliance can provide the tailored guidance needed to thrive.