The New Regulatory Whack-a-Mole: AI Compliance

By Leigh Burchell, Vice President of Policy and Government Affairs, Altera Digital Health

Artificial intelligence (AI) presents the potential to rapidly reshape healthcare, offering opportunities to assist clinicians, empower patients, streamline operations and reduce system costs. Yet, as this transformative potential expands, so does interest in—and in some cases, concern about—the use of AI among lawmakers and regulators.

Healthcare is one of the most highly regulated industries, with good reason from a safety perspective alone. However, each added regulation creates more challenges for provider organizations struggling to keep up, particularly smaller ones, as well as for their technology partners.

In one survey of risk and legal leaders from healthcare and biotech organizations, 60% of respondents said that developing an AI compliance governance structure is proving difficult.

A top source of compliance burdens? The patchwork that is the U.S. healthcare system’s approach to AI regulation.

Compliance contradictions

As those who have been in the health IT industry since before the HITECH Act can attest, variability in technology regulations across state lines inevitably forces unnecessarily high resource allocation toward duplicative or opposing requirements. That distracts from innovation and creates confusion for provider organizations that deliver care in multiple jurisdictions.

With those lessons learned, the health IT industry has advocated in recent years for federal frameworks to address rapidly evolving challenges touching healthcare technologies, and that message has recently been extended to AI requirements and oversight. Everyone in the industry benefits when we avoid multiple inconsistent, and sometimes outright conflicting, rules across the country. Unfortunately, the volume of recently passed and still-in-process state-level AI laws means exactly that kind of fragmentation is taking shape.

Currently, in the absence of strong guardrails from Congress or the Administration, states are taking very different approaches in the AI-related laws they are considering or have already passed. For example, Colorado has signaled a comprehensive approach toward all AI (not just health-related) that could categorize many or most healthcare-related AI systems as “high-risk” and impose wide-reaching documentation and disclosure requirements. In contrast, Utah’s lighter regulatory stance primarily emphasizes transparency through required disclosures to patients, while California also focuses on transparency but on different data and in different ways than Utah. For health IT companies considering development of new AI functionalities, or providers delivering care in states with different requirements, that inconsistency will mean, at a minimum, the need for more compliance resources; at most, it may cause hesitation about embracing AI innovations at all.

Doable directives

As mentioned, a more predictable and less resource-intensive alternative to varying state AI laws would be a federally managed AI regulatory framework. We have learned through experience with the federal oversight of health IT software in recent decades that a single, consistent approach provides industry with the benefit of clear guidance, reduces legal ambiguity and enables EHR vendors to focus more directly on advancing patient care technologies.

Clear, unified federal regulations rooted firmly in ethical principles that promote responsible AI use would also help health IT organizations better serve provider organizations. Developers could put more time and resources into building solutions that take aim at longstanding industry challenges, from EHR-induced cognitive burdens to revenue cycle inefficiencies.

A centralized governance approach to AI would simplify compliance for healthcare organizations doing business across the country as well, especially in light of increasing industry consolidation. Among rural providers, for instance, 17% of unprofitable hospitals merged with organizations outside their own geography in an effort to find a sustainable business model. If faced with less regulatory burden stemming from variant requirements, healthcare organizations could instead reallocate compliance funds to other priorities, such as launching new service lines or expanding community outreach to address patient factors specific to the geographies in which they provide care.

AI alignment

Across stakeholder groups, there is growing recognition of the need to foster AI technologies that significantly enhance healthcare delivery while ensuring that the industry responsibly manages risks. By approaching regulation with collaboration, adaptability and, most importantly, consistency, we can collectively create an environment in which ethical, effective innovation in artificial intelligence can flourish safely under a predictable national framework.