Making Healthcare AI More Actionable, Ethical, and Patient-Accepted with the TRUSTED Framework

By Patrick Higley, Vice President, AVIA’s Center for Operational Transformation
LinkedIn: Patrick Higley
LinkedIn: AVIA

Healthcare AI holds enormous potential to transform the patient experience, improve efficiency, and drive better outcomes. But achieving that potential will require not just navigating a range of clinical, operational, and logistical challenges, but also ensuring that the technology is implemented safely, responsibly, and fairly. Almost as critically, as healthcare AI becomes a fact of life, patients will need to be educated about its nature and applications and reassured that it is trustworthy, reliable, and useful.

Part of the solution to these challenges must be the adoption of norms for the development and use of healthcare AI that can be publicly attested and adhered to. But because AI is a new and still-emerging technology, there is not yet clear regulatory guidance on the full breadth of acceptable applications, unlike most other clinical technologies. This makes the role of widely recognized industry standards absolutely critical.

To this end, the AVIA Generative AI Collaborative—a strategic collaborative formed in 2023 and consisting of 30 health systems and partner organizations like the American Hospital Association—has developed the TRUSTED Framework. The outcome of a months-long effort by the participating health systems and partner organizations, TRUSTED is a comprehensive roadmap designed to ensure the ethical and responsible use of AI in healthcare, synthesizing the key components of a variety of clinical and ethical AI frameworks. The framework makes it possible for provider organizations’ governance bodies and committees to set high-level guidance on many aspects of healthcare AI and to ensure its safe, ethical, and effective use in lieu of regulatory clarity.

Why we created the TRUSTED framework

The TRUSTED framework was developed to help health system leaders across a multitude of disciplines align on the most important elements of responsible AI development and deployment. Drawing on cross-industry frameworks from the National Institute of Standards and Technology (NIST) and healthcare-specific frameworks from the Coalition for Health AI (CHAI), the TRUSTED framework is meant to be specific to healthcare providers, supplying the grounding and guidelines health systems need as they move to act on the opportunity of AI.

The framework is necessary not just to ensure the successful and ethical deployment of AI within health systems—it is also essential for creating the norms and expectations that will help patients understand and feel comfortable with the implications of AI for their healthcare. Healthcare AI is a complex and somewhat uncanny technology that is poorly understood even by many people with technical backgrounds. It is not a doctor, nor is it comparable to one: it cannot “know” a patient, it cannot be seen by patients, and its operation is often difficult to understand and explain. These complexities mean that for most patients AI will necessarily remain a “black box,” and they will need to trust their providers to be responsible stewards of it.

How TRUSTED drives consensus—and results

Health systems cannot afford to wait for a legal or regulatory consensus to emerge around healthcare AI. Foundational work—both for the broader technology and for health systems’ ability to adopt it—is already underway, which means that a set of immediately workable guidelines is long overdue.

At the heart of the TRUSTED framework is a set of shared values that aligns with the best aspects of traditional healthcare: a declaration that healthcare AI must be Trustworthy, Responsible, User-Centric, Secure & Safe, Transparent, Equitable, and Dependable. That is to say, it must meet the same high standards as other forms of healthcare, and its development must likewise be guided by these principles. Patients must feel secure in the knowledge that they will be treated fairly and equitably by healthcare AI, that their data will be handled safely and securely, and that the technology will act in their best interests. These and a wealth of other key considerations must guide its development if it is to be successful.

For instance, a health system operating within the TRUSTED framework will be committed to testing AI models across multiple geographically and demographically diverse data sets, ensuring that a solution’s actual outcomes match its intended targets and do not widen outcome inequalities. The framework can also guide health systems in setting critical data and security policies that ensure sensitive patient data is handled responsibly, and in steering efforts toward the initiatives that offer the best outcomes for patients, not just for the health system itself.
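To make this concrete, here is a minimal sketch of what subgroup testing might look like in practice. It is purely illustrative and not part of the TRUSTED framework itself: it assumes a hypothetical binary-classification model whose predictions and true outcomes are already available for a demographically labeled test set, and it simply compares one performance metric (sensitivity) across groups to flag disparities worth reviewing before deployment.

    # Illustrative sketch only -- not part of the TRUSTED framework.
    # Assumes a hypothetical binary classifier whose predictions have already
    # been generated for a demographically labeled test set.
    from collections import defaultdict

    def sensitivity_by_group(y_true, y_pred, groups):
        """Return per-group sensitivity (true positive rate)."""
        counts = defaultdict(lambda: {"tp": 0, "fn": 0})
        for truth, pred, group in zip(y_true, y_pred, groups):
            if truth == 1:
                counts[group]["tp" if pred == 1 else "fn"] += 1
        return {g: c["tp"] / (c["tp"] + c["fn"]) for g, c in counts.items()}

    def flag_disparities(per_group, max_gap=0.05):
        """Flag groups whose sensitivity trails the best-served group by more than max_gap."""
        best = max(per_group.values())
        return {g: s for g, s in per_group.items() if best - s > max_gap}

    # Hypothetical test-set results: true outcomes, model predictions, demographic group.
    y_true = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
    y_pred = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
    groups = ["urban", "urban", "urban", "rural", "rural",
              "rural", "urban", "rural", "rural", "rural"]

    per_group = sensitivity_by_group(y_true, y_pred, groups)
    print(per_group)                    # {'urban': 1.0, 'rural': 0.25}
    print(flag_disparities(per_group))  # {'rural': 0.25} -- needs review before deployment

In a real deployment, the same comparison would be run over whichever clinically relevant metrics and demographic dimensions a governance committee has chosen, but the principle is the same: measure performance separately for each group and treat large gaps as a blocker, not a footnote.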

TRUSTED is also a powerful tool for giving healthcare organizations the confidence to partner with third-party solution companies, by providing a north star to guide the development and adoption of new AI solutions and technologies, especially those emerging from outside the traditional healthcare ecosystem.

Towards an equitable, responsible standard for healthcare AI

The emergence of healthcare AI presents both unprecedented opportunities and considerable challenges for the healthcare industry. While its potential to revolutionize patient care and outcomes is immense, realizing these benefits will require a concerted effort to address the manifold clinical, operational, and ethical considerations involved in its development and deployment.

By synthesizing best practices from other frameworks and from the larger industry as a whole, TRUSTED not only provides a roadmap for the ethical and responsible use of AI but also serves as a catalyst for collaboration and innovation in healthcare AI, helping health systems navigate the regulatory landscape with clarity and purpose. By adhering to its principles, organizations can ensure that AI technologies prioritize patient welfare, uphold data privacy and security, and promote equitable healthcare outcomes—and position themselves, and the larger industry, to realize the transformative potential of AI to its fullest and most beneficial extent.