By Heather Bassett, M.D., Chief Medical Officer, Xsolis
Host of AI Amplified
Our patients can’t afford to wait on officials in Washington, DC, to offer guidance around responsible applications of AI in the healthcare industry. The healthcare community needs to stand up and continue to put guardrails in place so we can roll out AI responsibly in order to maximize its evolving potential.
Responsible AI, for example, should include reducing bias in access to and authorization of care, protecting patient data, and making sure that outputs are continually monitored.
With the heightened need for industry-specific regulations to come from the bottom up — as opposed to the top down — let’s take a closer look at the AI best practices currently dominating conversations among key stakeholders in healthcare.
Responsible AI without squashing innovation
How can healthcare institutions and their tech industry partners continue innovating for the benefit of patients? That must be the question guiding the innovators moving AI forward. At the basic level of security and legal compliance, it means companies developing AI technologies for payers and providers must understand HIPAA requirements. De-identifying any data that can be linked back to patients is an essential component of any protocol that involves data sharing.
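To make the idea concrete, here is a minimal sketch of what one de-identification step can look like. This is illustrative only — the field names are hypothetical, and a compliant pipeline must address all 18 identifier categories HIPAA's Safe Harbor method enumerates, not just the handful shown here.

```python
import hashlib

# Hypothetical field names. HIPAA Safe Harbor enumerates 18 identifier
# categories (45 CFR 164.514(b)(2)) that a real pipeline must address.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address"}

def deidentify(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers removed,
    dates truncated to year, and a salted hash as a re-link key."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in out:              # keep only the year
        out["birth_year"] = out.pop("birth_date")[:4]
    if "zip" in out:                     # keep only the 3-digit prefix
        out["zip3"] = out.pop("zip")[:3]
    # A salted hash lets the data holder re-link records internally;
    # recipients of the shared data cannot reverse it to a patient.
    out["patient_key"] = hashlib.sha256(
        (salt + record["mrn"]).encode()).hexdigest()[:16]
    return out

record = {"mrn": "12345", "name": "Jane Doe", "birth_date": "1980-06-01",
          "zip": "37212", "lab": "A1c 7.2%"}
clean = deidentify(record, salt="demo-salt")
```

The clinical content (the lab value) survives while the direct identifiers do not — which is the point of the protocol.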
Beyond the many regulations that already apply to the healthcare industry, innovators must be sensitive to the consensus forming around the definition of “responsible AI use.” Too many rules around which technologies to pursue, and how, could potentially slow innovation. Too few rules can yield ethical nightmares.
Stakeholders on the tech and healthcare sides of the industry will offer different perspectives on how to balance risks and benefits. Each can contribute a valuable perspective on reducing bias within the populations they serve, while being careful to listen to concerns from populations not represented in high-level discussions.
The most pervasive pain point being targeted by AI innovators
Rampant clinician burnout has persisted within hospitals and health systems for years. In 2024, a national survey revealed that the physician burnout rate dipped below 50 percent for the first time since the COVID-19 pandemic. The American Medical Association’s “Joy in Medicine” program, now in its sixth year, is one of many efforts to combat the causes of physician burnout — lack of work/life balance, the burden of bureaucratic tasks, and more — by providing guidelines for health system leaders interested in implementing programs and policies that actively support well-being.
To that end, ambient-listening AI tools in the office are helping save time by transforming conversations between provider and patient into clinical notes that can be added to electronic health records. Previously, notes had to be taken manually during the appointment, reducing the quality of face-to-face time between provider and patient, or after appointments during a physician’s “free time,” when the information gleaned from the patient was no longer front of mind.
Other AI tools can help combat the second-order effects of burnout. Even with the critical information needed to recommend a diagnostic test sitting in the patient’s electronic health record (EHR), a doctor might not think to order it. AI tools can scan an EHR — prior visit information, lab results — analyze potentially large volumes of information, and make recommendations based on the available data. In this way the AI reader acts as a second pair of eyes, interpreting a lab result, or a year’s worth of lab results, for something the physician might have missed.
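The "second pair of eyes" idea can be sketched with a deliberately trivial rules pass over a lab history — flagging out-of-range results and large swings between consecutive results. Real clinical AI models are far more sophisticated; the field names, reference ranges, and 20-percent threshold below are assumptions for illustration only.

```python
# Illustrative reference ranges (assumed, not clinical guidance).
REFERENCE_RANGES = {"a1c": (4.0, 5.6), "egfr": (60.0, 120.0)}

def flag_labs(history: list[dict]) -> list[str]:
    """history: chronological lab results, e.g. {"test": "a1c", "value": 6.1}.
    Returns human-readable alerts for out-of-range values and >20% swings."""
    flags = []
    latest = {}  # most recent value seen per test
    for lab in history:
        lo, hi = REFERENCE_RANGES[lab["test"]]
        if not lo <= lab["value"] <= hi:
            flags.append(f'{lab["test"]} out of range: {lab["value"]}')
        prev = latest.get(lab["test"])
        if prev is not None and abs(lab["value"] - prev) / prev > 0.2:
            flags.append(f'{lab["test"]} changed >20% since last result')
        latest[lab["test"]] = lab["value"]
    return flags

year_of_labs = [{"test": "a1c", "value": 5.4}, {"test": "egfr", "value": 95.0},
                {"test": "a1c", "value": 6.8}, {"test": "egfr", "value": 88.0}]
alerts = flag_labs(year_of_labs)
```

Here the rising A1c gets surfaced twice — once for leaving its reference range and once for the jump from the prior result — the kind of pattern a busy physician reviewing one visit at a time could miss.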
AI can streamline administrative tasks outside the clinical setting as well, saving burned-out healthcare workers (namely, revenue cycle managers) time and bandwidth.
Private-sector vs. public-sector transparency
How can we trust whether an institution is disclosing how it uses AI when the federal government doesn’t require it to? This is where organizations like CHAI (the Coalition for Health AI) come in. Its membership is composed of a variety of healthcare industry stakeholders who are promoting transparency and open-source documentation of actual AI use-cases in healthcare settings.
Healthcare is not the only industry facing the question of how to foster public trust in how it uses AI. In general, the key question is whether there’s a human in the loop when an AI-influenced process affects a human. It ought to be easy for consumers to interrogate that to their own satisfaction. For its part, CHAI has developed an “applied model card” — like a fact sheet that acts as a nutrition label for an AI model. Making these facts more readily available can only further the goal of fostering both clinician and patient trust.
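To illustrate the "nutrition label" idea, here is a sketch of the kinds of facts such a card might surface. The fields below are assumptions for illustration — they are not CHAI's official applied model card schema, and the model named is hypothetical.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative fields only -- not CHAI's official applied model card schema.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    out_of_scope_uses: str
    training_data: str
    known_limitations: str
    human_in_the_loop: bool

card = ModelCard(
    name="readmission-risk-v2",  # hypothetical model
    intended_use="Flag patients for care-manager follow-up",
    out_of_scope_uses="Coverage or denial decisions",
    training_data="De-identified inpatient encounters, 2019-2023",
    known_limitations="Under-represents rural populations",
    human_in_the_loop=True,
)
card_json = json.dumps(asdict(card), indent=2)
```

Publishing a plain-language version of exactly these facts — what the model is for, what it must not be used for, what it was trained on, and who stays in the loop — is what lets a clinician or patient interrogate an AI deployment to their own satisfaction.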
Individual states have their own AI regulations. Most exist to curb profiling — the use of the technology to sort people into categories to make it easier to sell them products or services, or to make hiring, insurance coverage, and other business decisions about them. In December, California passed a law that prohibits insurance companies from relying solely on AI to deny healthcare coverage. It effectively requires a human in the loop (“a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand”) whenever a denial decision is made.
By making their AI use transparent — following evolving recommendations on how we define and communicate transparency, and showing end users and patients alike how data is protected — vendors, hospitals, and health systems have nothing to lose and plenty to gain.
Check out our newest podcast, AI Amplified: Healthcare AI Discussions with Dr. Heather Bassett
AI is here. It’s real. And it’s making a major impact. If it seems like artificial intelligence is everywhere, you’re not wrong. Dr. Heather Bassett, Chief Medical Officer at Xsolis, is at the forefront of innovation, and she cuts through the noise as she speaks with industry leaders making a real impact in the world of healthcare AI. AI Amplified focuses on the amazing innovations in AI, the challenges the industry is facing, lessons learned, and how to ensure future success — bringing the joy back to medicine.