The debate is real, and so is AI. But how will we use AI responsibly in healthcare innovation? Will the debate suck all the air out of the room and stifle innovation? No stone will be left unturned with AI in healthcare in 2024. Here is what thought leaders are saying about it.
And join us for the next few weeks as we look at what we might see in 2024.
In 2024, AI is set to continue revolutionizing healthcare, bringing unparalleled advancements. The evolution of AI in the sector will see widespread use of virtual assistants, chatbots, and predictive analytics tools, empowering healthcare providers with informed decision-making and driving down costs. However, this transformative force also presents ethical challenges, including the risk of biased algorithms, privacy concerns, and the potential displacement of human healthcare providers. Accountability and responsibility mechanisms must be built into the development of any generative AI solution as we move toward a future where patient-centered AI guarantees accessible, affordable, and efficient healthcare for all.
2024 will be another year of choppy waters for most health systems and healthcare providers. I expect financial performance to gradually improve as inflation slows, but it seems unlikely that margins will return to pre-pandemic levels anytime soon. Savvy health systems will aggressively challenge existing cost structures, particularly labor expenses, and find ways to reduce costs and gain long-term financial certainty in as many administrative and clinical areas as possible.
AI will remain a hot topic. I predict 2024 will be the year many health systems closely examine their data strategies and make investments to prepare a solid foundation for using AI in the future.
Implications of AI for healthcare in 2024 and beyond
It’s a foregone conclusion that AI will be adopted for the benefit of healthcare. We’ve seen healthy skepticism in specific healthcare sectors because of AI’s history, but now we’re seeing equally healthy optimism in people looking to solve complex problems. AI isn’t a silver bullet, and it won’t do people’s jobs for them, but it is crucial for enabling the humans behind the healthcare system to be better at what they do, deliver better patient experiences, and achieve better patient outcomes.
Generative AI, like ChatGPT, has shown people what’s possible and given AI the exposure it needed to overcome the hurdle of adoption in healthcare. Concerns remain about how the models are trained and how AI reaches its conclusions, and we must be careful of both. Adoption of AI is now driven by more than the CIO; innovation leads are pushing for AI as part of their regular programs. We’re seeing use cases codified outside the IT department. There are plenty of use cases that AI solves that have actual, real utility to the organization, which is a net positive. Both feasibility and value are there.
The ethics of AI
It’s important for those of us who build AI to take ethics into consideration. We need to install the guardrails to ensure an equitable process for training new models, defined processes for how we iterate and documentation for how we process feedback. We’re in a state of formation and pioneering what things should look like — and it’s a race. There’s always potential for bad things to happen when you’re moving fast, so collectively we need guidance in specific areas for AI that will help steer future developments. That won’t come to fruition in 2024, but we’ll see more norming and well-formed use cases and ideas.
After a year of buzz, healthcare leaders are exploring practical applications of generative AI to improve operational performance. Adoption of digital solutions powered by generative AI is accelerating as leaders understand how to manage the source documents from which AI draws content and erect guardrails to ensure accuracy and security. The best of these technologies also leverage conversational AI, which uses machine learning and natural language processing to ensure communications are conducted in context and are therefore easier to understand. The net result is a set of tools that can help relieve the administrative burden on staff by automating workflows and providing patients with self-service options that make it easier to schedule appointments as well as prepare for care and procedures.
As we move into 2024, we will continue to see more discussions about AI’s role in healthcare, which is one reason I was delighted to see President Biden’s executive order establishing new standards for AI safety and security, and guidance on patient safety and health equity. Transparency in how these systems operate is also important. It’s important to remember that this technology needs to learn to walk before we run head-on into adoption in healthcare. Everyone is looking for a new shiny object that will assist in driving better healthcare, but I think these AI technologies have a lot of learning to do before that can happen.
The recent strike among Kaiser Permanente healthcare workers, which resulted in a groundbreaking deal to integrate AI into clinician workflows, highlights the profound shift happening across the healthcare industry. The pandemic rightly directed attention towards patient care, but with provider burnout at an all-time high, health systems are increasingly prioritizing back-office technologies tailored to support clinicians. Emerging technologies like AI and machine learning are helping health systems alleviate burnout by assisting with diagnosis and reducing administrative burden. We’ll continue to see the development and implementation of tools designed to improve the provider experience, reduce burnout risks, and cut care delivery costs, all while enhancing patient outcomes.
Looking to 2024, I predict there will be increased action and focus to ensure AI-based solutions are built responsibly. AI tools must meet the quadruple aim of healthcare – improving outcomes, reducing the cost to deliver care, enhancing patient experience, and improving clinician experience. AI solutions that don’t meet this aim will be thwarted, while new, responsible entrants will make considerable progress, with revenue-cycle solutions among the first to undergo this process.
At Ronin, we are committed to developing and delivering safe, equitable, and effective machine learning systems and believe responsible AI use in healthcare demands a three-pronged approach:
- Rigorous model validation to ensure high performance to prevent foreseeable issues.
- Continuous performance monitoring to detect performance degradation and data drift early.
- Rapid issue correction triggered by changes in performance, which involves root-cause analysis, data updates, and model retraining to uphold accuracy.
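As an illustration, the monitoring prong of this approach can be sketched as a simple baseline-versus-recent comparison on a logged performance metric. The window size, tolerance, and metric values below are illustrative assumptions, not Ronin's actual thresholds or pipeline.

```python
# A minimal sketch of continuous performance monitoring: compare the recent
# average of a scalar metric (e.g. accuracy per evaluation window) against
# the long-run baseline, and flag drift when it degrades past a tolerance.
from statistics import mean

def performance_alert(history, window=5, tolerance=0.05):
    """Flag drift when the recent average falls more than `tolerance`
    below the baseline computed from all earlier windows."""
    if len(history) < 2 * window:
        return False  # not enough data to compare yet
    baseline = mean(history[:-window])
    recent = mean(history[-window:])
    return (baseline - recent) > tolerance

# Hypothetical metric histories: one stable, one degrading.
stable = [0.91, 0.90, 0.92, 0.91, 0.90, 0.91, 0.90, 0.92, 0.91, 0.90]
drifting = stable[:5] + [0.84, 0.83, 0.82, 0.81, 0.80]
```

An alert like this would then trigger the third prong: root-cause analysis, data updates, and retraining.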
This strategy has helped our company foster trust between clinician users and our AI-driven platform, and it holds the potential to transform clinical outcomes and patient experiences while reducing healthcare costs. Responsible AI paves the way for a future where technology and human expertise seamlessly collaborate to enhance patient well-being.
As AI continues to be deployed across healthcare settings, responsible development of this technology is crucial. This is particularly true when AI intersects with human life. As we continue to assess AI applications for healthcare, those that will be truly transformative and eagerly adopted by healthcare professionals will need to follow good machine learning practices (GMLP) and comply with the regulations that govern the healthcare industry, to ensure that the medical community uses the technology responsibly and safeguards the health and safety of patients.
There are two main issues that will need to be addressed:
- Clinical validation – there will need to be clear guidelines for clinical validation, as AI-based decision support tools cannot be viewed as pharmaceuticals or medical devices. This is a new category in the health space, and practitioners need to understand and trust the research they are reading, especially as there are no peer-reviewed experts to assist readers, as in other health spaces.
- Technological validation – we need to move away from the “axiom” that AI is a black box and see how tech companies are making it more transparent and explainable. There needs to be more than just a score or a prediction: insights and explanations, in biological terms, of what the prediction relies on, as much as possible. Responsible AI should not merely automate processes but offer clear insights into decision-making.
These principles must shape the broader narrative of AI integration into healthcare. Ethical considerations, transparency, and active engagement in discussions are vital to ensure AI’s potential aligns with societal values, making responsible development of AI in healthcare hinge on a delicate balance between technological innovation and ethical responsibility.
While the industry will continue to assess generative AI’s reliability in clinical settings, we will see steady acceleration of this technology, used by both payers and providers, toward improving member and patient experience. For example, generative AI will help patients and plan members with deeper self-service around key areas like patient access. Human-in-the-loop use cases will expand as well, as genAI empowers more service staff with real-time guidance for answering patient and member questions and recommending next steps. Finally, the technology’s ability to understand large bodies of natural language will help organizations uncover more opportunities and flag potential “hotspots” in the consumer journey by analyzing calls, texts, and chats automatically.
Artificial intelligence solutions in healthcare bring the promise of lowering clinician burnout by performing routine, time-consuming tasks such as documentation, enabling clinicians to work at the tops of their licenses in a less-stressful environment. AI has also demonstrated the potential to efficiently offer complete and thoughtful responses to patient questions. However, providers using AI solutions should always ensure there is a ‘human in the loop’ to provide oversight as the technology continues to mature.
Unlike in other fields where AI can train itself, healthcare AI requires a guided process with humans actively involved in training the technology. There is also no quick fix for the intricacy of healthcare data. However, one proven method of earning health leaders’ trust in AI is to have certified healthcare data experts take on the dual role of performing their tasks while training the AI. As the trainers familiarize themselves with the technology, the technology grows more capable of performing tasks accurately. Although the full training process can take several years to complete, this human-centric approach improves the accuracy and impact of AI in healthcare while also addressing other prevalent issues such as cost and the nursing shortage.
Healthcare organizations are plagued by the burden of false positives in patient record mergers, leading to wasted time and resources for data stewards who are tasked with cleaning up patient files. By leveraging AI-powered automation to reduce false positives to near zero across millions of records, healthcare organizations can empower their data stewards to focus on resolving genuine discrepancies and streamline patient data management, ultimately saving valuable time and money.
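One way such matching logic can keep false-positive merges near zero is to score candidate record pairs across several demographic fields and auto-merge only near-certain matches, routing everything else to data stewards. The field names, weights, and thresholds below are hypothetical illustrations, not any vendor's actual rules.

```python
# A hedged sketch of tiered patient-record matching: weighted field
# agreement produces a score in [0, 1]; a conservative auto-merge cutoff
# keeps false positives low, while mid-range scores go to steward review.

def field_score(a, b):
    """1.0 when both values are present and equal (case-insensitive)."""
    return 1.0 if a and b and a.strip().lower() == b.strip().lower() else 0.0

def match_score(rec_a, rec_b, weights):
    total = sum(weights.values())
    got = sum(w * field_score(rec_a.get(f, ""), rec_b.get(f, ""))
              for f, w in weights.items())
    return got / total

# Illustrative weights: rarer identifiers count for more.
WEIGHTS = {"last_name": 2.0, "dob": 3.0, "ssn_last4": 4.0, "zip": 1.0}

def triage(rec_a, rec_b, auto=0.95, review=0.6):
    s = match_score(rec_a, rec_b, WEIGHTS)
    if s >= auto:
        return "auto-merge"
    if s >= review:
        return "steward-review"
    return "no-match"
```

Under this design stewards only see the genuinely ambiguous pairs, which is the time savings the paragraph above describes.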
While everyone is excited (rightfully so) about the potential for AI in healthcare, I think we need to approach the opportunity carefully. We should not be afraid to embrace AI tools, but we should put in place guardrails to ensure the responsible application and use of these technologies. In 2023, we saw a rush of investment go into AI tools and applications. However, the infrastructural layers for deploying these AI tools are still lacking or under development. We are also still in the early days of understanding which areas of healthcare will benefit most from adopting AI, how front and center AI tools need to be compared to those that operate in the background, and which workflows and processes will and should be replaced with AI.
The industry needs commitment, not hype, related to AI in mental health: As we move into the new year, the industry will continue the hype cycle around the myriad uses of AI in mental health. New data and tools will spark conversations and innovations, but companies will overpromise and underdeliver for the next 6-18 months. To move the needle on mental health care and treatment, the industry needs real and valuable applications, transitioning from the promise of the possible to the delivery of accessible, high-quality, and personalized treatment plans for each patient. Actionable delivery will require understanding patient populations based on high-quality real-world data, forming industry partnerships for bedside decision support, and taking the hype out of what artificial intelligence (AI) and technology tools can (and can’t) do; in other words, AI won’t replace doctors in 2024, but it can aid in earlier, more accurate diagnoses and treatment plans. Truly making a difference in mental health next year will require evolution and long-term commitment; otherwise, we’ll stay in the same hype cycle, and patients will continue to suffer from a lack of access to high-quality mental health care.
We’ve witnessed remarkable strides in AI, particularly in generative AI where text summarization and transformation capabilities have become quite advanced. Other machine learning approaches are showing promising results in the domain of health, where models can now detect some diseases from images or audio. Customized models tailored for specific tasks have become increasingly prevalent, offering targeted solutions to various problems. However, the development of an off-the-shelf model capable of providing insightful analysis for arbitrary data remains a work in progress. This will remain true in 2024. We will see more applications of AI in simple tasks, but training models for complex tasks will remain costly, and real data analysis will still require human insight.
In 2024, EHR Data Will Be Crucial for Advancing Care in Conjunction with Generative AI
It is an exciting moment in time within the healthcare sector as we’re witnessing the accelerating pace at which technology is both developed and adopted. Generative AI is no exception to this ongoing innovation, with new use cases emerging rapidly each day. In 2024, EHR data will play an even larger role in contextualizing care as adoption of AI rapidly increases across the continuum. Namely, EHR data will provide an additional layer of context for providers leveraging AI, aiding in the reduction of rehospitalization rates and accelerating the transition to value-based care. As providers start to realize that generative AI enhanced by EHR data drives improved care outcomes, they will have the relevant insights and tools necessary to take on risks, improving both clinical and financial outcomes.
Artificial intelligence will continue to advance in sophistication and capability, and, in particular, its application in ambient monitoring to enable acute-care providers to improve care. Faced with a longstanding shortage of front-line healthcare workers, provider organizations will invest more heavily in ambient monitoring technology. These solutions will boost care quality and improve clinician decision-making at the bedside as they gather and analyze more patient data.
AI in Cancer Diagnosis: Reshaping the Physician’s Toolkit
AI is a critical addition to the pathologist’s toolbox for diagnosing cancer and its effectiveness is supported by conclusive evidence. Studies have shown pathologists using AI demonstrated greater accuracy, objectivity, and efficiency during diagnosis. Beyond the immediate benefits for clinical decision support, physicians will increasingly integrate AI tools and AI-powered findings for handling tedious, repetitive tasks—such as counting cells, evaluating biomarker expression, measuring features within an image, and deducing statistical insights from large datasets—where machine-learning based algorithms inherently excel. This shift allows physicians to direct their focus toward more intricate cases requiring their utmost experience and skill. While AI cannot replace pathologists, those who integrate AI into their practice are poised to outpace clinicians who resist embracing this technology and persist in using manual tools and processes.
Healthcare data is sensitive and subject to strict regulations, making it challenging to share real patient information for research and development. Especially with AI, developers need to test and fine-tune algorithms without exposing patient records, creating a unique challenge for the healthcare industry. As we move into 2024, it’s critical that we keep in mind that if we plan to fully leverage generative AI and LLM solutions to answer advanced analytical and research questions on cohorts of patients, new and unique privacy-proofing measures will be necessary.
Digital Transformation and AI
Healthcare is a people business, and digital transformation and AI are about pairing the human component of healthcare with groundbreaking technology to provide value at each step and benefit the human experience. From the patient engagement perspective, AI also holds huge promise for improving productivity. The technology has the ability to engage with and shoulder non-clinical workloads for our clinicians, allowing teams to reallocate time to patients.
New developments in Natural Language Processing have created the ability to take large scale repositories of clinical notes and convert them into structured data that can be easily indexed and searched. This enables many different applications for large enterprises such as identification of cohorts in the patient population. This technology should be built in a way that allows each piece of information to be verifiable by the human users of the information produced by these language models. Being verifiable includes direct links to the original data so that each piece of information can be viewed in the context of the original clinical report. This ensures that the data being assessed by human experts are not subject to any “hallucinations” or invented facts, which would make it unusable in a clinical setting. Verifiable output is a key way to ensure that AI can be used responsibly in healthcare.
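A minimal illustration of verifiable output: each extracted fact carries the character span and verbatim text of its evidence, so a reviewer can trace it back to the original note and rule out hallucinated content. The regex extractor below stands in for a real language model, and the drug list, note text, and field names are invented examples.

```python
# Sketch of provenance-linked extraction: structured facts from a clinical
# note each keep a pointer (note id + character offsets) to the exact source
# text, making every datum verifiable by a human reviewer.
import re

def extract_medications(note_id, text):
    """Return structured medication facts, each linked to its evidence span."""
    facts = []
    pattern = r"\b(metformin|lisinopril|atorvastatin)\b\s+(\d+)\s*mg"
    for m in re.finditer(pattern, text, flags=re.IGNORECASE):
        facts.append({
            "note_id": note_id,           # link back to the original report
            "drug": m.group(1).lower(),   # normalized drug name
            "dose_mg": int(m.group(2)),   # structured, searchable dose
            "evidence_span": m.span(),    # character offsets in the note
            "evidence_text": m.group(0),  # verbatim source text
        })
    return facts

note = "Patient continues Metformin 500 mg twice daily; BP stable."
facts = extract_medications("note-001", note)
```

Because the span always reproduces the original text exactly, any fact that cannot be matched back to its note is detectable, which is the safeguard against invented facts described above.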
The AI has to be right: the role of AI in nursing education
In 2024, we will see students and professors continue to experiment with the use of AI in education. Both students and educators are looking for ways to improve the traditional workflows of the classroom. By leveraging AI, faculty can reduce some of the workload of developing lesson plans, test student knowledge more efficiently, and adjust learning accordingly. For students, the proper use of AI can give them access to trusted learning materials in an easier-to-find, digestible, conversational format.
Education companies will act as fast movers since they are already dealing with time-pressed students – who are also savvy consumers – who expect to be engaged and to leverage personalized study resources. For medical and nursing education, the AI has to be right: students must graduate clinically competent and confident, and they cannot learn from content that is not evidence-based, current, and accurate. While there are many discussions and pilots happening currently, 2024 will be the year we see both of these groups push their institutions for real-life implementations.