It seems everyone has jumped on the AI bandwagon in healthcare these days. Behind the hype, however, are fundamental structural and cultural considerations that will either make these efforts successful or doom them to becoming just another industry fad. While some may argue over which elements belong on the list, I’m offering three that aren’t typically discussed.
1. The Product Doesn’t Solve an Addressable and Relevant Clinical Need
The “whiz kids” from Silicon Valley and other tech hubs are working on “blue sky” projects that promise to disrupt every industry. But the question they should be asking in healthcare is not whether the product is cool, novel, or can attract VC funding, but whether physicians actually need it, want it, or, perhaps most importantly, will use it. The most overlooked aspect is whether physicians can access the AI as part of their existing workflow.
While there is nothing wrong with thinking big and bold, many of the challenges in healthcare are simpler, pedestrian issues. For example, one major health system executive said to me recently, “This hotshot AI is all well and good but we need something to solve a more basic issue. Our hospital needs a machine learning system that can process thousands of images and categorize by body part.”
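The executive’s “basic issue” is a good example of how pedestrian these needs can be: much of the sorting can start from metadata before any learned model touches pixel data. A minimal sketch of the idea, using plain dictionaries in place of parsed DICOM headers (the BodyPartExamined attribute is a real DICOM tag, (0018,0015); the sample records and function name here are hypothetical):

```python
from collections import defaultdict

# Hypothetical study metadata; in practice these fields would come from
# DICOM headers (BodyPartExamined, tag (0018,0015)) parsed with a
# library such as pydicom.
studies = [
    {"study_id": "S001", "BodyPartExamined": "CHEST"},
    {"study_id": "S002", "BodyPartExamined": "BRAIN"},
    {"study_id": "S003", "BodyPartExamined": "CHEST"},
    {"study_id": "S004", "BodyPartExamined": ""},  # missing tag: common in practice
]

def categorize_by_body_part(records):
    """Group study records by the BodyPartExamined tag.

    Studies with a missing or empty tag fall into an UNKNOWN bucket --
    these are the cases where a machine learning classifier run on the
    image itself would actually earn its keep.
    """
    groups = defaultdict(list)
    for rec in records:
        part = rec.get("BodyPartExamined") or "UNKNOWN"
        groups[part].append(rec["study_id"])
    return dict(groups)
```

The point of the sketch is the UNKNOWN bucket: in real archives the tag is frequently absent or wrong, which is exactly why the executive wants a model that can categorize from the images themselves.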
Another challenge I often hear is that the AI is so narrowly focused that it’s not relevant to their facility. For example, the product may only be relevant to a particular tumor type using an image that was captured by a specific manufacturer using a particular imaging software. That type of AI is not broad enough to adopt on a larger scale.
Lastly, a practical complaint about AI is that the tech companies aren’t in tune with the fact that any new product has to work within a clinician’s workflow. Most tech companies haven’t worked with hospitals previously and don’t fully realize that physicians aren’t going to change their workflow because a vendor created an algorithm. Even if the product has been approved by the FDA, it doesn’t mean physicians are going to use it. History is littered with great healthcare solutions that were never adopted broadly.
2. The Data Used to Create the AI Is Not Heterogeneous Enough
To provide the best long-term statistical validity, AI should be built on a wide range of data sources. However, considering the many technical, administrative, and legal barriers to accessing, governing, and sharing medical data from even a single setting, it is a monumental task to access diverse data across many settings and broad geographies. There are many industry examples where AI has been developed using prodigious volumes of data within large, renowned health systems but can’t be applied elsewhere because it was created with homogeneous data.
Health systems by their very nature strive to be homogeneous in order to achieve scale. They put tremendous efforts into standardizing practice, reducing clinical variability, and managing technology with uniformity. The result is efficiency, predictability, and improved clinical operations. Unfortunately, these practices also distort the data, which results in workable algorithms that operate well in a specific health system but not as well when exposed to the real world of varied approaches.
To develop reliable AI, each type of data should be as multidimensional as possible – from patient characteristics such as ethnicity, age, geography, and economics to variability in clinical settings such as equipment, procedure, and supplies.
Another critical component of heterogeneity is access to a variety of data types, including unstructured data where clinical value exists, such as medical imaging. For decades, healthcare has relied almost exclusively upon structured data found in claims, pharmacy, lab, and EHR systems. However, that data carries clear elements of bias, given that it was largely created for reimbursement rather than to support providers’ clinical decision-making.
It is becoming clear that analyses based on these data types are no longer adequate for research, as innovative solutions become more complex, precise, and targeted, and as organizations such as the FDA push for the use of real-world data (RWD) in clinical trials.
As the types of available data explode and computational power becomes more accessible, the ability to include other data types in research grows, and advanced data is becoming necessary to meaningfully demonstrate accuracy. For example, imaging in the form of DICOM images and their accompanying radiology reports contains interpretations and other information, providing rich diagnostic and outcomes data.
Access to heterogeneous, advanced data offering deeper clinical context is necessary to build more accurate models to answer high-value questions that can function across a variety of healthcare settings.
3. The AI is Not Vendor-Agnostic
Many big manufacturers and healthcare technology companies are getting into AI, and no specialty is adopting it faster than radiology. The problem with the AI these solution providers offer is that the algorithms generally only work within that specific company’s tech stack, meaning the solution is not interoperable with other vendors’ products. For example, there is AI that helps identify breast tumors difficult to see with the human eye, but it only works with breast screening equipment from the same manufacturer. For a hospital or physician to benefit from this AI, they are locked into using that equipment, making it difficult to move to a different supplier in the future. While a hospital may not see that as a problem today, especially if the AI benefits patients immediately, the move may hamper future cost reduction efforts and the system-wide improvements that come from better interoperability.
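One common way developers avoid this kind of lock-in is to keep the algorithm behind a vendor-neutral input schema, with thin adapters mapping each vendor’s payload into it. A minimal sketch of that pattern; the vendor names, field names, and schema here are all hypothetical, purely to illustrate the design:

```python
from typing import Callable, Dict

# Hypothetical adapters: each maps one vendor's payload format
# to a single common schema the algorithm consumes.
def from_vendor_a(payload: dict) -> dict:
    return {"patient_id": payload["pid"], "pixels": payload["img"]}

def from_vendor_b(payload: dict) -> dict:
    return {"patient_id": payload["PatientID"], "pixels": payload["PixelData"]}

ADAPTERS: Dict[str, Callable[[dict], dict]] = {
    "vendor_a": from_vendor_a,
    "vendor_b": from_vendor_b,
}

def normalize(vendor: str, payload: dict) -> dict:
    """Map a vendor-specific payload to the common input schema,
    so the downstream algorithm is not tied to one tech stack."""
    return ADAPTERS[vendor](payload)
```

Supporting a new scanner then means writing one small adapter, not retraining or re-buying the algorithm; the model itself never sees a vendor-specific format.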
Anyone who works in healthcare knows the dangers of being locked into a proprietary system. You have no choice but to use one company’s specific formats, processes, interfaces, and other requirements. Time and time again, however, the promise of immediate cost savings and/or clinical benefits overshadows long-term implications like slower innovation, higher costs, more system complexity, and frustrated patients and physicians.
AI will likely earn the same notoriety as proprietary systems if clinical purchasers and solution developers are not careful. Developers should create solutions that are platform- and vendor-agnostic and work seamlessly across broad populations. Every time a developer builds something to meet a proprietary standard, they reinforce the business models of narrow networks, further fortify proprietary tech stacks, and create technical debt for years to come.