CDS and Imaging Appropriateness

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

The premise of Clinical Decision Support (CDS) is that automated, patient-specific “advice” can guide clinician practice toward improved patient care and ultimately better outcomes. A related value is better utilization of resources by avoiding unnecessary clinical activities that are potentially harmful and/or expensive. Here we expect those providing services, and collecting fees for doing so, to figure out how to provide fewer of those services and thereby earn less. From the bad analogy department, this is like tasking your local pizza store with working toward selling less pizza.

Advanced imaging is a clinical activity of interest here, especially with respect to its potential for harm and its cost. As with all clinical activities, what we want is the unfettered use of imaging when it is needed, and no use when it is not. This objective is embodied in a legislatively created CMS mandate that, beginning in January 2017, referring physicians must use physician-developed appropriateness criteria when ordering advanced imaging for Medicare patients, in an effort to reduce duplicate and/or unnecessary imaging and the associated costs. The “stick” here is that imaging that does not conform to the criteria will not be paid for. The necessary appropriateness criteria do not yet exist, but they are mandated to be specified by HHS by November 2015. Once they exist, they could be built into a CDS such that the CDS, operating on some set of patient-specific facts, would indicate whether or not the criteria for use have been met. Of course a key question in this regard is how good the CDS is.
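
To make the mechanics concrete, here is a minimal sketch of how such criteria might be encoded as rules and checked against patient-specific facts. Everything in it, including the study name, clinical fields, thresholds, and ratings, is invented for illustration; it is not drawn from the CMS mandate, any specialty society criteria, or the RAND study, and real appropriateness criteria are far more nuanced.

```python
# Hypothetical sketch only: the rules, field names, and thresholds below are
# invented for illustration and do not reflect any actual appropriateness criteria.

APPROPRIATENESS_RULES = [
    # Each rule ties one imaging study and indication to a rating, given patient facts.
    {
        "study": "MRI lumbar spine",
        "indication": "low back pain",
        "condition": lambda pt: (pt.get("weeks_of_symptoms", 0) < 6
                                 and not pt.get("red_flags", False)),
        "rating": "inappropriate",
    },
    {
        "study": "MRI lumbar spine",
        "indication": "low back pain",
        "condition": lambda pt: pt.get("red_flags", False),
        "rating": "appropriate",
    },
]


def rate_order(study, indication, patient_facts):
    """Return the first matching rating, or 'unmatched' when no rule applies.

    'Unmatched' corresponds to the orders a real CDS cannot align with the
    criteria either way, a large fraction in practice.
    """
    for rule in APPROPRIATENESS_RULES:
        if (rule["study"] == study
                and rule["indication"] == indication
                and rule["condition"](patient_facts)):
            return rule["rating"]
    return "unmatched"


if __name__ == "__main__":
    order_facts = {"weeks_of_symptoms": 2, "red_flags": False}
    print(rate_order("MRI lumbar spine", "low back pain", order_facts))  # -> inappropriate
```

Even in this toy form, the hard part is obvious: the value of the advice depends entirely on the quality and coverage of the rules, not on the trivial matching logic.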

With this background, RAND has reported a study on the use of an imaging appropriateness CDS whose criteria were created by group consensus using a structured rating process. Despite this design effort, most imaging orders in the study, 66% of them, could not be matched to the criteria either way; the CDS was simply not robust enough to align with the real world of real patients. There was a small decrease in the number of “inappropriate” orders when the CDS was in use. However, of the orders that were flagged as inappropriate, only 10% were changed and less than 1% were cancelled. This means that users actively rejected, or didn’t really consider, most of the inappropriateness advice from the CDS. These results might be a critique of the particular CDS in question rather than of the concept, but building a better CDS that captures more patients, and demonstrating that it is correct, remains a challenge. Managing responsibility when imaging providers do not get paid because an order was inappropriate will also be a challenge.

Something not reported in the study is clinical outcomes. How many patients whose imaging orders were appropriate by the criteria used, and who were imaged, actually benefited from that imaging? Similarly, how many “inappropriate” orders produced beneficial findings, and how many patients who were not imaged would have benefited if they had been?

An important broader lesson here is that building a CDS is relatively easy; building a good CDS is much harder. In this regard, back when CDS-like things were called “expert systems” there was then, as there is now, the key question of where the knowledge base comes from and how good it is. An insightful student of mine had a title for a paper we never wrote that would still be good today: “Expert System or Some Guy’s Opinion?” We might amend that here to “Reliable CDS or Some Group’s Opinion?”