Clinical Decision Support and Workflow

Stage 2 Requires 5 Clinical Decision Support Interventions

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

The use of Clinical Decision Support (CDS) is a requirement of Meaningful Use of EHRs. For Stage 2, an eligible professional must implement 5 clinical decision support interventions related to four or more clinical quality measures at a relevant point in patient care; drug-drug and drug-allergy interaction checks must also occur. The “relevant point” requirement creates a challenge in how the CDS interacts with the clinical and EHR workflow. This was noted in the recent AHRQ report on its five-year demonstration projects.

Two specific workflow issues are noted in the report, both arising after the CDS is developed, validated, and implemented. One is that at the time you would want the CDS to fire and provide its decision support, all of the patient’s current data elements that the CDS operates on would have to be in place. Thus a lab value not yet available would (hopefully) prevent the CDS from activating. If the lab value became available later, the CDS could process the now-complete information, but the patient encounter might then be over, and the output of the CDS would have to be handled in some way apart from that encounter.
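The data-completeness gating described above can be sketched in a few lines. This is a minimal illustration, not any real CDS product's logic; the function and field names (evaluate_rule, serum_creatinine, the dosing rule itself) are all hypothetical.

```python
# Hypothetical sketch: a CDS rule that fires only when every data
# element it depends on is present in the patient record.
# All names and the clinical threshold are illustrative, not taken
# from any real CDS implementation or guideline.

def evaluate_rule(required_fields, rule, record):
    """Return the rule's advice, or None if any required datum is missing."""
    if any(record.get(field) is None for field in required_fields):
        return None  # e.g., a lab value not yet back: the rule stays silent
    return rule(record)

def creatinine_dosing_rule(record):
    # Illustrative only: flag a renally cleared drug at a high creatinine
    if record["serum_creatinine"] > 1.5:
        return "Consider renal dose adjustment"
    return "No adjustment needed"

record = {"serum_creatinine": None}  # lab result still pending
print(evaluate_rule(["serum_creatinine"], creatinine_dosing_rule, record))

record["serum_creatinine"] = 2.0  # result arrives, perhaps after the encounter
print(evaluate_rule(["serum_creatinine"], creatinine_dosing_rule, record))
```

The second call is the problem case the report identifies: by the time the rule can legitimately fire, the encounter may already be over.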

The second challenge is that even if all of the necessary information is available to the CDS, the practitioner would have to be actively engaged with the EHR simultaneously, or nearly simultaneously, with addressing the patient. While this is certainly the imagined use, my personal experience is that the provider’s interaction with the EHR may occur later, after the patient is no longer available.

In both of these cases a CDS prompt might be generated when the record is not being looked at or when the patient is gone. The workflow must then accommodate messaging the appropriate practitioner (or other user) about a patient they are no longer dealing with. Depending on urgency, the practitioner could re-engage with the patient or make a note to act on the CDS output at the next encounter. In the latter case a review of pending messages might have to take place before each patient encounter, unless the practitioner were, as noted above, dealing with the EHR and the patient simultaneously and interactively. An alternative is to change the workflow to assure that the decision support is provided while the patient encounter is still under way, or while the patient waits locally for the process to play itself out.
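The deferred-message workflow just described amounts to routing late-firing CDS output by urgency. A minimal sketch, with entirely hypothetical names and message text:

```python
# Hypothetical sketch of the deferred-alert workflow: when a CDS fires
# after the encounter, its output is either escalated immediately or
# queued for review before the next encounter. Illustrative only.
from collections import deque

pending_messages = deque()  # alerts awaiting pre-encounter review

def route_cds_output(alert, urgency, practitioner):
    """Route a late-firing CDS alert according to its urgency."""
    if urgency == "high":
        # urgent: prompt the practitioner to re-engage with the patient now
        return f"page {practitioner}: {alert}"
    # non-urgent: hold for review before the patient's next encounter
    pending_messages.append((practitioner, alert))
    return "queued for pre-encounter review"

print(route_cds_output("possible drug-drug interaction", "high", "Dr. A"))
print(route_cds_output("overdue preventive care reminder", "low", "Dr. A"))
```

The design choice the article points at is exactly the queue: someone must remember to drain it before the next encounter, or the advice arrives too late to matter.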

A separate issue is that in one of the two projects, two different CDSs were developed for the same potential interaction, one centrally and one locally. It was reported that agreement was “almost perfect” for 7 of the 11 preventive care reminders, but was as low as one-third for the others. This was explained by subtle differences in rule logic, terminology mapping, and coding practices, yet it was “not possible to say that one approach was more correct than the other.” This is the core challenge of CDS: how good is the advice? It also led the authors of the report to the curious question, “How often does CDS need to be correct or useful in order for clinicians to accept and use it?”, suggesting that it is acceptable for the CDS to give wrong advice as long as it isn’t wrong too often. This might be part of the special claims of the software world, which include error acceptance and the need to “upgrade” (i.e., fix what was never right in the first place) as inherent characteristics of its products. Is it equally acceptable for other medical devices to sometimes give erroneous information?