CDS Moves Compliance from Dismal to Dismal

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University

One use of a Clinical Decision Support (CDS) system is to remind physicians of the “established” standardized methodology. One key issue here is: established by whom? Another, depending on complexity, is why clinical staff don’t follow the established procedure, whether or not it is supported by a CDS. Possible reasons include not knowing the standardized procedure, not believing in it, or thinking (rightly or wrongly) that the current situation, or many situations, is an exception. High non-compliance rates indicate either a lack of seriousness about the standard or a lack of oversight and/or consequences for doing things differently. Lack of oversight might mean that no one is looking at compliance, or that they are looking but not acting on disparities.

In this context a recent study¹ provides a good example of the application and limits of CDS. In this case the issue was providing appropriate thromboprophylaxis for atrial fibrillation. The problem was “discordant” therapy, meaning therapy contrary to the recommendations of the CDS and the underlying protocol. (We need not debate here whether a CDS gives recommendations, advice, suggestions, hints, or something else.)

In one arm of the study, discordant therapy occurred in 63% of patients whose providers were not given use of the CDS. This seems to me a remarkably high number, since simple arithmetic tells us that without the CDS only 37% of patients in this study group were getting the therapy that at least the creators of the CDS hoped to standardize. It is within the range of other studies, which found 19% to 81% non-compliance with recommended therapy, with a mean of 60%. After provision of the CDS, discordant therapy dropped to 59%, a statistically significant difference. However, this may be an example of a statistically significant result having little practical significance. Other than the 4-percentage-point improvement in the stratified arm of the study, overall the CDS had no effect. In this case the CDS operated outside of the EHR, which could have diminished effective use.

Survey results, with a 52% return rate, indicated that physicians did not follow the recommendations for the ambiguous reasons of “patient preference” and “specialists are managing anticoagulation therapy,” the latter meaning that the recommendations were being given to the wrong provider. In 9% of responses the user simply disagreed with the CDS recommendation. Disagreement reflects one of the core questions in the quest for an actually effective CDS: how good are the recommendations, over what populations, and with what other medical conditions that can properly impact provider decision making?
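The 63%-to-59% drop illustrates how statistical significance can hinge on sample size rather than clinical importance. Here is a minimal sketch of a pooled two-proportion z-test making that point; the group sizes below are hypothetical, chosen only for illustration, since the study’s actual enrollment and test statistics are not reproduced here.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test. Returns (z, two-sided p-value)."""
    x1, x2 = p1 * n1, p2 * n2                 # counts of discordant patients
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical sample sizes (NOT from the study): with 2,000 per arm,
# a 63% -> 59% drop in discordant therapy is statistically significant...
z, p = two_proportion_z(0.63, 2000, 0.59, 2000)

# ...but with 200 per arm the very same 4-point drop is not.
z_small, p_small = two_proportion_z(0.63, 200, 0.59, 200)
```

Either way, 59% of patients remain on discordant therapy, so the p-value says nothing about whether the improvement matters clinically.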

The study does not report what I think are two other important things. One is the degree to which the physicians not using the CDS were otherwise aware of the recommended protocols that the CDS embodied. Second, we are told nothing about the clinical outcomes of the compliant versus the discordant groups, so we cannot tell whether compliance mattered.

As always with CDS, this study raises key questions about the need for and quality of the advice. Why do so many physicians not comply with guidelines? When is this good and when is it bad? Do we want our providers acting on the recommendations of unproven software algorithms? What constitutes proper proof? And how do we find out if the recommendations have a positive effect on patient outcomes? In this regard we might want to remember that everyone doing the same thing isn’t helpful if it isn’t the right thing.

¹Impact of an Atrial Fibrillation Decision Support Tool on thromboprophylaxis for atrial fibrillation, M.H. Eckman et al., Am Heart J. 2016;176:17-27.