CDS and the “Learned Intermediary”

EHR System Technical Functionality vs. Usability · Clinical Decision Support

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

Part of the discussion in the FDASIA health IT regulatory report, and in the subsequent workshop and panel discussions (available online), has been what kind of regulation, if any, is necessary for a reasonably sophisticated and clinically important Clinical Decision Support (CDS) system. One possible criterion addressed was that a CDS which provides advice or suggestions to a “learned intermediary” (LI), e.g. a doctor, might need little regulation because it is still the doctor who makes the ultimate decision. In this regard it should be remembered that the concept of the LI arises from liability law rather than from science or medicine.

There are several issues buried within this simple concept, such as the LI’s actual ability to do what the CDS did (e.g. if a complex calculation was involved), the time available to make an independent decision, and the actual expertise of the user. The latter raises the questions of how learned the LI needs to be to use the CDS safely, and how that learnedness might be determined. Another issue is whether what the LI decides is influenced by the design of the CDS user interface.

Consider the following thought experiment: a comparative CDS trial in which the same information is presented in different ways by different CDSs. The differences might be as simple as the order of items on a list, or more involved, including choice of language, a graphical interface, highlighting, etc. A group of supposed LIs is then presented with the output of the various CDSs for a hypothetical patient whose complete record is available, where the intellectual content of the output is the same for each CDS. The LIs are asked to decide whether to follow the advice of the CDS or to make a different, independent decision. If they make the same decision regardless of the format of the information, then it can be tentatively concluded that the design of the CDS (as opposed to its content) has not influenced their decisions. If, however, the decisions vary with the design of the CDS interface, then the LI concept breaks down, because the user’s actions are not independent of the design of the CDS. It might then follow that the design of the CDS should be subject to greater pre-market scrutiny, because the design itself would influence the actual diagnosis or treatment of an actual patient. Even for a single design, a trial could reveal whether a spectrum of users reaches the same decision based on the information presented to them, and if not, why the decisions differ.
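One way such a trial's results could be analyzed is with a standard test of independence between interface format and decision. The sketch below is purely illustrative, with made-up counts and a hypothetical `chi_square` helper, and is not part of the article's proposal: if the chi-square statistic on a (format × decision) table exceeds the critical value, the LIs' decisions are not independent of the design.

```python
# Illustrative sketch only: hypothetical trial counts, not real data.
# Pearson chi-square test of independence between CDS interface format
# (rows) and LI decision (columns: followed advice, overrode advice).

def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Three interface formats, 50 LIs each (hypothetical numbers).
same_decisions = [[40, 10], [41, 9], [39, 11]]    # decisions track content
varied_decisions = [[48, 2], [30, 20], [15, 35]]  # decisions track design

# Degrees of freedom = (3-1)*(2-1) = 2; critical value at p = 0.05 is 5.99.
print(chi_square(same_decisions) < 5.99)    # True: design seems not to matter
print(chi_square(varied_decisions) > 5.99)  # True: design influences decisions
```

In the first table the follow/override split is nearly identical across formats, so the statistic is small; in the second it shifts sharply with format, which is exactly the outcome that would undermine the LI concept.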

If design influences response, then it would be good to know whether a proposed design is effective at eliciting good decision making. If the CDS fell under FDA scrutiny, the FDA could request the results of a study aimed at determining effectiveness; or perhaps some other agency would be the appropriate home for such regulation. In addition, once the CDS is marketed, actual field results of its use could be tracked and adverse experiences reported. But reported to whom? The FDA has in place a mandatory reporting system for device issues that might be an appropriate platform or model. Reporting via Patient Safety Organizations (PSOs) was also suggested at the workshop, although PSO participation is currently voluntary, and the integration of information from the various PSOs into a master database remains a concept rather than a reality.

If important CDSs have the potential to cause LIs to systematically reach the wrong conclusion, then this is clearly a problem. How to address it remains to be resolved.