Human Factors and EHRs

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

Usability, or the lack thereof, has been one of the significant complaints about EHRs throughout the Meaningful Use era: too hard to get information in, too easy to get wrong information in, too disruptive to workflow, too hard to see what information is there, excessive pop-ups, and so on. This all fits into the arena of what has generally been called human factors in the US. Good human factors design can make the difference between a product that can theoretically be used successfully and one that will actually be used successfully by real people in their real environments of use. The usability issue is certainly not limited to EHRs; medical devices have also seen their share of human factors attention. In this regard the FDA, which regulates medical devices but not EHRs, has recently issued a Guidance Document on the human factors documentation and testing it expects to see in new pre-market submissions.

These expectations begin with a detailed task analysis and identification of critical tasks, along with a hazard analysis. Basically this asks how the system is actually intended to be used. An appropriate hazard analysis then includes what can go wrong in terms of use error. Use error here is not a typo. It differs from user error in that the term user error appears to immediately assign blame, while use error merely identifies what might happen. In particular, use error includes the possibility that the error should be blamed on the design, in that the EHR may lay traps for the user. This was well stated in an early paper on anesthesia systems whose title included the phrase “an error waiting to happen.” These task and hazard considerations should of course be based on a thorough understanding of the users and their environment, which requires either clinical experience or deep access to the clinical environment. All of this should be documented so that it can be examined and re-examined, and compared to the final product. In particular, there should be clear identification of how each hazard was addressed, with reliance on “be careful” instructions limited to situations in which better design could not reasonably be implemented. Expert review from outside the design team is also expected, and this does not mean only the person at the next desk; instead there should be a structured review by clinical experts as well as by more general users.
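
To make the documentation idea concrete, here is a minimal sketch, in Python, of what a use-related hazard record might capture. The field names and the example entry are hypothetical illustrations, not taken from the Guidance Document; the point is simply that each critical task, use error, and mitigation is written down, and that “be careful” mitigations are flagged rather than quietly accepted.

    from dataclasses import dataclass

    # Mitigation strength, roughly ordered: design changes are preferred,
    # instruction-only ("be careful") mitigations are the weakest.
    MITIGATION_TYPES = ("design_change", "guard_or_alert", "instruction_only")

    @dataclass
    class UseHazard:
        critical_task: str    # task from the task analysis
        use_error: str        # what could go wrong, without assigning blame
        potential_harm: str   # clinical consequence if the error occurs
        mitigation: str       # how the design addresses it
        mitigation_type: str  # one of MITIGATION_TYPES

    hazards = [
        UseHazard(
            critical_task="Enter weight-based medication dose",
            use_error="Weight captured in pounds is treated as kilograms",
            potential_harm="Overdose",
            mitigation="Single mandatory unit with range check on entry",
            mitigation_type="design_change",
        ),
    ]

    # Any hazard that leans solely on instructions gets called out for review.
    for h in hazards:
        if h.mitigation_type == "instruction_only":
            print(f"Review: '{h.use_error}' relies on instructions alone")

A record like this also gives the outside reviewers something concrete to examine and re-examine against the final product.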

As the design develops, and certainly when it is “complete,” it should be subjected to realistic simulated-use testing, again by real users and not just by the design team. The Guidance Document has an interesting discussion of how many testers are needed. It cites a 2003 study of software evaluation in which the probability of finding coding faults was measured as a function of the number of testers; in brief, fifteen testers found at least 90% of the faults. Thus the FDA recommends 15, which is the same guideline provided by ONC. Yet the cited study did not address human factors issues, which are in general harder to detect because they involve some subjectivity. Fifteen is also far more testers than EHR usability testing has typically seen, with at least one certified EHR reported to have been tested by a single internal subject.
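
The arithmetic behind numbers in the teens is easy to see under a simple independent-detection model, in which each tester catches any given fault with some fixed probability. This is a rough sketch, not the model of the cited study, and the per-tester rates below are assumptions chosen for illustration.

    import math

    def testers_needed(per_tester_rate, target=0.90):
        """Smallest n such that 1 - (1 - p)^n >= target, assuming each tester
        independently finds any given fault with probability p."""
        return math.ceil(math.log(1.0 - target) / math.log(1.0 - per_tester_rate))

    # Assumed per-tester detection rates (illustrative only):
    for p in (0.10, 0.15, 0.20):
        print(f"p = {p:.2f}: {testers_needed(p)} testers for an expected 90% of faults")

With an assumed per-tester rate of about 15%, the required number lands at 15. If subjective usability problems are caught at lower per-tester rates than coding faults, the number needed goes up, not down, which only strengthens the case against tiny test panels.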

Successful human factors design mirrors successful design in general and contains five major components. In medical device terminology these are design input, which sets performance criteria; design itself, which creates a system that addresses those inputs; design review, which monitors progress and addresses problems; design verification, which makes sure the software works as specified; and design validation, which ultimately tests whether real user needs, including usability, have actually been met. In this context it can fairly be stated that EHR usability has been a failure, one that arose from semi-mandatory adoption. In turn it can be concluded that EHR usability testing has also been a failure, since all commercially available EHRs have been subject to some form of such testing and passed, with the users then hating them. We need to do better.