If We Keep Saying It Is Good, Will That Make It Good?

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

Many initiatives, especially ones that are made mandatory, follow a similar pattern of hype. In our arena we see this in the adoption of EMRs, the establishment of HIEs, and the incorporation of AI. The classic hype profile is that enthusiasm rises well above what is rational, before settling down to something closer to actual value, unless the unfulfilled over-hype kills the initiative off.

The initial step in most initiatives is to declare in advance how wonderful it will be. In healthcare this generally means better patient outcomes and/or cost savings, with the amount of money predicted to be saved often being grandiose, and then never measured. In fact, it is often not clear who is even supposed to be saving all this money. Is it the payors or the providers? Payors spending less generally means providers are earning less. Herein lies the fallacy of the notion that people who sell services will be motivated to sell fewer of those services in order to reduce costs for the buyers. This is like asking the pizza industry to cut back on the sale of pizza.

If providers are the ones saving money, where does that savings go? Greater profit? Higher salaries? Reinvestment? Charitable care? This is related to efficiency claims, but greater efficiency isn’t the same as saving money unless the improvement means actually earning more (perhaps by seeing more patients) or actually spending less (e.g., by laying off excess workers, or saving on utility bills if you lock up early).

The patient outcome side of the equation suggests that there is a discrete set of measures which can be assessed one at a time, or rationally combined into a single overall outcome conclusion. One example of a single discrete measure is the number of catheter-related infections. It is clearly good to reduce these, and the incidence is relatively easy to measure. Another discrete parameter is readmissions, although we have learned that the number of readmissions is a highly manipulable number, including the fascinating notion that some patients in a hospital have been “admitted” and others have not, even when the latter have been there for several days. We might also notice that if you discharge a patient who subsequently dies, you don’t get dinged for a readmission. These realities can lead to bean-counter gamesmanship rather than actual improvement in healthcare delivery. We have seen much of this manipulation again in the rollout of MACRA and MIPS, wherein we pretend that measuring things differently will result in positive change.

If there are good attributes that are to be fairly counted, then these have to be actually measured and compared pre- and post-intervention in order to establish added value. However, there is the related problem of separating multiple causes from effects. For example, the adoption of an intervention includes the motivation of an organization to make that adoption, and perhaps to up its game to make sure that the adoption is successful. If the results are then good, is it because of the intervention itself or because of the general commitment to do better? In this regard I am intrigued by certain claims of the Patient Safety Movement Foundation, wherein organizations tell us how many lives they are going to save by adopting certain protocols. What scares me is the idea that if they had not made this commitment they would have let all those people die. Challenges may also arise when outcomes are actually measured and some are better but others are worse. A “total” benefit may be computed from a set of mixed benefits, but such a computation is rarely fully defensible. This is the type of challenge that can be overcome by never actually measuring anything that matters, but simply claiming that the net effect is good.

While some report good results from the adoption of EMRs, other studies report EMR-induced errors, including errors arising from design defects and use errors. I distinguish here “user” error (with an r) from “use” error, the latter being something that the design of the system either causes or fails to protect against. Addressing use error is part of human factors design, which considers real-world needs and the real working environment of the user, as opposed to some fanciful ideal in which everyone has the time, temperament and training to do everything perfectly, never makes an input error, and never overlooks information that is already in the record, no matter how obscure its location. In this regard we know that there has been widespread dissatisfaction with EMRs since their mass rollout, generally because the way the EMR must be used is incompatible with clinical practice and reality. Maybe they will get better, although perversely, pretending they are good reduces the urgency of fixing them.

So if we simply assert that things are good, and not only good but better, we can skip all this measurement and proof bother. Or as Les Gelb of Pentagon Papers fame recently noted (paraphrased), it isn’t the lies themselves that matter most; it is pretending to believe the lies.