Did You Know We Needed New Meaningful Use Measures?

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

The EHR mandate and rollout brought us specific measures of Meaningful Use (MU) that users must attest to through the three MU stages. As has been discussed over the years, these measures by definition prove MU (in capital letters), but it has never been shown that the measures actually result in meaningful improvements in quality of care. In other circles this is known as relying on surrogate endpoints, i.e., it is difficult to measure what you really want to know (assuming you do really want to know it), so instead you measure something else and assert that it is a proxy for the desired end result.

Keeping up with the measures has been an ongoing challenge as they have been changed, added to, and deleted. This challenge impacts EHR designers through certification and test requirements, and it impacts end users who must do the self-reporting while also struggling with actually using the EHR itself, which they largely hate (witness the ONC’s sold-out meeting on EHR burden). An interesting resource in this regard is the Requirements for Previous Years page at CMS, which has multi-item entries for 2011 and 2012, 2013, 2014, and 2015. Then there are the separate pages for 2016, 2017, and 2018. One would hope that evolving requirements represent improvement; if that were actually the case it would be harder to complain about them, though no less hard to deal with them if you are subject to the reporting requirements. However, there is little if any proof that the new measures are any better than the old ones, or that the measures actually accomplish anything of value.

Given this nine-year odyssey, it was of interest to see the 2018 Call for Measures offering stakeholders the opportunity “to be involved in the focus on and ongoing evolution of Medicare EHR Incentive Program measures.” The premise here is that new measures are needed and that creating them will be inherently good.

Suggested measure subject matter includes health information exchange and interoperability; continued improvement in program efficiency, effectiveness, and flexibility; measuring patient outcomes; and emphasis on patient safety. The “continued improvement” phrase reflects the inherent self-congratulatory aspect of the program in that it suggests that things have already been improved. Measuring patient outcomes sounds good, but is it an admission that patient outcomes have previously been overlooked? And who can be against patient safety?

Additional criteria for the selection process include whether a proposed measure would reduce the reporting burden, whether it duplicates existing or previously removed objectives and measures, and whether it would “include an emerging certified health IT functionality or capability.” I don’t know what that last one means.

One of the risks of self-perpetuating activities is that there is a compulsion to change things, and to claim success if anyone asks, whether warranted or not. But changing things is relatively easy compared to actually improving them. In this regard I like to ask change proponents (i) what is the exact problem you are trying to solve, (ii) how exactly will what you are proposing solve that problem, and (iii) how will you know if it did? Rational answers to these questions are rarely forthcoming.