While there were many interesting things addressed in the May 23rd webinar “ABCs of the QPP: Learnings & Trends in Value-Based Care – A Retrospective,” hosted by Answers Media Network, I was intrigued by the timeline of programs past, present, and planned. Starting in 2008 and extending through 2020, 15 programs were identified. In order, these are PQRS, MU (phase 1), CMMI, CPS, MSSP, VM, BPCI, MU (phase 2), CJR, OCM, CPC+, MIPS, BPCI-A, Primary Cares, and Pathways to Success. All of these programs were introduced with enthusiastic claims about how wonderful they were and what important impacts they would have. But for most of these past and present programs there has been little research addressing what they have accomplished, or will accomplish, beyond meeting their internal metrics. Thus, programs arrive with hype and depart in silence, to be replaced by the next new program, which will of course be wonderful.
This sequence might be partially explained by the imperative for new initiatives among those who create such programs. That imperative is heavily oriented toward doing just that: creating a new program. It is coupled with a lack of desire to do the research to see if the program really works. This seemingly endless process is compounded by the failure to ask and answer the key questions of any new initiative. First, what exactly is the problem we want to solve? Second, how exactly is this new program going to solve that problem? Third, what research will we do to determine whether it really did solve the problem? Having gotten that far, we might ask whether the effort is sustainable, either naturally or by coercion. To be meaningful, such research has to look at real outcomes, not just surrogate endpoints. Measuring compliance is one thing; measuring actual accomplishment is another. The right question isn’t how many people signed up, but how healthcare was changed for the better. Moreover, actually answering the “did it matter” question can take time, which is antithetical to the new-initiative mindset. Even worse, the research might show that the initiative had no value, or worse.
The continuous flow of new programs reminds me of an observation I made while in academia. Administrators, and in particular newly appointed administrators, liked to put forth new initiatives on a regular basis, often fueled by competitive funding. These new initiatives paid little attention to what we were already doing or how well we were doing it, or they seemed to assume that we were doing nothing while waiting for the next initiative. In this context, “innovative” often just meant new and different, without regard to being better. These new programs were better and more important by definition, even if there were no criteria for how they were more important or how success would be measured. Naysaying was not acceptable. Playing the new-initiative game required at least some level of participation in order for mid-level administrators to be seen as supportive. And mid-level administrators couldn’t be seen as supportive without the faculty getting on board, which faculty did to show they were team players. Most such initiatives were short-lived because of the constant need to create something newer.
If your job requires creating new initiatives, then that is what you do if you want to be successful. If you are downstream from this, you have little choice but to go along. But wait: maybe the plethora of programs is part of a grand scheme to reach a final outcome without revealing that outcome all at once. And stringing out programs provides an opportunity to back off from previously created onerous requirements in the name of reducing regulatory burden.