An interesting descriptive analysis by Meddings et al. (2018) finds that CMS is delivering somewhat different messages on quality based on whether the message is conveyed to patients (via Hospital Compare) or to hospitals directly (via the hospital readmissions reduction program [HRRP]). They find the following:
> Of the 2956 hospitals that had publicly reported HF [heart failure] grades on Hospital Compare, 91.9% (2717) were graded as “no different” than the national rate for HF readmissions, which included 48.6% that were scored as having excessive HF admissions, and 87% received an overall readmission penalty. Of 120 (4.1%) hospitals graded as “better” than the national rate for HF, none were scored as having excessive HF readmissions and 50% were penalized. AMI [acute myocardial infarction] data yielded similar results. Among 2591 hospitals penalized for overall readmissions, 26.6% had only 1 condition with excess readmissions and 27.5% had 2 conditions.
Although this result may seem inconsistent, statistically it simply reflects two different definitions. Hospital Compare grades a hospital by whether a risk-adjusted 95% confidence interval around its unplanned readmission rate excludes the 30-day national average. The HRRP, on the other hand, uses the actual risk-adjusted readmission rate, rather than the 95% confidence interval, to determine the payment penalty.
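To make the distinction concrete, here is a minimal sketch of the two decision rules. The numbers and the simple binomial (Wald) interval are hypothetical illustrations, not CMS's actual hierarchical risk-adjustment model:

```python
import math

def grade_and_penalty(readmits, discharges, national_rate, z=1.96):
    """Compare simplified versions of the two decision rules.

    Hospital Compare style: grade 'better'/'worse' only if the 95% CI
    around the hospital's rate excludes the national rate; otherwise
    'no different'.  HRRP style: a penalty applies whenever the point
    estimate itself exceeds the national rate.
    """
    rate = readmits / discharges
    se = math.sqrt(rate * (1 - rate) / discharges)
    lo, hi = rate - z * se, rate + z * se
    if hi < national_rate:
        grade = "better"
    elif lo > national_rate:
        grade = "worse"
    else:
        grade = "no different"
    penalized = rate > national_rate
    return grade, penalized

# A small hospital slightly above a hypothetical 22% national HF rate:
print(grade_and_penalty(readmits=60, discharges=250, national_rate=0.22))
# → ('no different', True)
```

The same 24% observed rate at a hospital with ten times the volume (600 readmissions over 2500 discharges) yields a narrow interval that excludes 22%, so it is graded "worse" and penalized. This is exactly the pattern in the quoted results: most hospitals are "no different" by the interval test yet still penalized by the point-estimate test.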
Why would these similar programs use different methods?
I would argue that they aim to answer different research questions. For the HRRP, the goal is to retrospectively evaluate how well a hospital did on readmissions. Here there is no statistical uncertainty: all readmissions are observed, and past performance is known with certainty. A small hospital may simply have had a year of bad luck, but the thought is that, if so, these penalties would even out over the years.
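The "even out over the years" intuition can be sketched with a quick simulation. Everything here is hypothetical (an unadjusted binomial model, a 22% national rate, a 250-discharge hospital): a hospital whose true quality exactly matches the national rate will still exceed it, and thus trip a point-estimate penalty, in roughly half of its years, while its multi-year average stays close to the national rate:

```python
import random

random.seed(0)

NATIONAL = 0.22   # hypothetical national readmission rate
DISCHARGES = 250  # a small hospital's annual HF volume

# Simulate 10 years for a hospital whose true quality equals the
# national rate; year-to-year luck alone drives the penalty flag.
years = []
for _ in range(10):
    readmits = sum(random.random() < NATIONAL for _ in range(DISCHARGES))
    years.append(readmits / DISCHARGES)

penalized_years = sum(rate > NATIONAL for rate in years)
print(penalized_years, round(sum(years) / len(years), 3))
```

The yearly penalty flag is close to a coin flip, but the 10-year mean rate lands near 22%, which is the sense in which bad-luck penalties wash out.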
The research question consumers bring to Hospital Compare, on the other hand, is likely not how well a hospital has done in the past but how well it is likely to do in the future. Consumers look at past quality only in order to predict future quality. Future quality is not known, so there is clearly more uncertainty, and Hospital Compare's confidence-interval grading takes this into account.
Thus, while the headline of this article may make it appear that CMS is doing something nonsensical, the respective approaches do make sense conceptually. How CMS should make these predictions (e.g., should it have used a Bayesian approach for Hospital Compare, with the national average rate as the prior, and reported credible intervals?) I will leave for another post.

Source:
- Meddings J, Smith SN, Hofer TP, Rogers MAM, Petersen L, McMahon LF Jr. Mixed Messages to Consumers From Medicare: Hospital Compare Grades Versus Value-Based Payment Penalty. American Journal of Managed Care. 2018;24(12):e399-e403.