The answer is yes, but maybe not as much as you might think.
A paper by Ryan and Bao uses data from a randomized controlled trial (RCT) called IMPACT (Improving Mood-Promoting Access to Collaborative Treatment) to determine whether errors in physician quality profiling are due mostly to random variation or to missing data. The authors' outcome of interest is remission of depression symptoms at 6 months after the start of treatment.
The IMPACT data include both a clinical registry used by care managers in the trial to document exposure to the intervention and to track patient outcomes ("registry data") and longitudinal research interviews, which independently assessed patient outcomes at regular intervals ("research data"). The authors use both the registry and research data to generate parameters for the simulation.
The authors describe their simulation model as follows:
> To initiate the Monte Carlo simulation, we assigned each provider a true quality score and a rate of missingness, each drawn randomly from distributions shown in Table 1. On the basis of data from the IMPACT trial, we assumed a correlation between missingness and true quality that was common to all providers. Then, 200 patient-level random draws, first determining whether an observation was missing or nonmissing and then determining patient remission (conditional on missing status), were taken for each provider. We then aggregated information from these draws, calculating provider-level "observed" quality scores under one of the two conditions: (1) using remission outcomes only from patients with nonmissing data, so that profiling error was a combination of error from missing data and from random sampling; and (2) using remission outcomes from patients with both missing and nonmissing data, so that all profiling error was due to random sampling.
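The quoted design can be sketched in a few lines of Python. Everything below is illustrative rather than a reproduction of the paper's model: the distributions and the correlation strength are placeholder values (the paper draws them from its Table 1 and from the IMPACT data), and for simplicity remission here is drawn independently of missing status within a provider, whereas the paper conditions remission on missingness.

```python
import random

random.seed(0)

N_PROVIDERS = 1000   # illustrative; not a parameter stated in the paper
N_PATIENTS = 200     # patient-level draws per provider, as in the paper
RHO = 0.3            # assumed strength of the quality-missingness link

def clamp(x, lo=0.0, hi=0.95):
    """Keep a probability draw inside a sensible range."""
    return min(max(x, lo), hi)

abs_err_complete = 0.0   # condition (2): all patients counted -> sampling error only
abs_err_observed = 0.0   # condition (1): nonmissing patients only

for _ in range(N_PROVIDERS):
    # Provider-level draws (illustrative normal distributions, not the paper's Table 1).
    true_quality = clamp(random.gauss(0.45, 0.10), lo=0.05)
    # Build in a correlation common to all providers:
    # higher-quality providers have less missing data.
    missing_rate = clamp(random.gauss(0.30, 0.10) - RHO * (true_quality - 0.45))

    remit_all = remit_obs = n_obs = 0
    for _ in range(N_PATIENTS):
        missing = random.random() < missing_rate
        # Simplification: remission drawn independently of missing status.
        remitted = random.random() < true_quality
        remit_all += remitted
        if not missing:
            remit_obs += remitted
            n_obs += 1

    # "Observed" quality scores under the two conditions, compared to truth.
    abs_err_complete += abs(remit_all / N_PATIENTS - true_quality)
    abs_err_observed += abs(remit_obs / max(n_obs, 1) - true_quality)

abs_err_complete /= N_PROVIDERS
abs_err_observed /= N_PROVIDERS
print(f"mean absolute error, complete data:   {abs_err_complete:.4f}")
print(f"mean absolute error, nonmissing only: {abs_err_observed:.4f}")
```

Even in this stripped-down version, scoring providers on nonmissing patients alone tends to produce larger errors than scoring on all 200 draws, simply because each provider's score rests on fewer observations.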
The authors reached a number of findings:
- Measuring quality using relative rather than absolute measures produced lower error rates: relative profiling approaches had profiling error rates approximately 20 percent lower than absolute profiling approaches.
- Most profiling error is due to random sampling, not missing data. Between 11 and 21 percent of total error was attributable to missing data for relative profiling, while between 16 and 33 percent of total error was attributable to missing data for absolute profiling.
There is an important caveat to these findings, however. The missing-data error is small because, in the IMPACT data, missingness was not strongly related to the remission outcome. If patients with missing observations had been much more (or less) likely to experience remission, the missing-data error would have made up a much larger share of the overall profiling error.
- Ryan, A. M. and Bao, Y. (2013), Profiling Provider Outcome Quality for Pay-for-Performance in the Presence of Missing Data: A Simulation Approach. Health Services Research, 48: 810–825. doi: 10.1111/1475-6773.12038