Both the Inflation Reduction Act and the Affordable Care Act contained language that made cost-effectiveness analysis difficult to implement, while still allowing analysis of comparative effectiveness with respect to the relative health benefits across treatments. This raises a natural question: how often would reimbursement decisions based on comparative effectiveness alone be the same as those based on cost-effectiveness?
A paper by Glick et al. (2015) uses data on 2,027 cost-effectiveness ratios from 819 articles in the Tufts Cost Effectiveness Analysis Registry. The primary endpoint was a binary variable representing agreement vs. disagreement between a payer's/policymaker's adoption recommendation inferred from each study's comparative effectiveness result and the adoption recommendation inferred from the same study's cost-per-QALY ratio. Using this approach, they find that:
…disagreement between the two types of analyses occurs in 19 percent of cases. Disagreement is more likely to occur if a treatment intervention is musculoskeletal and less likely to occur if it is surgical or involves secondary prevention, or if the study was funded by a pharmaceutical company.
…lowering the threshold for the QALYs from $100,000 to $50,000 decreased the proportion of agreement (from 81 percent to 68 percent). Increasing the threshold to $200,000 raised the proportion of agreement (from 81 percent to 89 percent).
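The threshold sensitivity described above can be sketched with a toy calculation. This is a minimal illustration using hypothetical incremental-cost/incremental-QALY pairs (not the registry data): comparative effectiveness recommends adoption whenever a treatment adds QALYs, while cost-effectiveness recommends adoption only when net monetary benefit at the chosen cost-per-QALY threshold is positive.

```python
def adopt_comparative(dq):
    """Comparative effectiveness alone: adopt if the treatment adds QALYs."""
    return dq > 0

def adopt_cost_effective(dc, dq, threshold):
    """Cost-effectiveness: adopt if net monetary benefit is positive,
    i.e., threshold * incremental QALYs exceeds incremental cost."""
    return threshold * dq - dc > 0

def agreement(pairs, threshold):
    """Share of (incremental cost, incremental QALY) pairs where the two
    decision rules reach the same adoption recommendation."""
    same = sum(
        adopt_comparative(dq) == adopt_cost_effective(dc, dq, threshold)
        for dc, dq in pairs
    )
    return same / len(pairs)

# Hypothetical study results: (incremental cost in $, incremental QALYs)
pairs = [
    (30_000, 1.0),    # ICER $30k/QALY: adopted under any common threshold
    (70_000, 1.0),    # ICER $70k/QALY: adopted at $100k, not at $50k
    (150_000, 1.0),   # ICER $150k/QALY: adopted only at $200k
    (250_000, 1.0),   # ICER $250k/QALY: never cost-effective here
    (-5_000, 0.5),    # dominant: cheaper and more effective
    (80_000, 0.2),    # ICER $400k/QALY: effective but far from cost-effective
    (60_000, -0.1),   # less effective: both rules reject
]

for threshold in (50_000, 100_000, 200_000):
    print(f"${threshold:,}/QALY threshold: "
          f"{agreement(pairs, threshold):.0%} agreement")
```

As in the paper's sensitivity analysis, raising the threshold mechanically increases agreement in this toy example, because more of the treatments that comparative effectiveness alone would adopt also clear the cost-per-QALY bar.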
While the 81 percent concordance between comparative effectiveness and cost-effectiveness is high, it still means there is no concordance for approximately one in five treatments. Moreover, published CEA studies may not be a random sample of all health technologies, particularly if treatments that are efficacious but not cost-effective are less likely to have a published CEA. Thus, while comparative effectiveness is the first step in value assessment, formal cost-effectiveness analysis is also needed to ensure a treatment's price properly reflects its value.