Diagnostic Testing HTA

HTA criteria used to evaluate diagnostics

When evaluating a new diagnostic, HTA agencies must assess two separate issues: analytical validity and clinical validity. Analytical validity indicates whether the test works: can it accurately detect the presence or absence of a particular biomarker of interest? Clinical validity is whether the test matters in clinical practice. A test could perfectly predict a biomarker, yet the biomarker's presence or absence might have little impact on the treatment a physician recommends.

Two papers in Value in Health evaluate the criteria health technology assessment (HTA) bodies use to evaluate new diagnostic tests.

Garfield et al. (2016) examine a series of case studies evaluated by Australia's Medical Services Advisory Committee (MSAC), Canada's CADTH, the UK's NICE and its Diagnostic Assessment Programme (DAP), the US's Evaluation of Genomic Applications in Practice and Prevention (EGAPP) and Palmetto's MolDX Program, and Germany's Institute for Quality and Efficiency in Healthcare (IQWiG). The missions of these HTA agencies varied:

Palmetto, for example, serves as a reimbursement gatekeeper similar to Australia's MSAC for diagnostics, whereas NICE's DAP assesses diagnostics in all phases of a product life cycle that could already have reimbursement.

In the US, Medicare has authorized Palmetto GBA to develop an HTA process for evaluating new diagnostics.

Palmetto’s MolDX Program was implemented in 2012 and conducts HTAs on both US Food and Drug Administration-approved diagnostics and laboratory-developed tests (LDTs). The goals of the program are 1) focusing Medicare coverage on diagnostics that demonstrate clinical validity and utility; 2) tracking utilization for reimbursement through the implementation of unique codes for each diagnostic; 3) creating a consistent and standardized approach for making coverage and pricing decisions for diagnostics; and 4) building a body of evidence demonstrating the effectiveness of diagnostics in the real-world setting by linking specific tests with clinical decision making and patient outcomes.

Diagnostic companies face significant uncertainty when determining what evidence (if any) is needed to support an HTA submission.

  1. HTA eligibility unclear for diagnostics. There is no clear mandate as to which diagnostics require formal HTA. For instance, most in vitro diagnostics do not undergo a formal HTA.
  2. HTA eligibility also unclear for laboratory-developed tests (LDTs). There is no uniform approach for LDTs (a.k.a. “in-house” or “home-brew” tests). It is not clear whether they should be formally evaluated by HTA agencies along with regulatory-approved tests, or whether payers should consider them differently with regard to pricing and reimbursement.
  3. Evidence requirements unclear. Evidence requirements are not clearly delineated, with no universal guidance on which outcomes to measure, appropriate study types, performance requirements, comparative effectiveness, or economic thresholds.
  4. Impact of HTA recommendations on payer decisions unclear. How HTA recommendations affect payer reimbursement, access, and pricing also varies substantially across health care systems.
  5. Life science response unclear. Given the uncertainty above, it is not clear how diagnostic test innovators should make decisions about their proposed pricing.

For example, the National Institute for Health and Care Excellence (NICE) Diagnostic Assessment Programme (DAP) has well-defined requirements for assessment, and the submission document is not extensive. However, “the timeline is long, taking more than 2 years in some cases.”

Another paper by Chen, Peirce and Marsh (2020) examines the criteria NICE’s DAP uses to evaluate new diagnostics and how cost-per-QALY estimates affect reimbursement.  This study found:

…[among] 22 evaluations, 91 decision problems were identified for further analysis, of which 52, 15, and 24 received “recommended,” “not recommended,” and “not recommended–only in research” guidance, respectively. The overall consistency rate of the DAC [Diagnostics Advisory Committee] decisions with the £20 000/QALY threshold was 73.6%. Diagnostic technologies that were not recommended, despite an ICER less than £20 000/QALY, were associated with a larger number of decision-modifying factors favoring the comparator, versus recommended diagnostic technologies with ICERs less than £20 000/QALY. For technologies with ICERs greater than £20 000/QALY, the number of decision-modifying factors was comparable for positive and negative recommendations.
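The threshold logic the DAC applies can be sketched as a simple incremental cost-effectiveness ratio (ICER) calculation. The figures below are illustrative assumptions, not numbers from either paper; only the £20,000/QALY benchmark comes from the text above.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

THRESHOLD = 20_000  # NICE's commonly cited £20,000-per-QALY benchmark

# Hypothetical diagnostic pathway: costs £1,500 more than the comparator
# and yields 0.1 additional QALYs per patient (illustrative values).
ratio = icer(cost_new=11_500, cost_old=10_000, qaly_new=2.1, qaly_old=2.0)
print(f"ICER = £{ratio:,.0f}/QALY")           # £15,000/QALY
print("Below threshold:", ratio < THRESHOLD)  # True
```

As the Chen et al. findings show, however, the threshold is not applied mechanically: decision-modifying factors can push a sub-threshold technology toward a negative recommendation, or an above-threshold one toward a positive recommendation.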

Sources:

  • Garfield S, Polisena J, Spinner DS, Postulka A, Lu CY, Tiwana SK, Faulkner E, Poulios N, Zah V, Longacre M. Health technology assessment for molecular diagnostics: practices, challenges, and recommendations from the medical devices and diagnostics special interest group. Value in Health. 2016 Jul 1;19(5):577-87.
  • Chen G, Peirce V, Marsh W. Evaluation of the National Institute for Health and Care Excellence Diagnostics Assessment Program Decisions: Incremental Cost-Effectiveness Ratio Thresholds and Decision-Modifying Factors. Value in Health. 2020 Aug 18.
