The commentary focuses on the use of instrumental variables in health care effectiveness research. The authors define instrumental variables concisely as:
An instrumental variable (IV) is a variable, generally found in administrative data, that is assumed to randomize a treatment to estimate cause and effect relationships, thus controlling for known and unknown patient characteristics affecting health outcomes. An important assumption is that the IV randomizes treatment but does not directly affect the patient outcome.
A number of instrumental variables are commonly used to predict use of health care services. These include a patient’s distance from specific types of providers (e.g., hospitals, specialty providers) and a provider’s average treatment patterns (which generally are correlated with the patient’s likelihood of getting a specific treatment but, ideally, affect the patient’s outcomes only through the treatment received).
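To make the mechanics concrete, here is a minimal two-stage least squares sketch on simulated data, using distance to a cath hospital as the instrument. This is an illustration, not the commentary's analysis: all variable names, parameter values, and the assumption that distance is independent of unobserved severity are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved severity: affects both treatment choice and survival.
severity = rng.normal(size=n)
# Instrument: distance to the nearest cath hospital, assumed here to be
# independent of severity (the very assumption the commentary contests).
distance = rng.exponential(scale=20.0, size=n)

# Treatment: nearer patients are more likely to be catheterized;
# sicker patients are also more likely to be treated.
treated = (-0.05 * distance + 0.5 * severity
           + rng.normal(size=n) > 0).astype(float)

# Outcome: the true causal effect of treatment is +1.0; severity hurts.
y = 1.0 * treated - 1.5 * severity + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is biased because severity confounds treatment and outcome.
naive = slope(treated, y)

# Two-stage least squares: regress treatment on the instrument,
# then regress the outcome on the predicted treatment.
Z = np.column_stack([np.ones(n), distance])
t_hat = Z @ np.linalg.lstsq(Z, treated, rcond=None)[0]
iv = slope(t_hat, y)   # recovers roughly 1.0 when the instrument is valid

print(f"naive OLS: {naive:.2f}, IV: {iv:.2f}")
```

With a valid instrument, the IV estimate lands near the true effect of 1.0 while the naive regression is pulled far away by the unobserved confounder.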
The commentary takes aim at the first commonly used IV as applied to access to cardiac catheterization laboratories.
…the central and (we argue, dubious) assumption is that isolated people having a heart attack who live hours away from a hospital with a cath laboratory— and are therefore less likely to receive invasive procedures—are identical in their likelihood to survive as those lucky patients living very close to a cath hospital.
The authors claim that patients who live far from a cath hospital may differ systematically from those who live closer. For instance, the cath hospital may be located in an urban core, or in a more affluent part of town. If so, unobserved differences in the patient population may be correlated with outcomes, with the probability of receiving treatment, and with distance from a cath hospital. One can control for some, but not all, of these factors using administrative data.
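This concern can be made concrete with a hypothetical simulation in which distance is itself correlated with unobserved severity, so the exclusion restriction fails. Every name and parameter value below is an illustrative assumption, not data from the commentary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical violation: patients living farther from the (urban,
# affluent) cath hospital are sicker on average, so the "instrument"
# affects outcomes through a path other than treatment.
distance = rng.exponential(scale=20.0, size=n)
severity = 0.03 * distance + rng.normal(size=n)

treated = (-0.05 * distance + 0.5 * severity
           + rng.normal(size=n) > 0).astype(float)
# True causal effect of treatment is +1.0; severity hurts survival.
y = 1.0 * treated - 1.5 * severity + rng.normal(size=n)

# Simple IV (Wald) estimate with the single instrument `distance`:
iv = np.cov(distance, y)[0, 1] / np.cov(distance, treated)[0, 1]
print(f"IV estimate: {iv:.2f} (true effect: 1.00)")
```

Because far-away patients are both less treated and sicker, the IV estimate is badly inflated, illustrating why the validity of the distance instrument, and not just its strength, matters.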
Another issue is that while IV may help identify a causal effect of treatment, it may not give policymakers a clear pathway to improving outcomes. Is the answer creating more cath hospitals? Giving incentives for people to move closer to a cath hospital?
At the same time, while Soumerai and Koppel’s critique has many valid points, it is a bit too broad. IV does have its limitations, but identifying a real-world causal effect of a given treatment is itself a key research question. How patient outcomes could be improved further is also important, but one should not expect a single study to answer every question on a topic, and other types of analyses may be better suited to the “how” questions than to the “what happened” questions.
Soumerai and Koppel do cite some higher-quality IVs, such as cases where public policies result in lotteries (e.g., the Oregon Medicaid expansion, birth date lotteries), and call for more use of longitudinal designs (e.g., interrupted time series) rather than cross-sectional designs.
These recommendations are sound, but policy lotteries are not available for all research questions. More broadly, Soumerai and Koppel make a clear point that research design is key and that all analytic approaches should be tested rigorously with sensitivity analyses and falsification checks of assumed causal pathways.
- Soumerai, S. B. and Koppel, R. (2017), The Reliability of Instrumental Variables in Health Care Effectiveness Research: Less Is More. Health Services Research, 52: 9–15. doi: 10.1111/1475-6773.12527