Instrumental variables: Can I use patient level IVs to correct for endogeneity in patient characteristics?

Does a treatment improve patient health?  Does a policy intervention improve quality of life?  Does more education increase income?  These are fundamental questions that are difficult to answer with standard observational approaches.  The reason?  Selection bias. 

Patients who are sick take medicine; patients who are sicker may take more medicine.  Thus, one could observe that patients who take highly effective medicine have worse outcomes not because of any causal relationship but because sick people are the ones taking medicine. 

One way to account for selection bias is to use instrumental variables.  An instrument must satisfy 3 key conditions:

  • It must be correlated with the potentially endogenous treatment variable;
  • It must have no direct effect on the outcome (other than through its impact on the probability of treatment); and
  • It must be independent of the unmeasured confounders of the treatment–outcome relationship after conditioning on the measured confounders (see Garabedian et al. 2014 for an explanation).
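To see why these conditions matter, consider a minimal simulation (all numbers are hypothetical): treatment take-up depends on an unmeasured confounder, and an instrument `z` satisfies all three conditions by construction. Naive OLS is biased by confounding, while the simple IV (Wald) estimator recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
beta_true = 0.5                      # true treatment effect (assumed for the simulation)

z = rng.normal(0, 1, n)              # instrument: shifts treatment, unrelated to outcome otherwise
c = rng.normal(0, 1, n)              # unmeasured confounder (e.g., baseline sickness)
treat = (0.8 * z + c + rng.normal(0, 1, n) > 0).astype(float)
y = beta_true * treat + c + rng.normal(0, 1, n)

# Naive OLS slope is confounded by c (sicker patients are more likely to be treated)
ols = np.cov(treat, y)[0, 1] / np.var(treat)

# IV/Wald estimator with a single instrument: cov(z, y) / cov(z, treat)
iv = np.cov(z, y)[0, 1] / np.cov(z, treat)[0, 1]
print(f"OLS: {ols:.2f}, IV: {iv:.2f}, truth: {beta_true}")
```

Here the OLS estimate is badly inflated because the confounder raises both treatment probability and the outcome, while the IV estimate lands near the true effect because `z` is independent of the confounder.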

What if a researcher wants to know whether certain physician or hospital characteristics are associated with better patient outcomes?  In standard analyses, suppose that unobservably better physicians prefer to practice in wealthy areas where patients have unobservably better baseline health, and that these physician characteristics are correlated with observed ones.  Researchers could then misattribute a causal effect to a specific physician attribute.

Can patient-level variables serve as an instrument for provider characteristics?  According to a paper by Konetzka et al. (2019), at first, the approach seems promising.

Take the example of estimating the effect of for‐profit status of a hospital on patient outcomes. Patients treated at for‐profit hospitals may be different in unmeasured ways from patients treated at other hospitals—in preferences for preventive care, level of social support, or health status. To solve this problem, an instrument Wi can be used. A logical choice may be the differential distance instrument, using the additional distance a patient would have to travel to reach a for‐profit hospital versus the nearest hospital. The use of this instrument could result in groups of patients that are balanced on observed and unobserved characteristics, minimizing confounding by patient selection.
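The balancing logic can be checked in a toy simulation (illustrative only; `dist` stands in for differential distance, and the coefficients are made up). Grouping patients by the treatment they actually received shows imbalance on unmeasured sickness, while grouping them by the instrument does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

dist = rng.exponential(5, n)             # differential distance to a for-profit hospital
sickness = rng.normal(0, 1, n)           # unmeasured baseline sickness
# Sicker patients and patients living nearer are more likely to use the for-profit hospital
treat = (sickness - 0.2 * dist + rng.normal(0, 1, n) > 0)

# Groups defined by treatment are imbalanced on unmeasured sickness ...
gap_by_treatment = sickness[treat].mean() - sickness[~treat].mean()

# ... but groups defined by the instrument (near vs. far) are balanced,
# because distance is unrelated to baseline sickness in this simulation
near = dist < np.median(dist)
gap_by_instrument = sickness[near].mean() - sickness[~near].mean()
print(f"gap by treatment: {gap_by_treatment:.2f}, gap by instrument: {gap_by_instrument:.3f}")
```

The comparison groups formed by the instrument are balanced on the unmeasured confounder even though the treatment groups are not—which is exactly the property the differential distance design relies on.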

This approach may be valid if physician characteristics are not selected (e.g., male gender).  However, some physicians or organizations may choose their characteristics: physicians may choose their specialty; hospitals may decide strategically whether to be a for-profit or non-profit entity.  In the case of measuring the effect of being a for-profit hospital, researchers would need an instrument that “exogenously pseudo‐randomizes hospitals to be for‐profit—say, an instrument based on what competitor hospitals in the market are doing, or a change in laws about for‐profit status”.

Using both instruments may be valid, but using only the patient-level instrument to control for provider level selection bias is problematic.

It is a mistake, however, to use a patient‐level instrument Wi to attempt to solve the endogeneity of an organization‐level treatment variable, or to imply that it does. If the goal of the analysis is to estimate a causal effect of for‐profit status, an erroneous approach would be to use patient‐level differential distance as an IV. Although this would likely result in treatment and comparison groups balanced on patient‐level characteristics, it does nothing to solve the critical provider selection issue.
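A small simulation can show why the patient-level instrument fails here (again, all parameters are hypothetical and not from the paper): hospitals select into for-profit status based on their own unobserved quality, so even a distance instrument that is perfectly unrelated to patient health returns a biased estimate of the for-profit effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hosp, n_pat = 200, 20_000
beta_true = 1.0          # assumed true causal effect of for-profit status

# Hospital level: lower unobserved quality q makes for-profit status more likely
q = rng.normal(0, 1, n_hosp)
fp = (q + rng.normal(0, 1, n_hosp) < 0).astype(float)

# Patient level: differential distance d is independent of patient health u
d = rng.normal(0, 1, n_pat)
u = rng.normal(0, 1, n_pat)

# Shorter differential distance -> more likely to attend a for-profit hospital
p_fp = 1 / (1 + np.exp(d))
goes_fp = rng.random(n_pat) < p_fp

# Each patient draws a hospital of the chosen type and inherits its quality q
fp_ids = np.flatnonzero(fp == 1)
nfp_ids = np.flatnonzero(fp == 0)
hosp = np.where(goes_fp, rng.choice(fp_ids, n_pat), rng.choice(nfp_ids, n_pat))
treat = fp[hosp]
y = beta_true * treat + q[hosp] + u   # outcome also depends on hospital quality

# Wald/IV estimate using the patient-level distance instrument
iv_est = np.cov(d, y)[0, 1] / np.cov(d, treat)[0, 1]
print(f"IV estimate: {iv_est:.2f} vs true effect {beta_true}")
```

The instrument balances patients perfectly, but the IV estimate still absorbs the quality difference between for-profit and non-profit hospitals, because nothing in the design randomizes which hospitals chose for-profit status.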

The authors point to the Joyce et al. (2018) paper as the one that best addresses the provider selection issue.  Although Joyce et al. also use a patient-level instrument, they conducted a series of robustness checks using a dose–response strategy based on the percent of patients in a dementia Special Care Unit, along with comparisons to patients without dementia, in order to isolate the causal effect of Special Care Units from unmeasured hospital quality.

