
Improving DCE Response Rates

Non-response bias is a key problem when conducting any type of survey, including surveys that use a discrete choice experiment (DCE) methodology. Non-response bias occurs when sampled individuals who respond to the survey differ from those who do not respond in ways that affect the survey's results. For instance, assume that in the real world, half of patients prefer treatment A and half prefer treatment B. If a large share of the patients who prefer treatment A decide not to respond to the survey, the resulting research may suggest that treatment B is strongly preferred, when in reality this is not the case.
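
To make this concrete, here is a minimal simulation sketch in Python; all numbers are hypothetical and chosen only to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical population: 50% of patients prefer treatment A, 50% prefer B.
n = 100_000
prefers_a = rng.random(n) < 0.5

# Assumption: patients who prefer A respond at 30%, those who prefer B at 70%.
response_prob = np.where(prefers_a, 0.30, 0.70)
responded = rng.random(n) < response_prob

print(f"True share preferring A:     {prefers_a.mean():.1%}")             # ~50%
print(f"Observed share preferring A: {prefers_a[responded].mean():.1%}")  # ~30%
```

In this stylized case, the survey would wrongly suggest that roughly 70% of patients prefer treatment B, purely because of who chose to respond.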

A paper by Watson, Becker, and de Bekker-Grob (2016) attempts to identify factors that affect DCE response rates using a meta-regression approach. Drawing on social exchange theory, they hypothesized that response rates are a function of perceived survey benefits and costs. They measured survey benefits based on the severity of the disease studied and on whether survey respondents were limited to patients who had that disease (or physicians treating that disease). They hypothesized that benefits (and thus response rates) would be higher for more severe diseases (e.g., cancer) or when the sample population was limited to patients with the disease. They also collected data on eight factors that affected the survey's cognitive burden to respondents (a stylized sketch of the meta-regression approach follows the list). These included:

  • Number of attributes,
  • Number of alternatives per choice set,
  • Number of choice sets to be answered,
  • Inclusion of an opt-out alternative,
  • Inclusion of a cost attribute,
  • Inclusion of a risk attribute,
  • Elicitation of time preferences, and
  • Whether the same hypothetical alternative appears in all choice sets.
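
To illustrate the general approach (this is not the authors' actual specification or data), a meta-regression of this kind can be sketched as follows: each study contributes one observation, the outcome is its logit-transformed response rate, and the explanatory variables are design features like those listed above. All values below are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical study-level data (one row per DCE); values are illustrative only.
df = pd.DataFrame({
    "response_rate":  [0.62, 0.48, 0.55, 0.41, 0.70, 0.37, 0.58, 0.44, 0.66, 0.51],
    "n_attributes":   [4, 6, 5, 7, 3, 8, 4, 6, 3, 5],
    "n_choice_sets":  [8, 12, 9, 16, 6, 18, 10, 14, 8, 12],
    "has_opt_out":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "severe_disease": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
})

# Logit transform keeps predicted response rates inside (0, 1).
y = np.log(df["response_rate"] / (1 - df["response_rate"]))
X = sm.add_constant(df[["n_attributes", "n_choice_sets",
                        "has_opt_out", "severe_disease"]])

model = sm.OLS(y, X).fit()
print(model.summary())
```

A negative coefficient on n_attributes, for example, would be read as "more attributes, lower response rates," which is the pattern the paper reports.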

The average response rate across the 64 mail-based studies included in the meta-regression was 50.4%. The average number of attributes was 5.3 and the average number of alternatives was 2.6, with 63% of DCEs presenting respondents with exactly two options. The average number of choice sets respondents answered was 12.7. About half (51.6%) of the DCEs surveyed patients or caregivers, 21.5% surveyed health care professionals, and 20.4% surveyed the general public. Of the 132 unique studies, 64 were conducted by mail, 24 were self-completed, 16 were interviews, 8 were conducted over the internet, and 20 fell into an "other" category.

Using this sample, the authors find the following:

Increasing the number of attributes from between two and four attributes to five attributes, six attributes or seven or more attributes decreases response rates. Including an opt-out increases response rates. We find a mixed effect of the number of choice sets: compared with eight choice sets, response rates are higher for DCEs with fewer (3–7) and more (>8) choice sets. Contrary to expectations, studies with risk attributes have higher response rates. The inclusion of a cost attribute, time preferences or using a constant comparator design does not significantly affect survey response rates.

The perceived benefit of response increases response rates. Surveys of patients or healthcare professionals rather than the general public have higher response rates…reminders increase response rates…

In summary (my emphasis):

Our findings suggest that researchers who want to minimise the cognitive burden should focus on the number of attributes included in the DCE choice sets.


Source: Watson V, Becker F, de Bekker-Grob E. "Discrete Choice Experiment Response Rates: A Meta-analysis." Health Economics.
