Discrete Choice Experiments: Best Practices

What is the difference between a conjoint analysis and a discrete choice experiment?

ISPOR’s guidelines provide some information.

Conjoint analysis is a broad term that can be used to describe a range of stated-preference methods that have respondents rate, rank, or choose from among a set of experimentally controlled profiles consisting of multiple attributes with varying levels. The most common type of conjoint analysis used in health economics, outcomes research, and health services research (hereafter referred to collectively as outcomes research) is the discrete choice experiment (DCE). The premise of a DCE is that choices among sets of alternative profiles are motivated by differences in the levels of the attributes that define the profiles. By controlling the attribute levels experimentally and asking respondents to make choices among sets of profiles in a series of choice questions, a DCE allows researchers to effectively reverse engineer choice to quantify the impact of changes in attribute levels on choice.

Using a DCE, one can measure the marginal rate of substitution across treatment attributes.  If cost is included as an attribute, one can also estimate respondent willingness to pay for each of the other attributes.  DCEs are limited, however, in that they can only measure preferences across the attributes included in the experiment.
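As a quick sketch with made-up numbers (not taken from any study), willingness to pay for an attribute is the negated ratio of its preference weight to the cost coefficient:

```python
# Hypothetical estimated preference weights (illustrative only):
beta_efficacy = 0.8    # weight on a one-unit efficacy gain
beta_cost = -0.05      # weight per dollar of cost

# Willingness to pay for the efficacy gain = -beta_efficacy / beta_cost
wtp = -beta_efficacy / beta_cost
print(wtp)  # 16 dollars per unit of efficacy
```

The same ratio between two non-cost attributes gives the marginal rate of substitution between them.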

What are the important DCE components?

The ISPOR guidelines mention ten key components:

  1. research question,
  2. attributes and levels,
  3. construction of tasks,
  4. experimental design,
  5. preference elicitation,
  6. instrument design,
  7. data collection plan,
  8. statistical analyses,
  9. results and conclusions,
  10. study presentation.

What statistical analysis is appropriate for DCEs?

First, ISPOR recommends running some basic summary statistics.  You can measure the share of times each attribute level was chosen by “counting the number of times each attribute level was chosen by each respondent, summing these totals across all respondents, and dividing this sum by the number of times each attribute level was presented across respondents”.
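The counting exercise ISPOR describes can be sketched in a few lines of Python. The records below are hypothetical: each row is one alternative shown in a choice task, the efficacy level it carried, and whether it was chosen.

```python
from collections import defaultdict

# Hypothetical choice data: (respondent, task, efficacy_level, chosen)
records = [
    (1, 1, "high", 1), (1, 1, "low", 0),
    (1, 2, "high", 0), (1, 2, "medium", 1),
    (2, 1, "low", 1),  (2, 1, "medium", 0),
    (2, 2, "high", 1), (2, 2, "low", 0),
]

shown = defaultdict(int)
chosen = defaultdict(int)
for _, _, level, was_chosen in records:
    shown[level] += 1            # times the level was presented
    chosen[level] += was_chosen  # times the level was chosen

# Choice share per level: times chosen / times presented
shares = {level: chosen[level] / shown[level] for level in shown}
print(shares)  # "high" shown 3 times, chosen 2 -> share 2/3
```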

One can use an OLS regression as well, where the dependent variable indicates whether a specific profile was chosen and the independent variables are a constant and a vector of attribute levels.  The coefficients can be interpreted as marginal probabilities. “OLS yields unbiased and consistent coefficients and has the advantage of being easy to estimate and interpret. Nevertheless…researchers must assume that the errors with which they measure choices are independent and identically distributed with mean 0 and constant variance.”
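A minimal sketch of this linear probability model on simulated data (the attribute names, effect sizes, and sample size are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 profiles, each described by an effects-coded
# attribute (+1 = high efficacy, -1 = low) and a cost in dollars.
n = 500
X = np.column_stack([
    np.ones(n),                    # constant
    rng.choice([-1.0, 1.0], n),    # efficacy attribute
    rng.uniform(10, 50, n),        # cost
])
true_beta = np.array([0.5, 0.15, -0.01])  # invented effect sizes
# 0/1 dependent variable: was this profile chosen?
y = (X @ true_beta + rng.normal(0, 0.2, n) > 0.5).astype(float)

# Linear probability model: each coefficient is a marginal effect on
# the probability that a profile is chosen.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```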

Most often, however, these data are analyzed using a conditional logit analysis.  Here the dependent and independent variables are the same as in the OLS approach, but one must assume that the residuals follow an independently and identically distributed type 1 extreme-value distribution.  Multinomial logit models are sometimes used as well; “multinomial logit” typically describes models that relate choices to the characteristics of the respondents making the choices, while “conditional logit” relates choices to the elements defining the alternatives among which respondents choose.
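A hand-rolled conditional logit on simulated paired choice tasks shows the mechanics; the attributes and “true” coefficients below are hypothetical, and the likelihood is maximized numerically with scipy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical DCE: 300 paired choice tasks. Each alternative carries two
# attributes: effects-coded efficacy (+1/-1) and a cost in dollars.
T = 300
Xa = np.column_stack([rng.choice([-1.0, 1.0], T), rng.uniform(10, 50, T)])
Xb = np.column_stack([rng.choice([-1.0, 1.0], T), rng.uniform(10, 50, T)])
beta_true = np.array([0.8, -0.05])  # invented preference weights

# Simulate choices: P(choose A) = exp(Va) / (exp(Va) + exp(Vb))
diff = (Xa - Xb) @ beta_true
choose_a = rng.uniform(size=T) < 1.0 / (1.0 + np.exp(-diff))

def neg_loglik(beta):
    d = (Xa - Xb) @ beta  # utility difference Va - Vb
    # log P(A) = -log(1 + exp(-d)); log P(B) = -log(1 + exp(d))
    ll = np.where(choose_a, -np.log1p(np.exp(-d)), -np.log1p(np.exp(d)))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # estimates should be near beta_true
```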

The conditional logit requires two key assumptions.

First, the model assumes that choice questions measure utility equally well (or equally poorly) across all respondents and choice tasks; that is, it ignores scale heterogeneity. Second, conditional logit does not account for unobserved systematic differences in preferences across respondents (preference heterogeneity).

Another alternative is a mixed logit or random parameters logit model (RPL). This method assumes that “the probability of choosing a profile from a set of alternatives is a function of the attribute levels that characterize the alternatives and a random error term that adjusts for individual-specific variations in preferences.” Whereas conditional logit estimates only the average preference weight of each attribute across respondents, the random parameters logit captures preference heterogeneity by measuring the standard deviation of effects across the sample.
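The simulation idea behind mixed logit can be sketched as follows: the choice probability is a conditional-logit probability averaged over random draws of an individual-specific coefficient. All distributional parameters and attribute values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# The efficacy coefficient is random across respondents: beta_i ~ N(mu, sigma).
mu, sigma = 0.8, 0.4     # hypothetical mean and SD of the preference weight
beta_cost = -0.05        # fixed cost coefficient
R = 5000                 # number of simulation draws

x_a = np.array([1.0, 20.0])    # alternative A: high efficacy, cost $20
x_b = np.array([-1.0, 10.0])   # alternative B: low efficacy, cost $10

draws = rng.normal(mu, sigma, R)               # beta_i draws
va = draws * x_a[0] + beta_cost * x_a[1]       # utility of A per draw
vb = draws * x_b[0] + beta_cost * x_b[1]       # utility of B per draw
p_a = 1.0 / (1.0 + np.exp(vb - va))            # logit probability per draw
print(p_a.mean())  # simulated probability of choosing A
```

In full estimation, this simulated probability enters the likelihood and (mu, sigma, beta_cost) are chosen to maximize it.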

Other statistical approaches include hierarchical Bayes and latent class finite mixture models.

How are independent variables represented?

With effects coding, each P value is a measure of the statistical significance of the difference between the estimated preference weight and the mean effect of the attribute. With dummy-variable coding, each P value is a measure of the statistical significance of the difference between the estimated preference weight and the omitted category.
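The two coding schemes can be illustrated for a hypothetical three-level efficacy attribute with “low” as the omitted category:

```python
# Three efficacy levels; "low" is the omitted category.
levels = ["low", "medium", "high"]

def dummy_code(level):
    # Dummy coding: the omitted category is all zeros, so each estimated
    # weight is relative to "low".
    return [1.0 if level == "medium" else 0.0,
            1.0 if level == "high" else 0.0]

def effects_code(level):
    # Effects coding: the omitted category is -1 on every column, so the
    # columns sum to zero across levels and each estimated weight is a
    # deviation from the attribute's mean effect.
    if level == "low":
        return [-1.0, -1.0]
    return dummy_code(level)

for lv in levels:
    print(lv, dummy_code(lv), effects_code(lv))
```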

Researchers must also determine whether to code the attributes as continuous or indicator variables. Coding an attribute as continuous preserves statistical power, but unlike indicator variables, a simple linear continuous specification cannot identify non-linear relationships between attribute levels and preferences.
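A small numerical illustration (with made-up preference weights) of why a linear continuous specification can miss non-linear level effects:

```python
import numpy as np

# Hypothetical preference weights at three cost levels; the drop from
# $10 to $30 is much larger than from $30 to $50 (a non-linear pattern).
costs = np.array([10.0, 30.0, 50.0])
weights = np.array([0.6, 0.0, -0.1])

# Continuous coding: a single slope forces the effect to be linear in cost.
X = np.column_stack([np.ones(3), costs])
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
fitted = X @ coef
print(weights - fitted)  # non-zero residuals: the kink at $30 is missed

# Indicator coding would estimate one weight per level and reproduce the
# pattern exactly, at the cost of extra parameters (and power).
```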

How do you measure goodness of fit?

If you use OLS, you can use the standard R-squared metric.  For the conditional logit, however, this statistic is not appropriate.  Instead, you can use the log-likelihood.  Higher (less negative) log-likelihood values indicate a greater ability of the model to explain the pattern of choices in the data.
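One common way to turn log-likelihoods into an R-squared-like fit statistic (not mentioned in the ISPOR text itself) is McFadden's pseudo R-squared, which compares the fitted model against a null, intercept-only model. The values below are hypothetical:

```python
# Hypothetical log-likelihoods for illustration:
ll_model = -450.0   # fitted conditional logit
ll_null = -600.0    # intercept-only (null) model

# McFadden's pseudo R-squared: 1 - LL(model) / LL(null)
pseudo_r2 = 1.0 - ll_model / ll_null
print(pseudo_r2)  # 0.25
```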

Also, you can include main effects or interactions. A main-effects specification assumes that preferences for each attribute are independent of the other attributes. For instance, one’s preference for treatment safety may decrease if a treatment is more efficacious. If this is the case, one can model these interacting preferences using interaction terms.
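Mechanically, an interaction is just a product column added to the design matrix; a sketch with two hypothetical effects-coded attributes:

```python
import numpy as np

# Hypothetical effects-coded attributes for four profiles
safety = np.array([1.0, 1.0, -1.0, -1.0])
efficacy = np.array([1.0, -1.0, 1.0, -1.0])

# A main-effects design matrix assumes the weight on safety does not
# depend on efficacy; adding the product column relaxes that assumption.
X_main = np.column_stack([np.ones(4), safety, efficacy])
X_inter = np.column_stack([X_main, safety * efficacy])
print(X_inter)
```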

