We must accept that there is subjectivity in every stage of scientific inquiry, but objectivity is nevertheless the fundamental goal. Therefore, we should base judgments on evidence and careful reasoning, and seek wherever possible to eliminate potential sources of bias (Brownstein et al. 2018).
In many cases, such as when building a cost-effectiveness model, there are parameters for which no quantitative estimates or prior literature exist. For instance, extrapolating clinical treatment benefits beyond the end of a trial is inherently subjective; even data-driven approaches that fit the observed data well assume that the trends seen during the trial continue into the future.
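To make the extrapolation point concrete, here is a minimal sketch in which two parametric survival models fit the same made-up 36-month trial data comparably well but can imply different survival once extended past the trial. All data points and model choices below are hypothetical and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical trial survival data (36 months of follow-up).
t = np.array([6.0, 12.0, 18.0, 24.0, 30.0, 36.0])   # months
s = np.array([0.93, 0.85, 0.79, 0.74, 0.70, 0.66])  # observed survival proportions

exponential = lambda t, lam: np.exp(-lam * t)
weibull = lambda t, lam, k: np.exp(-(lam * t) ** k)

(lam_e,), _ = curve_fit(exponential, t, s, p0=[0.01])
(lam_w, k_w), _ = curve_fit(weibull, t, s, p0=[0.01, 1.0])

# Both fits reproduce the in-trial data closely...
print("at 36 months :", round(exponential(36, lam_e), 3), round(weibull(36, lam_w, k_w), 3))
# ...but extrapolating to 10 years requires assuming the fitted trend continues.
print("at 120 months:", round(exponential(120, lam_e), 3), round(weibull(120, lam_w, k_w), 3))
```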
One solution to this problem is to ask a panel of experts what they think. Asking experts in a structured manner may produce better estimates than relying on assumptions made by the study authors. There are two general approaches to summarizing the experts' estimates.
- Mathematical aggregation (a.k.a. pooling). In this approach, a probability distribution is fit to each expert's judgment and the individual distributions are pooled mathematically. The mathematical approach can accommodate more heterogeneous beliefs, but it requires some type of pooling rule (e.g., averaging, or taking the median and rescaling) to create the aggregate distribution (see the pooling sketch after this list).
- Behavioral aggregation. In this approach, the goal is to have experts discuss their knowledge and opinions and ultimately reach a group consensus, to which an aggregate distribution is fit. Behavioral aggregation approaches (e.g., the Delphi panel) allow for more information sharing across experts, but certain personalities may dominate the discussion and produce a distribution weighted towards the guesses of one or a few individuals. The behavioral approach is also susceptible to groupthink.
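As a concrete illustration of mathematical pooling, the sketch below fits a normal distribution to each expert's judgment and combines them with an equal-weight linear opinion pool (a simple averaging rule). All expert means and standard deviations are hypothetical, and other pooling rules are possible.

```python
import numpy as np
from scipy import stats

# Each expert's judgment represented as a fitted normal distribution (hypothetical values).
experts = [
    stats.norm(loc=0.60, scale=0.05),  # expert A
    stats.norm(loc=0.70, scale=0.08),  # expert B
    stats.norm(loc=0.55, scale=0.04),  # expert C
]
weights = np.full(len(experts), 1.0 / len(experts))  # equal weights; other rules are possible

# The pooled density is the weighted average of the individual densities.
x = np.linspace(0.3, 1.0, 500)
pooled_pdf = sum(w * d.pdf(x) for w, d in zip(weights, experts))

# Sampling from the pooled (mixture) distribution: pick an expert, then draw from them.
rng = np.random.default_rng(42)
idx = rng.choice(len(experts), size=10_000, p=weights)
draws = np.column_stack([d.rvs(size=10_000, random_state=rng) for d in experts])
samples = draws[np.arange(10_000), idx]
print("pooled mean:", samples.mean().round(3),
      "95% interval:", np.percentile(samples, [2.5, 97.5]).round(3))
```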
Nevertheless, even experts are not great at judging probabilities, due to a number of biases (e.g., anchoring bias, availability bias, the range-frequency compromise, and overconfidence bias). Additionally, the expert elicitation must be conducted in a rigorous manner. There are a number of protocols one could consider.
- Cooke protocol. As outlined in Cooke (1991), this mathematical aggregation approach weights expert responses by their likely accuracy. Likely accuracy is assessed using "seed" values, which are known to the facilitator and should be similar in nature to the quantity of interest. Experts who more accurately predict the seed values are weighted more heavily (a simplified weighting sketch follows this list).
- SHeffield ELicitation Framework (SHELF). The SHELF protocol is a behavioral aggregation method that uses two rounds of judgments. In the first round, individuals make private judgments; in the second round, those judgments are reviewed before the group agrees on consensus judgments, to which a distribution is fit (see the quantile-fitting sketch after this list). According to O'Hagan (2019), 4 to 8 experts is optimal. The experts included should not naturally defer to others, but should be willing to take the opinions of other experts into account.
- Delphi panel. The Delphi panel is similar to SHELF in that it is a behavioral aggregation method with two or more rounds, except that anonymity is maintained about who gave which answers. As in SHELF, there is interaction between experts. At the end of a Delphi panel, however, each expert provides a final individual judgment, so a pooling rule is still required to combine the experts' final distributions.
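The sketch below is a highly simplified illustration of performance-based weighting in the spirit of the Cooke protocol: experts whose answers to a seed question were more accurate receive more weight when their target-quantity distributions are pooled. It is not Cooke's full classical model, which scores calibration and information from elicited quantiles, and every number used here is hypothetical.

```python
import numpy as np
from scipy import stats

# Seed quantity whose true value is known to the facilitator (hypothetical).
seed_truth = 0.42
seed_estimates = np.array([0.45, 0.60, 0.40])  # each expert's estimate of the seed quantity

# Weight inversely to squared error on the seed (small constant avoids division by zero).
raw = 1.0 / ((seed_estimates - seed_truth) ** 2 + 1e-6)
weights = raw / raw.sum()

# Target-quantity distributions elicited from the same experts (hypothetical).
target = [stats.norm(0.60, 0.05), stats.norm(0.70, 0.08), stats.norm(0.55, 0.04)]

# Performance-weighted linear pool of the target densities.
x = np.linspace(0.3, 1.0, 500)
pooled_pdf = sum(w * d.pdf(x) for w, d in zip(weights, target))
print("performance-based weights:", np.round(weights, 3))
```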
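For the behavioral approaches, a distribution still has to be fit to the consensus judgments. Here is a minimal sketch of one way to do that: the group is assumed to have agreed on a lower quartile, median, and upper quartile (hypothetical values) for a probability-type parameter, and a beta distribution is chosen whose quartiles come as close as possible to those consensus values.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Consensus quartiles agreed by the group (hypothetical values).
consensus_q = np.array([0.55, 0.62, 0.70])  # 25th, 50th, 75th percentiles
probs = np.array([0.25, 0.50, 0.75])

def loss(log_params):
    a, b = np.exp(log_params)  # keep both beta shape parameters positive
    return np.sum((stats.beta(a, b).ppf(probs) - consensus_q) ** 2)

res = minimize(loss, x0=np.log([5.0, 3.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print("fitted beta shapes:", round(a, 2), round(b, 2))
print("implied quartiles :", np.round(stats.beta(a, b).ppf(probs), 3))
```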
Some other issues to consider:
- Communicating existing evidence to experts. Before the elicitation begins, evidence dossiers are prepared that give the experts the most relevant existing evidence in a single document.
- Outcomes of interest. In many expert elicitation protocols, researchers seek information on multiple parameters. However, asking about these parameters separately may be problematic if they are correlated and what researchers really need is a joint, rather than marginal, probability distribution (a short illustration follows). Creating these joint distributions, however, is necessarily much more complex than focusing on the marginals alone.
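The sketch below illustrates why the joint distribution can matter. Two hypothetical elicited parameters (say, a response rate and a mean duration of benefit in months) are drawn either independently or from a joint normal distribution with correlation 0.7; the marginals are identical in both cases, but a downstream quantity that depends on both parameters has a noticeably different spread.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([0.6, 12.0])  # hypothetical means: response rate, duration (months)
sd = np.array([0.05, 2.0])    # hypothetical standard deviations
n = 10_000

# Independent draws (marginal distributions only).
indep = np.column_stack([rng.normal(m, s, n) for m, s in zip(mean, sd)])

# Correlated draws (joint distribution with correlation 0.7).
rho = 0.7
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])
joint = rng.multivariate_normal(mean, cov, size=n)

# Toy downstream output: expected months of benefit = rate * duration.
out_indep = indep[:, 0] * indep[:, 1]
out_joint = joint[:, 0] * joint[:, 1]
print("mean (indep vs joint):", round(out_indep.mean(), 2), round(out_joint.mean(), 2))
print("97.5th pct           :", round(np.percentile(out_indep, 97.5), 2),
      round(np.percentile(out_joint, 97.5), 2))
```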
Source:
- O’Hagan A. Expert knowledge elicitation: subjective but scientific. The American Statistician. 2019 Mar 29;73(sup1):69-81.