Expert elicitation techniques – why are they important?

Category: NCRM news
Author(s): Jose Pina Sanchez, University of Leeds

It is not always possible to collect quantitative data to estimate the population parameters we are interested in. There may be logistical, ethical or physical barriers that prevent data collection, leaving gaps in quantitative models that need to be filled in other ways. Frequently, we turn to scientific and expert knowledge to fill these gaps, but this is often done in an ad hoc manner, relying on gut feeling whilst disregarding the well-known biases of human judgement and the limitations of making a single best guess.

On 6-7 December in Leeds, I will be running a course on ‘Expert elicitation techniques for social scientists’. In this course, we will introduce participants to a more formal process for capturing expert knowledge and translating it into something that can be used in subsequent quantitative analyses. The goal of expert elicitation techniques is to make the assumptions behind judgements explicit, and to standardise the process of gathering the associated qualitative and quantitative evidence. Well-designed protocols have been established that help us capture expert knowledge and convert it into probability distributions in a transparent manner, with the explicit aim of reducing the impact of the biases and heuristics of human judgement.

Interest in expert elicitation has been growing in recent years, as quantitative research in different fields embraces more probabilistic analyses and Bayesian methods. Gaps in quantitative models can be filled in a rigorous and transparent manner, even when data collection is not possible or is too costly. Although elicitation is not a full replacement for a well-designed study, it can help us understand where the uncertainties lie given current scientific understanding, and where future data collection will be most effective. These methods have a history of use in climate change, safety risk assessment and health economics. There is, however, untapped potential for these techniques to be used more widely across the social sciences, where data quality is not always optimal and quantitative models can be improved using sensitivity analyses that adjust for pervasive issues such as measurement error and missing data.
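
To give a flavour of how an elicited judgement can feed into a Bayesian analysis, here is a minimal sketch in Python. The prior parameters and the data are hypothetical, chosen only for illustration; in practice the prior would come from a formal elicitation exercise.

```python
from scipy import stats

# Hypothetical elicited prior: suppose experts judge that a proportion
# is most likely around 0.30, encoded here as a Beta(6, 14) distribution.
prior = stats.beta(6, 14)

# A small observed dataset: 4 successes out of 10 trials.
successes, trials = 4, 10

# Conjugate update: beta prior + binomial likelihood gives a beta posterior.
posterior = stats.beta(6 + successes, 14 + trials - successes)

print(f"Prior mean:     {prior.mean():.3f}")
print(f"Posterior mean: {posterior.mean():.3f}")
print("95% credible interval:", posterior.ppf([0.025, 0.975]).round(3))
```

Even this toy example shows the appeal: the elicited prior carries the analysis when data are scarce, and is gradually overridden as data accumulate.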

Over the past decade, efforts have been made to standardise and formalise these procedures. In this course, we will focus on the Sheffield elicitation framework (SHELF), a tried-and-tested protocol that has been used extensively in that time. This approach is based on behavioural aggregation, where experts come together to discuss the scientific question at hand and form an opinion on what a rational person would believe to be the current state of scientific knowledge, given the group’s discussions. Part of this process is fitting probability distributions to the elicited judgements, and the course will cover the methods for doing this and the types of distributions that are commonly used.
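
As a rough illustration of that fitting step, the sketch below finds a beta distribution whose quartiles best match a set of elicited quartiles by least squares. The elicited values here are invented for the example, and the SHELF software provides purpose-built routines for this task; this is only a minimal sketch of the underlying idea.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical elicited judgements: an expert's lower quartile, median
# and upper quartile for an uncertain proportion on (0, 1).
probs = np.array([0.25, 0.50, 0.75])
elicited = np.array([0.20, 0.30, 0.45])

def loss(params):
    """Squared distance between the beta quantiles implied by
    (alpha, beta) and the expert's elicited quantiles."""
    alpha, beta = params
    return np.sum((stats.beta.ppf(probs, alpha, beta) - elicited) ** 2)

# Fit by least squares, keeping both shape parameters positive.
result = optimize.minimize(loss, x0=[2.0, 2.0],
                           bounds=[(0.01, None), (0.01, None)])
alpha, beta = result.x
print(f"Fitted Beta({alpha:.2f}, {beta:.2f})")
print("Implied quartiles:", stats.beta.ppf(probs, alpha, beta).round(3))
```

The fitted distribution can then be fed back to the experts: if its implied quartiles or tail probabilities look wrong to them, the judgements are revised and the fit repeated.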

The course syllabus covers the basic principles of expert elicitation, the key elements of conducting an elicitation exercise using the Sheffield elicitation framework, and a demonstration of the software that has been developed to help implement that framework. Overall, participants should leave the course understanding what expert elicitation is, what it can be used for, and where to start with conducting their own elicitation exercise.

For more information on the course, visit the NCRM website at www.ncrm.ac.uk.