NCRM International Visitor Exchange Scheme (IVES)
Uses and Applications of Qualitative Research Methods in Policy Evaluations Using Observational Designs: a Review and Synthesis (2017 - 2018)
Dr Alasdair Jones (London School of Economics) (A.Jones@lse.ac.uk) is visiting Associate Professor Jennifer Curtin and Professor Peter Davis (University of Auckland).
The objectives of my proposal are as follows:
- To develop, through collaboration with experts in policy evaluation methodology, my understanding of the application of social research methodologies for public policy evaluation using observational research designs (and in particular of the role that qualitative research has played and can play in such evaluations);
- To review and synthesise published approaches to research design for public policy evaluation that integrate a qualitative research component, and to write this research up for publication in methods journals and other venues (e.g. briefings for my host institution and NCRM);
- To engage in methodological training and capacity development in an applied research methodology/public policy context;
- To present my work to date on the role of qualitative methods in policy evaluation during my time at the host institution and to use the research conducted in the institution as the basis of an NCRM training programme contribution on my return to the UK;
- To use this visit as an opportunity to instigate a research proposal for an empirical policy evaluation study (most likely evaluating the public health and social capital impacts of a transport intervention given my research interests) that firmly and purposively integrates qualitative methods in the study design. This study could involve a collaboration with scholars from my proposed host institutions;
- To forge new, and hopefully productive, centre-to-centre links between the Department of Methodology (LSE) and COMPASS/PPI (University of Auckland).
Evidence-based policy-making is high on the political agenda. The argument goes that, rather than proceeding on a hunch that a policy will have its intended effects, policy-makers should use evidence about the effects of existing and previous policies as the basis for future decisions. To this end, many social scientists, who as a group have in the past been accused of working in ‘ivory towers’, have become interested in developing ways of evaluating the effects of government policies on their intended (and unintended) outcomes for society. There has been an increased emphasis on generating evidence about which policies do and do not work, and why.
Given the complexities of social life, and multiple competing explanations for changes in social behaviour (from an individual’s changing circumstances to global economic forces), such research is not straightforward. Ideally, experiments similar to those carried out in laboratories would be conducted, whereby we could see what happens when a policy is (and is not) introduced (and compare the results). Such experimentation, however, is rarely, if ever, practicable in the social world, and instead social scientists have to carry out their research in real world (or what are known as ‘observational’) settings. In such settings, researchers are unable to control, for instance, who is affected by a policy and who is not.
As an example, in previous work colleagues and I investigated the public health impacts of a policy that granted children (12-17 year-olds) in London free bus travel (e.g. how this affected children’s levels of physical activity, the extent to which they were victims of crime and the likelihood they would be involved in a road accident). To measure these things we used various sets of official statistical data (e.g. police crime data and regional travel data). As the free bus pass policy applied to all young people in London we could not compare these measures for young people who received free bus travel against young people who did not. Instead, we considered how levels of physical activity (for instance) changed for children when they received their free bus pass compared to adults in London (25-59 year-olds) who did not receive free bus travel.
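The comparison described here is, in essence, a difference-in-differences design: the change in an outcome for the group affected by the policy is compared with the change for an unaffected group over the same period. A minimal sketch of the arithmetic, using invented, purely illustrative numbers (not the study's actual data):

```python
# Difference-in-differences sketch with invented, illustrative numbers
# (NOT real study data). Outcome: e.g. mean weekly journeys on foot,
# measured before and after the free bus travel policy.

young_before, young_after = 10.0, 9.8   # 12-17 year-olds (received free bus travel)
adult_before, adult_after = 8.0, 7.2    # 25-59 year-olds (comparison group)

# Change within each group over the same period
young_change = young_after - young_before
adult_change = adult_after - adult_before

# The difference-in-differences estimate attributes the gap between the
# two changes to the policy, under the assumption that without the policy
# both groups' outcomes would have moved in parallel.
did_estimate = young_change - adult_change
print(round(did_estimate, 1))  # prints 0.6
```

The key point is that the comparison group absorbs changes affecting everyone in London (weather, fuel prices, transport disruption), so only the divergence between the groups is attributed to the policy.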
These comparisons revealed a number of differences in health-related activities and outcomes between these groups after the introduction of the policy (September 2005). However, these differences did not in themselves reveal why, for instance, levels of accidents started to decline at a greater rate for children than for older people after the policy came into being. Moreover, there were some puzzles in our findings that warranted further investigation. For instance, contrary to popular opinion at the time, the distances walked by young people did not decline when the bus pass was introduced.
To help move us closer to explanation, the study used interviews with young people who held the free bus passes to better understand how their use of the passes affected their health-related behaviour and outcomes. These interviews were very revealing. They suggested, for instance, that the distance young people walked did not decline once they received their bus pass (as might at face value be expected) because the pass became a means to undertake more activities and visit more places, journeys which often involved additional walking.
Intriguingly, this approach of talking to people affected by a given policy, in order to understand in their own terms how it affects them, is rather under-used in evaluation studies. While a great deal of attention has been given to developing sophisticated technical means of improving researchers’ capacity to draw ‘causal’ conclusions from the numeric data collected for policy evaluation studies, the contribution that more ‘qualitative’ (e.g. interview-based) research can make to our understanding of the effects of policy has received comparatively little attention.
In the proposed study, therefore, I want to do three main things. First, working with an international expert in the evaluation of health policy in real-world (‘observational’) settings, I want to review existing policy evaluations that incorporate qualitative methods, in order to describe how researchers have used these methods to inform the findings of their work. Second, I want to set out some principles for how researchers might better incorporate qualitative methods into their evaluation studies from the outset, so that they can harness these methods’ explanatory potential. Finally, I want to develop and deliver a training programme for other social researchers based on this work.