How do you analyse large amounts of qualitative data? Try our step-by-step guide

NCRM news
Professor Rosalind Edwards, University of Southampton; Dr Susie Weller, University of Oxford; and Dr Emma Davidson and Professor Lynn Jamieson, University of Edinburgh
An aerial view of a natural landscape

Data-driven research skills feature on the UKRI ESRC’s agenda for supporting social scientists at all career stages. NCRM has pioneered work in this field for qualitative researchers, who increasingly need the conceptual and methodological capacities to access and analyse large, complex qualitative data sets. That work has now resulted in the publication of a guide to working with big qual.

The backdrop to this initiative was the push for open science and data sharing, and the accompanying expansion of archived digitised qualitative data sets. This context presented exciting opportunities for pooling data sets from different research projects to address comparative questions, generalise and test claims about social processes. But a knotty issue remained: how to work with such large volumes of data while retaining the depth and richness of analytic engagement that is the hallmark of rigorous qualitative analysis?

This was the methodological question that motivated our team and led to our development of a breadth-and-depth approach to analysing large volumes of qualitative data, in our NCRM work package Working across qualitative longitudinal studies: feasibility study looking at care and intimacy (2015-2019). With the relationship between breadth of analysis and depth of analysis in mind, we explored the possibilities for big-qual analysis. We drew on data from the Timescapes Qualitative Longitudinal Data Archive, a specialist resource of qualitative longitudinal research. We identified and merged six connected qualitative research projects that focused on shifts in personal and family relationships over time and across the lifecourse. The result is the breadth-and-depth method of analysis, which is the focus of our book.

The four steps of the breadth-and-depth method

We use an archaeological metaphor to describe the breadth-and-depth method as it proceeds through an iterative four-step process. We’ve likened the first step to an archaeologist’s aerial survey, looking down from a plane to systematically survey the data landscape. The aim of this step is to review potential sources of data stored in an archive or archives that look like a good fit with your chosen research topic. We also provide guidance on how to manage the large data assemblage that you’ll have pulled together once you’ve identified your material.

Step 2 is akin to doing an archaeological geophysical survey, mapping features of the data landscape that lie just below the surface. The aim here is to identify potential areas of conceptual and substantive interest in your big-qual data assemblage. Computational text-mining software is used to undertake recursive text analysis. The automated text analysis can count word frequencies, display keywords in context, identify collocation and proximity, and measure how distinctively often a word is used relative to a reference corpus (keyness). This assessment provides you with an overview of patterns in your big-qual data assemblage and a sense of what merits closer investigation for the research questions you’re pursuing.
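To make the breadth analysis concrete, here is a minimal sketch, in plain Python, of two of the checks described above: word frequencies and keywords in context (KWIC). The toy transcripts and the `kwic` helper are illustrative only; real big-qual work would use dedicated corpus software over thousands of archived documents.

```python
from collections import Counter

# Toy "big qual" assemblage: in practice this would be thousands of
# interview transcripts loaded from an archive.
documents = [
    "care for my mother takes most of my time",
    "we share the care between us and it works",
    "time with family matters more than work",
]

# Word frequencies across the whole assemblage.
tokens = [word for doc in documents for word in doc.lower().split()]
frequencies = Counter(tokens)

def kwic(docs, keyword, window=2):
    """Keyword-in-context: each occurrence of `keyword` with
    `window` words of context on either side."""
    lines = []
    for doc in docs:
        words = doc.lower().split()
        for i, word in enumerate(words):
            if word == keyword:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left} [{keyword}] {right}".strip())
    return lines

print(frequencies.most_common(3))
print(kwic(documents, "care"))
```

Even this crude pass surfaces the sort of pattern Step 2 is after: "care" recurs across cases, and the KWIC lines show the immediate contexts worth mapping further.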

The first two steps are concerned with the breadth part of the breadth-and-depth method. With Step 3, we start to move into the depth element by sampling the interesting keyword features identified in the previous step. Drawing on our archaeological metaphor, we’ve likened this third step to shallow test-pit sampling. The idea is to dig deep enough into the data to show whether there’s anything of interest or not – but be warned! Avoid the temptation to go into the data in great depth at this stage. Rather, you’re identifying fruitful cases for an in-depth analysis in the next step.
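The shallow test-pit idea can also be sketched in code: from the pool of keyword "hits" produced by the breadth steps, draw a small, manageable sample of short extracts for quick inspection. The case identifiers and extracts below are hypothetical, and random sampling is just one possible selection strategy.

```python
import random

# Hypothetical pool of keyword hits from the breadth steps:
# (case_id, short extract) pairs. Illustrative data only.
keyword_hits = [
    ("case_01", "we share the care between us"),
    ("case_02", "care for my mother takes most of my time"),
    ("case_03", "time with family matters more than work"),
    ("case_04", "caring felt like a full-time job"),
    ("case_05", "my sister does most of the caring now"),
]

random.seed(7)  # fixed seed so this sketch is reproducible

# Dig three shallow "test pits": short extracts, not whole cases.
test_pits = random.sample(keyword_hits, k=3)

for case_id, extract in test_pits:
    print(f"{case_id}: {extract}")
```

The point of the sample is triage, not interpretation: each extract is read just deeply enough to decide whether its case deserves the full excavation of Step 4.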

Archaeologically, Step 4 is deep excavation. The emphasis here is on moving to greater depth through immersion in whole cases. For this step, you apply whatever qualitative method of analysis suits your research purposes. You’re utilising the strengths of a qualitative approach that is sensitive to context and multi-layered complexity: an in-depth analysis that focuses on rich detail, can represent intricate social realities and can produce nuanced social explanations. It’s also an approach that can speak back to the broader analytic contexts provided by Steps 2 and 3.

How our book is organised

The chapters in our book are organised around the broad stages of the breadth-and-depth method, initially setting the scene by looking at the turn to big data and the field that our analytic method enters into. For example, we look at the challenges to simplistic qualitative and quantitative dualisms, and at the various possible relationships between theory and method that can be pursued using our breadth-and-depth approach.

The central chapters each elaborate the steps of the method with interesting examples from our own and other big-qual projects in a range of international contexts. For instance, there’s an aerial-survey account of how a team of researchers undertook an initial exploration of HIV and biomedicalization across 12 UK qualitative data sets (Catherine Dodds), and discussion of test-pit sampling in a study of young people and food choice (Mary Barker and colleagues).

The final chapters reflect on the implications of large-scale qualitative analysis. Thinking points for ethical practice include issues of data sovereignty and working with integrity at scale. And what of the future for this big-qual data-driven analytic method? We conclude that ethical dilemmas and practical challenges will always be present, including for a breadth-and-depth approach, but that there is huge potential for innovative ways of exploring and reusing a range of qualitative data.

Read more about the development of the breadth-and-depth method

Listen to an episode of NCRM's podcast on teaching big qual