Is the educational What Works agenda working?

Category: NCRM news
Author(s): Maria Pampaka and Julian Williams (University of Manchester), and Matt Homer (University of Leeds)

Knowing ‘what works’ in educational contexts, as in any of the social sciences, has always been problematic in both theory and practice. The central idea of the wider ‘what works’ debate is that evidence should be used to make better decisions, giving rise to calls for evidence-based practice, which is primarily linked to the use of randomised controlled trials (RCTs) to ‘test’ interventions and measure their efficacy. Interest in whether the ‘what works’ agenda is itself working led us to edit a special issue [1] on this theme for the International Journal of Research & Method in Education. The strong response to the call for papers meant it became a double issue, with considerable interest for methodologists.

In the UK, the education focus has been ‘Improving education outcomes for school-aged children’, led by the Sutton Trust/Education Endowment Foundation (EEF) and influenced by Ben Goldacre and the ‘nudge unit’ [2]. In the US, the Department of Education [3] has sought to accumulate and apply findings from supposedly high-quality research to answer the question ‘What works in education?’, aiming to give educators the information they need to make ‘evidence-based decisions’ via the What Works Clearinghouse (WWC). Such ideas have been rehearsed in previous decades [4], and there is a strong history of opposition too, including from Biesta (2007) [5], Hammersley (2005) [6] and Thomas (2012) [7].
Discussion continues about what evidence should entail and about the balance, integration or synthesis between RCTs and other (qualitative and quantitative) approaches. Debate remains unresolved about performativity, effectiveness, equality, equity, bridging attainment gaps (social class, gender, ethnicity, etc.), assessment, improvement and causality, and about the best methods for investigating these. The special issue became a platform for discussing methods and methodologies that contribute to this debate and thus towards working solutions.
The dominant theme in submissions was impact evaluation and educational RCTs, along with systematic reviews and meta-analyses. Another theme was effective communication and dissemination to achieve maximum impact with the relevant stakeholders. The relatively orthodox ‘what works’ contributions engage with questions of validity and reliability, and with advances in analytical approaches, especially regarding the clustering of participants that is common in educational RCTs: they show how technical improvements related to cluster randomised trials (CRTs) and partially nested RCTs might help make the approach ‘work better’ in practice.
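To see why clustering matters for such trials, consider a minimal sketch in Python. It is purely illustrative and not drawn from any paper in the issue: the school and pupil counts, effect size and intra-school correlation are invented assumptions. Pupils are nested within schools and treatment is assigned at school level, so an analysis that treats pupils as independent understates the uncertainty of the estimated effect, whereas a mixed-effects model with a school-level random intercept respects the design.

```python
# Illustrative sketch: why ignoring clustering in a cluster randomised
# trial (CRT) understates standard errors. All quantities are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

n_schools, pupils_per_school = 30, 25
school = np.repeat(np.arange(n_schools), pupils_per_school)
# Treatment is assigned at the school (cluster) level, as in a typical CRT.
treat = np.repeat(rng.integers(0, 2, size=n_schools), pupils_per_school)

# A shared school effect induces intra-cluster correlation among pupils.
school_effect = rng.normal(0.0, 0.4, size=n_schools)[school]
outcome = 0.2 * treat + school_effect + rng.normal(0.0, 1.0, size=len(school))

df = pd.DataFrame({"outcome": outcome, "treat": treat, "school": school})

# Naive OLS treats all pupils as independent observations.
ols = smf.ols("outcome ~ treat", data=df).fit()
# Mixed-effects model: a random intercept per school reflects the design.
mlm = smf.mixedlm("outcome ~ treat", data=df, groups=df["school"]).fit()

print(f"OLS   estimate: {ols.params['treat']:.3f}  SE: {ols.bse['treat']:.3f}")
print(f"Mixed estimate: {mlm.params['treat']:.3f}  SE: {mlm.bse['treat']:.3f}")
```

With these assumed parameters the mixed-model standard error comes out roughly twice the naive one; this design-effect problem is the kind of issue the CRT and partially nested RCT methodology discussed in the issue is built to handle.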

Other papers in the first part of the special issue question the so-called superiority of RCTs as the norm for ‘what works’, suggesting that some form of integration with other approaches would be beneficial, for example boosting RCTs with implementation-specific measures or integrating experimental and improvement science. Issues around the involvement of research participants (mainly teachers) in both research studies and reviews emerge as relevant to establishing what works, as well as to the impact agenda (e.g. that of the Economic and Social Research Council). None of the contributions, however, focuses explicitly on learners’/students’ agency. In fact, previous work has questioned the impact of current testing/assessment practices and proposed the measurement of ‘alternative learning outcomes’ [8], including attitudes, dispositions and aspirations, which need consideration if the complexities of teaching-learning relationships are to be captured. These issues will be pursued more vigorously in the second part of the special issue (39(4)), which addresses ‘what works’ from more robustly critical perspectives. Other methodological issues to be addressed in Part 2 include debates around inference, measurement issues in dealing with missing data and imputation techniques, single-case studies and longitudinal designs. We welcome responses and views on any of these issues, which are being debated among education researchers but affect the wider methods community.
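As a taster of the imputation debates flagged for Part 2, here is a minimal, hypothetical sketch of multiple imputation by chained equations using the MICE implementation in statsmodels. The variables, missingness rate and model are invented for illustration and do not come from any of the special issue papers.

```python
# Hypothetical sketch: multiple imputation by chained equations (MICE),
# pooling an OLS fit across imputed datasets. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "y": y})

# Knock out roughly 20% of the predictor completely at random.
df.loc[rng.random(n) < 0.2, "x"] = np.nan

imp = mice.MICEData(df)  # chained-equations imputer
fit = mice.MICE("y ~ x", sm.OLS, imp).fit(n_burnin=10, n_imputations=10)
print(fit.summary())     # estimates pooled across the imputed datasets
```

The pooled standard errors reflect both within- and between-imputation variance, which is the point of multiple rather than single imputation.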

References
[1] Pampaka, M., Williams, J. & Homer, M. (Eds.) (2016) Special issue: Is the educational ‘what works’ agenda working? Critical methodological developments, International Journal of Research & Method in Education, 39(3&4).
[2] http://www.badscience.net/2012/06/heres-a-cabinet-office-paper-i-co-authored-about-randomised-trials-of-government-policies/
    http://www.theguardian.com/politics/2012/jun/20/test-policies-randomised-controlled-trials
[3] Evidence for What Works in Education: http://ies.ed.gov/ncee/wwc/default.aspx
[4] Hanley, P., Chambers, B. & Haslam, J. (2016) Progress in the past decade: an examination of the precision of cluster randomized trials funded by the U.S. Institute of Education Sciences, International Journal of Research & Method in Education, 39(3).
[5] Biesta, G. (2007) Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1), 1-22.
[6] Hammersley, M. (2005) Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers’ case for research-based policy making and practice, Evidence & Policy, 1(1), 85-100.
[7] Thomas, G. (2012) Changing our landscape of inquiry for a new science of education, Harvard Educational Review, 82(1), 26-51.
[8] Pampaka, M., Williams, J.S., Hutcheson, G., Black, L., Davis, P., Hernandez-Martinez, P. & Wake, G. (2013) Measuring alternative learning outcomes: dispositions to study in higher education, Journal of Applied Measurement, 14(2), 197-218.