Three-level meta-analysis models in R: An Introduction

Presenter(s): Francesca Zecchinato



This resource provides an introduction to the key concepts and steps to conduct a three-level meta-analysis. It is useful when first approaching this method.

The resource includes a practical example from research using RStudio, with a dataset (csv file) and an R script for you to download, and a video tutorial to watch.

What are meta-analyses?

Meta-analyses represent a subset of systematic reviews that are popular across the biomedical sciences and particularly useful for taking stock of what is known in a specific domain.

Systematic reviews and meta-analyses attempt to answer a research question by systematically finding all the relevant existing evidence (included in single studies) that has addressed that question. While a systematic review synthesises the results in a narrative way (e.g. describing the differences/similarities found in the included studies), a meta-analysis combines the results from the single studies quantitatively, yielding an overall statistic that quantifies the size of the effect (such as the effectiveness of an intervention, the risk of developing a disease, the strength of an association between variables etc.). Hence, a meta-analysis involves pooling (i.e., combining) the effect sizes (i.e., metrics that quantify the direction and magnitude of the relationship between two entities) from individual studies to obtain an overall summary statistic.


Pooling Effect Sizes: Fixed-Effect vs Random-Effects Models

As soon as we decide to run a meta-analysis, we are required to make a key decision: we have to assume either a fixed-effect model or a random-effects model.

According to the fixed-effect model, all effect sizes are part of a single, homogeneous population. As a consequence, all studies share the same true effect size (that is, the overall effect size that we want to estimate in our meta-analysis, denoted θ).

The model assumes that the only reason a study k's observed effect size, denoted θ̂ₖ, differs from θ is its sampling error ϵₖ (i.e., because we are testing only a small sample from the population).

In mathematical terms, 

θ̂ₖ = θ + ϵₖ 
 

We quantify ϵₖ through the standard error (SE), which is indicative of a study's precision. Hence, as the SE decreases, we expect the observed effect size θ̂ₖ to become an increasingly good estimator of the true effect size θ.
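As an illustration (the effect sizes and standard errors below are hypothetical, not taken from any real study), the fixed-effect pooled estimate is simply a weighted average of the observed effect sizes, with weights equal to the inverse of each study's variance:

```r
# Minimal sketch of fixed-effect (inverse-variance) pooling in base R.
# The effect sizes and standard errors below are hypothetical.
theta_hat <- c(0.30, 0.45, 0.25)   # observed effect sizes from three studies
se        <- c(0.10, 0.15, 0.08)   # their standard errors

w <- 1 / se^2                                 # inverse-variance weights
theta_pooled <- sum(w * theta_hat) / sum(w)   # pooled estimate of theta
se_pooled    <- sqrt(1 / sum(w))              # SE of the pooled estimate

round(theta_pooled, 3)   # 0.296
round(se_pooled, 3)      # 0.058
```

Note that studies with smaller standard errors (higher precision) receive more weight, which is exactly the intuition described above.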

This model is often considered too simplistic, since we can expect studies to differ in a wide range of ways (for example, different measures, different time lags, different designs). Thus, some degree of between-study heterogeneity can usually be anticipated.

The random-effects model sees each study as an independent draw from a universe of populations. Here, the assumption is that there is not one single true effect size but a distribution of true effect sizes, and that the effects of individual studies deviate due to two sources of variance: one caused by the sampling error (ϵₖ) and one represented by the between-study heterogeneity (ζₖ), which arises because the true effect size θₖ of study k is only one draw from an overarching distribution of true effect sizes with mean μ.

Thus, our goal, when running a meta-analysis, is to estimate the mean of the distribution of true effects. 

The mathematical formula of the random-effects model becomes:

θ̂ₖ = μ + ζₖ + ϵₖ

Moreover, fitting a random-effects model involves the estimation of τ², which refers to the variance of the distribution of true effect sizes (i.e., the between-study heterogeneity). To quantify this heterogeneity, the I² statistic is commonly used (Higgins & Thompson, 2002). The I² statistic quantifies the percentage of variability in the effect sizes that is not caused by sampling error, and its values are usually interpreted in relation to conventional thresholds (low = 25%, moderate = 50%, high heterogeneity = 75%).
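To make the definition concrete, here is a small sketch (the Q and k values are hypothetical, not from any real analysis) of how I² is computed from Cochran's Q statistic and its degrees of freedom:

```r
# Sketch: the I^2 statistic from Cochran's Q (Higgins & Thompson, 2002).
# Q and k below are hypothetical illustrative values.
Q  <- 27     # Cochran's Q statistic
k  <- 10     # number of studies
df <- k - 1  # degrees of freedom of Q

# Percentage of variability not attributable to sampling error,
# truncated at 0 when Q < df.
I2 <- max(0, (Q - df) / Q) * 100
round(I2, 1)   # 66.7 -> between the moderate and high thresholds
```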

As noted by Harrer, Cuijpers, Furukawa, and Ebert (2021), a crucial assumption of the random-effects model is that the size of ζₖ is independent of k. That is, we assume that there is nothing which indicates a priori that ζₖ in one study is higher than in another. This is known as the exchangeability assumption of the random-effects model (Higgins, Thompson, & Spiegelhalter, 2009).


The issue: Dependencies in the data

To conduct a meta-analysis, we have to analyse data at (at least) two levels: the participant level and the study level. This is done by taking effect sizes from individual studies (level one) and then pooling them (level two) to obtain an overall summary statistic. In doing so, statistical independence of the effect sizes at level one is assumed. However, this is often not the case, and violating this assumption can lead to unreliable results (for example, false positives). In fact, effect sizes can be dependent (i.e., correlated) for many different reasons: for instance, some studies included in the meta-analysis might have contributed more than one observed effect size, or different effect sizes may have been drawn from the same participants (e.g., because several different methods were used to assess the outcome of interest).

It is important to take into account these dependencies in the data! But how?


The solution: Three-level meta-analytic models

A (random-effects model) three-level meta-analysis is a statistical technique that allows us to include all the available data, accounting for dependencies between the observed effect sizes (e.g., taking into consideration that each study or sample contributes more than one effect size).

A three-level model contains three pooling steps. 

Level 1: researchers themselves “pool” the results of individual participants in their primary studies and report the aggregated effect size. 

θ̂ij = θij  + ϵij

Level 2: these effect sizes are nested within clusters (j), which can be either individual studies or subgroups of studies.

θij = kj + ζ(2)ij

Level 3: pooling the aggregated cluster effects leads to the overall true effect size μ . 

kj = μ + ζ(3)j 

Where θ̂ij is an estimate of the true effect size θij. The subscript ij can be read as “some effect size i nested in cluster j”, kj is the average effect size in cluster j, and μ is the overall average population effect.

Thus, the random-effects three-level meta-analysis model is:

θ̂ij = μ + ζ(2)ij + ζ(3)j  + ϵij


Distribution of variance across levels

In contrast to the conventional random-effects model, the random-effects three-level meta-analysis model has two heterogeneity terms. One is ζ(2)ij, the within-cluster heterogeneity on level 2 (that is, the true effect sizes within cluster j follow a distribution with mean kj). The other, ζ(3)j, is the between-cluster heterogeneity on level 3.

Thus, we can calculate a multilevel version of I²: the level 2 I² refers to the proportion of the total variance explained by differences within clusters, and the level 3 I² refers to the proportion of the total variance explained by differences between clusters (Cheung, 2014).
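As a sketch of this decomposition (all three variance values below are hypothetical, not estimated from real data), the multilevel I² values can be computed from the level 2 and level 3 variance components together with a typical within-study sampling variance:

```r
# Sketch of the multilevel I^2 decomposition (Cheung, 2014).
# All three variance values below are hypothetical.
sigma2_l2 <- 0.02   # within-cluster (level 2) heterogeneity
sigma2_l3 <- 0.03   # between-cluster (level 3) heterogeneity
v_typical <- 0.05   # typical within-study sampling variance

total <- sigma2_l2 + sigma2_l3 + v_typical

I2_level2 <- 100 * sigma2_l2 / total   # variance due to differences within clusters
I2_level3 <- 100 * sigma2_l3 / total   # variance due to differences between clusters

round(c(I2_level2, I2_level3), 1)   # 20 30
```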


Fitting three-level meta-analysis models in R

The metafor package (Viechtbauer, 2010) is recommended for fitting three-level meta-analytic models in R. By default, it uses restricted maximum likelihood (REML) procedures to estimate the between-study heterogeneity.

As a practical example, we will now go through the steps to conduct a random-effects three-level meta-analysis using R version 4.2.3 (R Core Team, 2023) in RStudio.

In our example, we will conduct a meta-analysis aimed at measuring the association between paternal anxiety and child emotional problems. Pearson product-moment correlation coefficients (r) were chosen as the effect size indicator and extracted from the primary studies. The dataset is provided in this ZIP folder. The dataset contains 13 columns. The first one, author_year, displays the name of the study; each primary study has an ID (study_id), and in the column obs_id we have numbered the extracted effect sizes. Column sample_id displays the ID allocated to each of the samples, which are labelled in column sample_label. The es_r column shows the correlation between paternal anxiety and offspring emotional outcomes, while n stands for the sample size. The columns SE_r and var_r refer to the standard error and variance of the correlation coefficients, while the columns fisher_z, SE_z, and var_z contain the Fisher z-transformed correlations together with their standard errors and variances. The es_r_id column contains a unique ID for each effect size (i.e., each row in our data frame).
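As a sketch of how the Fisher-z columns relate to es_r and n (the r and n values below are hypothetical, not taken from the dataset), the standard Fisher transformation is:

```r
# Sketch: deriving Fisher-z values from a correlation r and sample size n.
# r and n below are hypothetical, not values from the dataset.
r <- 0.25    # correlation coefficient (as in the es_r column)
n <- 120     # sample size (as in the n column)

fisher_z <- 0.5 * log((1 + r) / (1 - r))   # Fisher's z (equivalent to atanh(r))
var_z    <- 1 / (n - 3)                    # sampling variance of z
SE_z     <- sqrt(var_z)                    # standard error of z

round(c(fisher_z, SE_z, var_z), 4)   # 0.2554 0.0924 0.0085
```

The z transformation is commonly used when pooling correlations because, unlike r, z has an approximately normal sampling distribution with a variance that depends only on n.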

What we should note about this dataset is that it contains repeated entries in the author_year column. This is because most studies in this meta-analysis contributed more than one observed effect size, and the same sample was used by multiple studies (as we can see from the sample_id and sample_label columns).

Looking at this structure, it becomes clear that effect sizes in our dataset are not independent. Instead, they follow a nested structure. Thus, it might be a good idea to fit a three-level meta-analysis in order to adequately model these dependencies in our data.
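As a sketch of what the model-fitting call can look like (the file name "dataset.csv" is a placeholder for the downloaded csv; the column names follow the dataset description above), metafor's rma.mv() function takes the effect sizes, their sampling variances, and a nesting formula for the random effects:

```r
# Sketch: fitting the three-level model with metafor's rma.mv().
# "dataset.csv" is a placeholder file name for the downloaded csv.
library(metafor)

dat <- read.csv("dataset.csv")

# random = ~ 1 | study_id/es_r_id nests effect sizes (level 2)
# within studies (level 3); method = "REML" is the default.
m3 <- rma.mv(yi = fisher_z, V = var_z,
             random = ~ 1 | study_id/es_r_id,
             data = dat, method = "REML")

summary(m3)   # pooled Fisher-z estimate plus the two variance components
```

The nesting formula is the key difference from a standard rma() call: it tells metafor that rows sharing a study_id are correlated rather than independent.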

The video below illustrates how to prepare the data file, which arguments are needed and how to run the meta-analysis in R-Studio. 

The R code is provided in this ZIP folder.



Download transcript | Download slides



About the author

Francesca Zecchinato is a final year PhD student based at the Centre for Innovation in Mental Health, School of Psychology, University of Southampton (UK). Her research focuses on the intergenerational transmission of mental ill-health, with the aim of better understanding the role of parents’ mental health, particularly anxiety, in children’s development and preventing young people’s mental ill-health. For her PhD project, she is using both quantitative and qualitative methods, conducting laboratory-based experiments, qualitative studies using semi-structured qualitative interviews, a systematic review with meta-analysis, and a population-based longitudinal study.

Primary author profile page


