Bivariate multilevel meta-analysis of log response ratio and standardized mean difference towards robust and reproducible evidence syntheses in environmental and biological sciences

Contact

If you have any questions, or find any mistakes or bugs to report, please contact the corresponding authors:

  • Dr. Yefeng Yang

Evolution & Ecology Research Centre (EERC), School of Biological, Earth and Environmental Sciences (BEES), The University of New South Wales, Sydney, Australia

Email:

  • Professor Shinichi Nakagawa, PhD, FRSN

Evolution & Ecology Research Centre (EERC), School of Biological, Earth and Environmental Sciences (BEES), The University of New South Wales, Sydney, Australia

Email:

Update

Last update: April 2024

We will update this tutorial when necessary. Readers can access the latest version in our GitHub repository.

Credit and citation

If our paper and tutorial have helped you, please cite the following paper:

Yefeng Yang, Coralie Williams, Alistair M. Senior, Kyle Morrison, Lorenzo Ricolfi, Jinming Pan, Malgorzata Lagisz, Shinichi Nakagawa. Bivariate multilevel meta-analyses of log response ratio and standardized mean difference towards robust and reproducible evidence syntheses in environmental and biological sciences. 2024

Set up R on your computer

Our tutorial uses R statistical software and existing R packages, which you will first need to download and install.

If you do not have it on your machine, first install R (download). We also recommend downloading RStudio, a popular integrated development environment for coding in R, created by Posit (download).

After installing R, you must install several packages that contain the functions needed to perform the analyses in this tutorial. If a package is archived on CRAN, use install.packages() to install it. For example, to install the metafor package (a widely used meta-analysis package), execute install.packages("metafor") in the console (bottom-left pane of RStudio). To install packages that are not on CRAN but are archived in GitHub repositories, use devtools::install_github(). For example, to install orchaRd (a meta-analysis visualization package) from its GitHub repository, execute devtools::install_github("daniel1noble/orchaRd", force = TRUE).

Key package list:

metafor, clubSandwich, orchaRd, ellipse, tidyverse, here, DT, readxl, stringr, ggplot2, pander.
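For convenience, the following snippet is one way to install everything at once (a minimal sketch; it assumes you want all the packages listed above, installs any CRAN packages that are missing, and then installs orchaRd from GitHub):

# Install the CRAN packages listed above, skipping any that are already installed
cran_pkgs <- c("metafor", "clubSandwich", "ellipse", "tidyverse", "here",
               "DT", "readxl", "stringr", "ggplot2", "pander", "devtools")
missing_pkgs <- setdiff(cran_pkgs, rownames(installed.packages()))
if (length(missing_pkgs) > 0) install.packages(missing_pkgs)

# Install orchaRd from its GitHub repository
devtools::install_github("daniel1noble/orchaRd", force = TRUE)

# Load the packages used throughout the tutorial
library(metafor); library(clubSandwich); library(orchaRd); library(ellipse)
library(tidyverse); library(here); library(readxl)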

These packages might take a little while to install; you may wish to grab a cup of coffee while waiting!

Load custom functions

We also provide some additional helper functions that are necessary for our worked examples. The most straightforward way to use these custom functions is to source them with:

source(here("custom_func","custom_func.R"))

Alternatively, paste the source code (included in the file custom_func.R) into the console, and hit “Enter” to let R “learn” these custom functions.

Example dataset

Let’s load the example dataset from [1]:

dat <- read_excel(here("data","Burgess_2021_Global Change Biology.xlsx"))

For simplicity, we will only keep the columns that are necessary to demonstrate the meta-analytic techniques of interest in this tutorial.

Table S1
The variables in the first worked example [1].

Table S1 includes the coded variables that were extracted from published papers.

You can find more details about the variables in the paper by [1], if interested.

Here, we will use the descriptive statistics to compute effect sizes, namely the sample means (Control.Mean and Stress2.Mean), standard deviations (Control.SD and Stress2.SD), and sample sizes (Control.Reps and Stress2.Reps) of the two groups.

Mean difference effect sizes

Log response ratio (lnRR)

We can use the function escalc() in the metafor package [2] to compute the point estimate of the log response ratio (\(lnRR\)) and its associated sampling variance (\(s_{lnRR}^2\)) with:

dat_lnRR <- escalc(measure = "ROM",  # the lnRR is specified; "ROM" means ratio of means
                   m1i = Stress2.Mean, # mean of group 1 (e.g., group to which environmental stressor applied) 
                   m2i = Control.Mean, # mean of group 2 (e.g., control group)
                   sd1i = Stress2.SD, # standard deviation of mean of group 1
                   sd2i = Control.SD, # standard deviation of group 2 
                   n1i = Stress2.Reps, # sample size of group 1 
                   n2i = Control.Reps, # sample size of group 2  
                   data = dat, # dataset containing the above information (here is the dataset of our working example)
                   digits = 3,
                   append = F)
dat_lnRR <- dat %>% mutate(lnRR = dat_lnRR$yi, V_lnRR = dat_lnRR$vi) # add effect size estimate and sample variance to the dataframe
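For reference, the measure "ROM" uses the standard large-sample (delta-method) formulas for the log ratio of means. Writing \(\bar{x}_1\), \(sd_1\), \(n_1\) for group 1 (stress) and \(\bar{x}_2\), \(sd_2\), \(n_2\) for group 2 (control):

\[
lnRR = \ln\left(\frac{\bar{x}_1}{\bar{x}_2}\right), \qquad
s_{lnRR}^2 = \frac{sd_1^2}{n_1\bar{x}_1^2} + \frac{sd_2^2}{n_2\bar{x}_2^2}.
\]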

Let’s have a look at the effect size estimates \(lnRR\) and their sampling variances \(s_{lnRR}^2\).

Table S2

The point estimates of effect size \(lnRR\) (the variable lnRR) and their sampling variance \(s_{lnRR}^2\) (the variable V_lnRR) for each comparison (treatment vs. control) of the included studies.

Standardized mean difference (SMD)

In the same vein, we can use the function escalc() in the metafor package to compute the point estimate of the standardized mean difference (\(SMD\); well-known estimators include Hedges’ g and Cohen’s d) and its associated sampling variance (\(s_{SMD}^2\)) with:

dat_SMD <- escalc(measure = "SMD",  # the SMD is specified; "SMD" means standardized mean difference
                  m1i = Stress2.Mean, # mean of group 1 (e.g., group to which environmental stressor applied) 
                  m2i = Control.Mean, # mean of group 2 (e.g., control group)
                  sd1i = Stress2.SD, # standard deviation of mean of group 1
                  sd2i = Control.SD, # standard deviation of group 2 
                  n1i = Stress2.Reps, # sample size of group 1 
                  n2i = Control.Reps, # sample size of group 2 
                  data = dat, # dataset containing the above information (here is the dataset of our working example)
                  digits = 3,
                  append = F)
dat_SMD <- dat %>% mutate(SMD = dat_SMD$yi, V_SMD = dat_SMD$vi) # add effect size estimate and sample variance to the dataframe
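For reference, the measure "SMD" computes Hedges’ g, i.e., Cohen’s d multiplied by a small-sample correction factor \(J \approx 1 - \frac{3}{4(n_1+n_2-2)-1}\), with the usual large-sample sampling variance:

\[
SMD = J\,\frac{\bar{x}_1 - \bar{x}_2}{sd_{pooled}}, \qquad
sd_{pooled} = \sqrt{\frac{(n_1-1)sd_1^2 + (n_2-1)sd_2^2}{n_1+n_2-2}}, \qquad
s_{SMD}^2 \approx \frac{n_1+n_2}{n_1 n_2} + \frac{SMD^2}{2(n_1+n_2)}.
\]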

Let’s have a look at the effect size estimates \(SMD\) and their sampling variances \(s_{SMD}^2\).

Table S3

The point estimates of effect size \(SMD\) (the variable SMD) and their sampling variance \(s_{SMD}^2\) (the variable V_SMD) for each comparison (treatment vs. control) of the included studies.

Multilevel meta-analysis of lnRR together with SMD

In this section, we illustrate the implementation of the multilevel meta-analysis (MLMA) of lnRR and SMD (Equation 7) in R. The function rma.mv() in metafor allows us to fit the MLMA using lnRR together with SMD. To fit such a model, we need to specify the variables Study_ID and ES_ID as the nested random effects via the argument random. In this way, we can account for the between-study effect (\(\mu_{[lnRR]j}\) or \(\mu_{[SMD]j}\)) and the within-study effect (\(w_{[lnRR]j}\) or \(w_{[SMD]j}\)).

We can use a formula to tell R the random-effects structure: it starts with ~ 1, followed by a |, and then an identifier denoting the grouping variable related to the random effect (in our case, Study_ID or ES_ID). Effect sizes with the same value of the grouping variable receive a random effect of the same value, while effect sizes with different values of the grouping variable are assumed to be independent. In our case, we have two grouping variables (random effects): ~ 1 | Study_ID and ~ 1 | ES_ID. The two formulas can be specified as a list of formulas via list(): random = list(~ 1 | Study_ID, ~ 1 | ES_ID). We can also use random = ~ 1 | Study_ID / ES_ID to specify the nested random-effects structure without using list(); this shorthand adds a random effect corresponding to Study_ID and a random effect corresponding to ES_ID within Study_ID to the MLMA (see the equivalent sketch after the next code chunk).

The MLMA using \(lnRR\) as the effect size measure can be fitted with:

MLMA_lnRR <- rma.mv(yi = lnRR, # specify lnRR as the effect size measure;
                    V = V_lnRR, # specify lnRR's sampling variance;
                    random = list(~ 1 | Study_ID, # allow true effect sizes to vary among different primary studies - account for the between-study effect and quantify between-study heterogeneity;
                                  ~ 1 | ES_ID), # allow true effect sizes to vary within primary studies - account for the within-study effect and quantify within-study heterogeneity;
                    test = "t", # t distribution is specified to test the overall effect against the null hypothesis and construct confidence intervals;
                    method = "REML", # restricted maximum likelihood (REML) is assigned as the estimator for variance components, as suggested;
                    data = dat_lnRR # dataset using lnRR as the effect size measure
                   )
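As an aside, the shorthand nested syntax described above should give an equivalent fit; a minimal sketch (the object name MLMA_lnRR_nested is ours, used only for illustration):

MLMA_lnRR_nested <- rma.mv(yi = lnRR, 
                           V = V_lnRR, 
                           random = ~ 1 | Study_ID / ES_ID, # ES_ID nested within Study_ID
                           test = "t", 
                           method = "REML", 
                           data = dat_lnRR)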

The MLMA using \(SMD\) as the effect size measure can be fitted with:

MLMA_SMD <- rma.mv(yi = SMD, # specify SMD as the effect size measure;
                   V = V_SMD, # specify SMD's sampling variance;
                   random = list(~ 1 | Study_ID, # allow true effect sizes to vary among different primary studies - account for the between-study effect and quantify between-study heterogeneity;
                                 ~ 1 | ES_ID), # allow true effect sizes to vary within primary studies - account for the within-study effect and quantify within-study heterogeneity;
                   test = "t", # t distribution is specified to test the overall effect against the null hypothesis and construct confidence intervals;
                   method = "REML", # restricted maximum likelihood (REML) is assigned as the estimator for variance components, as suggested;
                   data = dat_SMD # dataset using SMD as the effect size measure
                   )

Next, we show how to properly interpret the model output of the fitted MLMA of lnRR along with that of SMD (the objects MLMA_lnRR and MLMA_SMD). The summary() function returns the model output.

The model output of the fitted MLMA of lnRR:

summary(MLMA_lnRR)

Multivariate Meta-Analysis Model (k = 532; method: REML)

   logLik   Deviance        AIC        BIC       AICc   
-706.9672  1413.9344  1419.9344  1432.7587  1419.9799   

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  0.9160  0.9571    210     no  Study_ID 
sigma^2.2  0.0806  0.2839    532     no     ES_ID 

Test for Heterogeneity:
Q(df = 531) = 10070.1835, p-val < .0001

Model Results:

estimate      se    tval   df    pval    ci.lb   ci.ub    
  0.0087  0.0731  0.1192  531  0.9052  -0.1348  0.1522    

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The model output of the fitted MLMA of SMD:

summary(MLMA_SMD)

Multivariate Meta-Analysis Model (k = 543; method: REML)

   logLik   Deviance        AIC        BIC       AICc   
-913.0457  1826.0914  1832.0914  1844.9772  1832.1360   

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  1.8664  1.3661    214     no  Study_ID 
sigma^2.2  0.0000  0.0001    543     no     ES_ID 

Test for Heterogeneity:
Q(df = 542) = 1720.3602, p-val < .0001

Model Results:

estimate      se     tval   df    pval    ci.lb    ci.ub    
 -0.2274  0.1063  -2.1388  542  0.0329  -0.4362  -0.0185  * 

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The model output might look overwhelming if you have never used rma.mv() to fit such a complex meta-analytic model before. Do not worry. Let’s go through each piece of the model output in order of appearance, using MLMA_lnRR as an example.

  • Multivariate Meta-Analysis Model (k = 532; method: REML)

The first line tells us:

k = 532 = the number of effect sizes (k) fed to the model,

method: REML = the restricted maximum likelihood (REML) method was specified as the estimation procedure for model fitting to obtain model estimates (e.g., variance components, overall effects).

  • Fit statistics

The next line contains several fit statistics, which reflect model quality and can be used for model comparison and selection, including:

logLik = restricted log-likelihood of the fitted model,

Deviance = goodness-of-fit statistic of the fitted model (which is a generalization of the idea of using the sum of squares of residuals (SSR) to cases where model-fitting is achieved by maximum likelihood),

AIC = Akaike information criterion score of the fitted model,

BIC = Bayesian information criterion, and,

AICc = AIC corrected for small sample sizes.

  • Variance Components

sigma^2.1 = the estimated between-study variance \(\sigma_{u}^2\) for the random effect Study_ID,

sigma^2.2 = the estimated within-study variance \(\sigma_{w}^2\) for the random effect ES_ID,

estim = the estimated amount of corresponding variance components,

sqrt = the square root of each variance component (i.e., the corresponding standard deviation),

nlvls = the number of levels for each random effect,

fixed = whether the variance component is fixed at a given value (fixing values is useful for sensitivity analyses, likelihood ratio tests, or for imposing a desired variance-covariance structure on the data); no means the variance components are not fixed but are estimated from the data,

factor = the name of the variables we supply to argument random to specify the random effects.

  • Test for Heterogeneity

Q(df = 531) = 10070.1835 = The test statistic of Cochran’s Q-test, which tests whether the effect sizes are consistent. Under the null hypothesis, the test statistic follows a chi-square distribution with 531 degrees of freedom,

p-val < .0001 = The null hypothesis is rejected and thus the effect sizes are heterogeneous. The p-value for the test is based on a chi-square distribution with 531 degrees of freedom.

  • Model Results

The estimated overall effect, confidence intervals, and the corresponding inferential statistics.

estimate = the estimate of the overall effect (\(\mu_{lnRR}\) or \(\mu_{SMD}\) in Equation 10),

se = the standard error of the estimate; here, it is (\(\text {SE}[\mu_{lnRR}]\) or \(\text {SE}[\mu_{SMD}]\)),

tval = the value of the test statistic (in our case: t-value); a t-distribution with k − p degrees of freedom (the number of effect sizes minus the number of model coefficients) is used to test the overall effect and compute (95% by default) confidence intervals,

df = degrees of freedom of the t-distribution used to test the overall effect and compute (95% by default) confidence intervals,

pval = the corresponding p value from the test,

ci.lb = lower boundary of (95% by default) confidence intervals,

ci.ub = upper boundary of (95% by default) confidence intervals.

Accounting for the correlated errors in the MLMA:

As explained in the main text, there exist correlated sampling errors (\(e_{[lnRR]ij}\) or \(e_{[SMD]ij}\)), which might distort the statistical inferences on the overall effect. Although the MLMA assumes independent sampling errors, we can incorporate a sampling variance-covariance matrix (Equation 10) into the MLMA to explicitly capture the non-zero sampling covariance (\(Cov[e_{ij},e_{hj}]=\rho_{sih}s_{ij}s_{hj}\)).

We can use the function vcalc() in metafor to construct such a sampling variance-covariance matrix, assuming that the sampling errors within the same study are correlated with a constant correlation coefficient \(\rho_{ih}\) (say, 0.5; see below for the sensitivity analysis). Let’s construct the sampling variance-covariance matrix for \(lnRR\) and \(SMD\), respectively.

For \(lnRR\):

VCV_lnRR <- vcalc(vi = V_lnRR, # sampling variances of lnRR that are correlated within the same study;
                  cluster = Study_ID, # study identity;
                  obs = ES_ID, # effect size identity;
                  data = dat_lnRR, 
                  rho = 0.5 # assuming that the effect sizes within the same study are correlated with rho = 0.5
                  )

For \(SMD\):

VCV_SMD <- vcalc(vi = V_SMD, # sampling variances of SMD that are correlated within the same study;
                 cluster = Study_ID, # study identity;
                 obs = ES_ID, # effect size identity;
                 data = dat_SMD, 
                 rho = 0.5 # assuming that the effect sizes within the same study are correlated with rho = 0.5
                )

In the above syntax, setting rho = 0.5 means we assume that sampling errors within the same study are correlated with \(\rho\) = 0.5.

There are 3 points worth noting.

First, we recommend performing a sensitivity analysis to examine the extent to which the point estimate of the overall effect is sensitive to the arbitrarily assumed rho = 0.5.

Second, we recommend using robust variance estimation (RVE) to guard against the arbitrarily assumed rho = 0.5 used for constructing sampling variance-covariance matrix, thereby ensuring the valid statistical inferences on the overall effect (details see below).

Third, if the imputed sampling variance-covariance matrix is non-positive definite (or, equivalently, not invertible), the corresponding model output should be interpreted with caution.
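Regarding the third point, one quick way to check an imputed matrix is to inspect its smallest eigenvalue; a minimal sketch using base R (values less than or equal to zero indicate a non-positive definite, and hence non-invertible, matrix):

# smallest eigenvalues of the imputed sampling variance-covariance matrices
min(eigen(VCV_lnRR, only.values = TRUE)$values)
min(eigen(VCV_SMD, only.values = TRUE)$values)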

Now, let’s incorporate the imputed sampling variance-covariance matrix into the MLMA of lnRR and that of SMD:

The MLMA using \(lnRR\) as the effect size measure:

mod2_lnRR <- rma.mv(yi = lnRR, # specify lnRR as the effect size measure
                    V = VCV_lnRR, # specify lnRR's sampling variance-covariance matrix;
                    random = list(~ 1 | Study_ID, # allow true effect sizes to vary among different primary studies - account for the between-study effect and quantify between-study heterogeneity;
                                  ~ 1 | ES_ID), # allow true effect sizes to vary within primary studies - account for the within-study effect and quantify within-study heterogeneity;
                    test = "t", # t distribution is specified to test the overall effect against the null hypothesis and construct confidence intervals;
                    method = "REML", # restricted maximum likelihood (REML) is assigned as the estimator for variance components, as suggested;
                    data = dat_lnRR # dataset using lnRR as the effect size measure
                    )

The MLMA using \(SMD\) as the effect size measure:

mod2_SMD <- rma.mv(yi = SMD, # specify SMD as the effect size measure
                   V = VCV_SMD, # specify SMD's sampling variance-covariance matrix;
                   random = list(~ 1 | Study_ID, # allow true effect sizes to vary among different primary studies - account for the between-study effect and quantify between-study heterogeneity;
                                 ~ 1 | ES_ID), # allow true effect sizes to vary within primary studies - account for the within-study effect and quantify within-study heterogeneity;
                   test = "t", # t distribution is specified to test the overall effect against the null hypothesis and construct confidence intervals;
                   method = "REML", # restricted maximum likelihood (REML) is assigned as the estimator for variance components, as suggested;
                   data = dat_SMD # dataset using SMD as the effect size measure
                   )

The corresponding model output for the fitted MLMA of lnRR:

summary(mod2_lnRR)

Multivariate Meta-Analysis Model (k = 532; method: REML)

   logLik   Deviance        AIC        BIC       AICc   
-703.3522  1406.7045  1412.7045  1425.5287  1412.7500   

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  0.9072  0.9524    210     no  Study_ID 
sigma^2.2  0.1411  0.3757    532     no     ES_ID 

Test for Heterogeneity:
Q(df = 531) = 6783.6458, p-val < .0001

Model Results:

estimate      se    tval   df    pval    ci.lb   ci.ub    
  0.0294  0.0755  0.3901  531  0.6966  -0.1189  0.1778    

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The corresponding model output for the fitted MLMA of SMD:

summary(mod2_SMD)

Multivariate Meta-Analysis Model (k = 543; method: REML)

   logLik   Deviance        AIC        BIC       AICc   
-892.1099  1784.2198  1790.2198  1803.1056  1790.2644   

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  1.1984  1.0947    214     no  Study_ID 
sigma^2.2  0.1781  0.4220    543     no     ES_ID 

Test for Heterogeneity:
Q(df = 542) = 1269.3449, p-val < .0001

Model Results:

estimate      se     tval   df    pval    ci.lb    ci.ub     
 -0.2595  0.0963  -2.6931  542  0.0073  -0.4487  -0.0702  ** 

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

We recommend conducting a sensitivity analysis to examine the extent to which the overall effect derived from the MLMA of lnRR and SMD is sensitive to the assumption of sampling correlation values (\(\rho_{s}\)) used for constructing sampling variance-covariance matrix.

First, set a series of \(\rho_{s}\) (i.e., 0.3, 0.5, 0.7) (we assume these values arbitrarily; you can set them based on your expertise in your field):

rho_range <- c(0.3, 0.5, 0.7)

Then, we write a loop to repeatedly run the specified model, changing \(\rho_{s}\) one value at a time.

Conduct the sensitivity analysis on \(lnRR\):

MLMAlnRR_VCV_range <- list() # list to store the models repeatedly fitted with varying rho

for (i in 1:length(rho_range)) { # loop over the assumed rho values
VCV_range <- vcalc(vi = V_lnRR, 
                   cluster = Study_ID, 
                   obs = ES_ID, 
                   rho = rho_range[i],
                   data = dat_lnRR 
                    ) # impute sampling variance covariance matrix with varying rho
MLMAlnRR_VCV_range[[i]] <- rma.mv(yi = lnRR, 
                                  V = VCV_range, # sampling variance covariance matrix with varying values of rho. 
                                  random = list(~ 1 | Study_ID, 
                                                ~ 1 | ES_ID), 
                                  method = "REML", 
                                  test = "t", 
                                  data = dat_lnRR # run the model with varying rho
)
}

Conduct the sensitivity analysis on \(SMD\):

MLMASMD_VCV_range <- list() # list to store the models repeatedly fitted with varying rho

for (i in 1:length(rho_range)) { # loop over the assumed rho values
VCV_range <- vcalc(vi = V_SMD, 
                   cluster = Study_ID, 
                   obs = ES_ID, 
                   rho = rho_range[i],
                   data = dat_SMD 
                    ) # impute sampling variance covariance matrix with varying rho
MLMASMD_VCV_range[[i]] <- rma.mv(yi = SMD, 
                                 V = VCV_range, # sampling variance covariance matrix with varying values of rho. 
                                 random = list(~ 1 | Study_ID, 
                                               ~ 1 | ES_ID), 
                                 method = "REML", 
                                 test = "t", 
                                 data = dat_SMD # run the model with varying rho
)
}
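To tabulate the results of the sensitivity analysis (e.g., as the basis for Tables S4 and S5), one can extract the overall effect from each fitted model. A minimal sketch (the object names sens_lnRR and sens_SMD are ours, used only for illustration):

# collect the overall effect estimate, 95% CI, and p-value for each assumed rho
sens_lnRR <- data.frame(rho = rho_range,
                        estimate = sapply(MLMAlnRR_VCV_range, coef),
                        ci.lb = sapply(MLMAlnRR_VCV_range, function(m) m$ci.lb),
                        ci.ub = sapply(MLMAlnRR_VCV_range, function(m) m$ci.ub),
                        pval = sapply(MLMAlnRR_VCV_range, function(m) m$pval))
sens_SMD <- data.frame(rho = rho_range,
                       estimate = sapply(MLMASMD_VCV_range, coef),
                       ci.lb = sapply(MLMASMD_VCV_range, function(m) m$ci.lb),
                       ci.ub = sapply(MLMASMD_VCV_range, function(m) m$ci.ub),
                       pval = sapply(MLMASMD_VCV_range, function(m) m$pval))
sens_lnRR
sens_SMD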

We summarize the results of the sensitivity analysis in Table S4 (\(lnRR\)) and Table S5 (\(SMD\)). While the null-hypothesis test of the overall effect of \(lnRR\) does not change across the \(\rho_{s}\) values, the point estimates are sensitive to the assumed \(\rho_{s}\) value. The results for \(SMD\) are robust to the assumed \(\rho_{s}\) values.

Table S4 Sensitivity analysis examining the robustness of the overall effect measured as \(lnRR\) to the assumption of \(\rho_{s}\) values in the multilevel meta-analysis.

Table S5 Sensitivity analysis examining the robustness of the overall effect measured as \(SMD\) to the assumption of \(\rho_{s}\) values in the multilevel meta-analysis.

Bonus 1

Quantifying heterogeneity is an important part of a meta-analysis. In the Supplementary Material, we showed the formulas for quantifying heterogeneity using the \(I^2\) statistic. The \(I^2\) statistic is a unit-free heterogeneity measure that expresses the proportion of the total variance that is due to variance beyond sampling variance.

Under the framework of the three-level MLMA, the \(I^2\) statistic can be partitioned into between- and within-study levels to provide extra biological information.
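For reference, one widely used multilevel formulation (which, to our understanding, is what i2_ml() implements) expresses each component relative to the sum of the variance components and a “typical” sampling variance \(\bar{v}\) computed from the observed sampling variances:

\[
I^2_{total} = \frac{\sigma_{u}^2 + \sigma_{w}^2}{\sigma_{u}^2 + \sigma_{w}^2 + \bar{v}}, \qquad
I^2_{Study\_ID} = \frac{\sigma_{u}^2}{\sigma_{u}^2 + \sigma_{w}^2 + \bar{v}}, \qquad
I^2_{ES\_ID} = \frac{\sigma_{w}^2}{\sigma_{u}^2 + \sigma_{w}^2 + \bar{v}}.
\]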

This can be easily done using i2_ml() from the orchaRd package.

For MLMA using lnRR:

i2_ml(MLMA_lnRR)
   I2_Total I2_Study_ID    I2_ES_ID 
  97.368409   89.493529    7.874879 

For MLMA using SMD:

i2_ml(MLMA_SMD)
    I2_Total  I2_Study_ID     I2_ES_ID 
7.376812e+01 7.376811e+01 3.339155e-07 

Bivariate multilevel meta-analysis of lnRR and SMD

In this section, we will show how to implement the bivariate multilevel meta-analysis (BMLMA) of lnRR and SMD (Equation 11) in R. We will need to use the function rma.mv() in metafor. The function rma.mv() requires a long format to fit the bivariate meta-analysis (or, more generally, multivariate meta-analysis). Let’s make a long format by combining the dataset using \(lnRR\) as effect size measure (dat_lnRR) and the dataset using \(SMD\) as effect size measure (dat_SMD):

dat_all <- data.frame(Study_ID = c(dat_lnRR$Study_ID, dat_SMD$Study_ID),
                      ES_ID = c(dat_lnRR$ES_ID, dat_SMD$ES_ID),
                      ES = c(dat_lnRR$lnRR, dat_SMD$SMD),
                      V_ES = c(dat_lnRR$V_lnRR, dat_SMD$V_SMD),
                      ES_measure = c(rep("lnRR", nrow(dat_lnRR)), rep("SMD", nrow(dat_SMD)))
)

Recall that the model specification of the BMLMA of lnRR and SMD (Equation 11) has three key parts: sampling variance-covariance (Equation 10), between-study variance-covariance (Equation 12), and within-study variance-covariance (Equation 13). The between- and within-study variance-covariances are used to account for the correlation/covariance between the \(lnRR\) and \(SMD\) parameters (\(Cov[u_{[lnRR]j},u_{[SMD]j}]=\rho_{u}u_{[lnRR]j}u_{[SMD]j}\) and \(Cov[w_{[lnRR]j},w_{[SMD]j}]=\rho_{w}w_{[lnRR]j}w_{[SMD]j}\)), while the sampling variance-covariance is used to account for the correlation/covariance between the \(lnRR\) and \(SMD\) estimates (to be more precise, sampling errors) (\(Cov[e_{[lnRR]j},e_{[SMD]j}]=\rho_{s}s_{[lnRR]j}s_{[SMD]j}\)). The utilization of the correlation between the lnRR and SMD, in turn, can increase the precision of the overall effect estimation, a concept known as borrowing of strength.

Firstly, let’s construct the sampling variance-covariance matrix. As demonstrated in the example of the multilevel meta-analysis, we will use the function vcalc() in metafor. Given that the sampling correlation (\(\rho_{s}\)) between the estimates of \(lnRR\) and \(SMD\) is not reported in the literature, we assume a constant value of 0.5 (see below for how to relax this assumption). The syntax for imputing the sampling variance-covariance matrix is as follows:

VCV <- vcalc(vi = V_ES, # sampling variances of lnRR and SMD that are correlated within the same study;
             cluster = Study_ID, # study identity;
             type = ES_measure, # different types of effect size measures underlying the observed effect sizes;
             data = dat_all, # the long format data frame;
             rho = 0.5, # assuming that the effect sizes within the same study are correlated with rho = 0.5.
             nearpd = TRUE # whether to force a non-positive definite (not invertible) VCV matrix to the nearest positive semi-definite matrix 
             )

In the above syntax, setting rho = 0.5 means we assume that the estimated \(lnRR\) and \(SMD\) within the same study are correlated with \(\rho\) = 0.5. Again, there are 3 points worth noting. Given their importance, we would like to recall them.

First, we recommend performing a sensitivity analysis to examine the extent to which the magnitude of the overall effect is sensitive to a range of \(\rho\) values.

Second, we recommend using robust variance estimation (RVE) to guard against the arbitrarily assumed rho = 0.5 used for constructing sampling variance-covariance matrix, thereby ensuring the valid statistical inferences on the overall effect (details see below).

Third, if the imputed sampling variance-covariance matrix is non-positive definite (or, equivalently, not invertible), the corresponding model output should be interpreted with caution.

Next, let’s show how to account for the correlated random effects. The argument random allows us to specify correlated random effects with a formula of the form ~ inner variable | outer variable: it starts with ~ followed by the inner variable, then a |, and then an identifier for the outer variable. To be more precise, ~ inner variable | outer variable adds correlated random effects for each level of the inner variable within each level of the outer variable. In our case, we have two correlated random effects: ~ ES_measure | Study_ID and ~ ES_measure | ES_ID. We then use list() to put the two random effects together: random = list(~ ES_measure | Study_ID, ~ ES_measure | ES_ID). This indicates that effect sizes with the same level of the study identity (Study_ID) and observation identity (ES_ID) share correlated random effects corresponding to the variable (ES_measure) denoting the two effect size measures \(lnRR\) and \(SMD\), while effect sizes with different levels of the study identity and observation identity are assumed to be statistically independent.

The argument struct will be used to impose the variance-covariance structure associated with the inner variable (ES_measure). According to Equations 12 and 13, the variable ES_measure should be given an unstructured variance-covariance matrix (equivalently, a heteroscedastic compound symmetric structure, because ES_measure only has two levels: \(lnRR\) and \(SMD\)).

For the fixed-effect part, the variable ES_measure needs to be supplied to the argument mods using the following formula: starting with a tilde ~, followed by the name of the variable: mods = ~ ES_measure. We also use -1 to remove the intercept (non-contrast coding).

The complete syntax to fit the bivariate multilevel meta-analysis of lnRR and SMD (Equation 11) is as follows:

BMLMA <- rma.mv(yi = ES, # specify the effect size estimate (the variable ES in our case);
                V = VCV, # specify the imputed variance-covariance matrix;
                mods = ~ ES_measure - 1, # specify the variable "ES_measure" indicating the types of effect size measures;
                random = list(~ ES_measure | Study_ID, 
                              ~ ES_measure | ES_ID), # add correlated random effects corresponding to the lnRR and SMD parameters in the same study;
                struct = "UN", # impose an unstructured variance-covariance of the study-specific random effects;
                test = "t", # t distribution is specified to test the overall effect against the null hypothesis and construct confidence intervals;
                method = "REML", # restricted likelihood maximum is assigned as the estimator for variance components as suggested;
                data = dat_all, # the long format dataset
                sparse = TRUE # use sparse matrix structures to speed up model fitting
               )

Bonus 2

When you run your own BMLMA model, you might encounter model convergence issues (e.g., when some model parameters are not identifiable). If this happens, you can tune the parameters related to the numerical optimization, such as the optimizer, the number of iterations, and the convergence threshold. The following link provides some useful tips: http://www.metafor-project.org/doku.php/tips:convergence_problems_rma_mv
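As an illustration only (the exact control options depend on your metafor version; see ?rma.mv and the link above), tuning the optimizer typically looks something like the following sketch, which refits the BMLMA with a different optimizer and prints the optimization progress:

BMLMA_retry <- rma.mv(yi = ES, V = VCV,
                      mods = ~ ES_measure - 1,
                      random = list(~ ES_measure | Study_ID,
                                    ~ ES_measure | ES_ID),
                      struct = "UN", test = "t", method = "REML",
                      data = dat_all,
                      control = list(optimizer = "optim", optmethod = "Nelder-Mead"), # switch numerical optimizer
                      verbose = TRUE) # print progress of the model fitting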

Let’s have a look at the model output from the fitted BMLMA of lnRR and SMD (the object BMLMA). The summary() function returns the model output:

summary(BMLMA)

Multivariate Meta-Analysis Model (k = 1075; method: REML)

    logLik    Deviance         AIC         BIC        AICc   
-1382.8971   2765.7943   2781.7943   2821.6200   2781.9296   

Variance Components:

outer factor: Study_ID   (nlvls = 214)
inner factor: ES_measure (nlvls = 2)

            estim    sqrt  k.lvl  fixed  level 
tau^2.1    0.8450  0.9192    532     no   lnRR 
tau^2.2    0.7261  0.8521    543     no    SMD 

      rho.lnRR  rho.SMD    lnRR  SMD 
lnRR         1                -  210 
SMD     0.8553        1      no    - 

outer factor: ES_ID      (nlvls = 543)
inner factor: ES_measure (nlvls = 2)

              estim    sqrt  k.lvl  fixed  level 
gamma^2.1    0.5219  0.7225    532     no   lnRR 
gamma^2.2    0.6938  0.8330    543     no    SMD 

      phi.lnRR  phi.SMD    lnRR  SMD 
lnRR         1   0.8741       -  532 
SMD     0.8741        1      no    - 

Test for Residual Heterogeneity:
QE(df = 1073) = 14365490971.0855, p-val < .0001

Test of Moderators (coefficients 1:2):
F(df1 = 2, df2 = 1073) = 11.4594, p-val < .0001

Model Results:

                estimate      se     tval    df    pval    ci.lb    ci.ub     
ES_measurelnRR    0.0179  0.0799   0.2244  1073  0.8225  -0.1388   0.1747     
ES_measureSMD    -0.2987  0.0919  -3.2503  1073  0.0012  -0.4790  -0.1184  ** 

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

If you understood the interpretation of the MLMA of lnRR and SMD above, you should be able to interpret the model output of the BMLMA of lnRR and SMD yourself. Nevertheless, let’s look at the results in the order they appear.

  • Multivariate Meta-Analysis Model details

The first line tells us:

k = 1075 = the number of effect sizes (k) fed to the model,

method: REML = the REML method was specified as the estimation procedure for model fitting to obtain model estimates (e.g., variance components, overall effects).

  • Fit statistics

The next line contains several fit statistics, which can be used for model comparison or selection, including:

logLik = restricted log-likelihood of the fitted model,

Deviance = goodness-of-fit statistic of the fitted model,

AIC = Akaike information criterion score of the fitted model,

BIC = Bayesian information criterion, and

AICc = AIC corrected for small sample sizes.

  • Variance Components

The first two lines are the descriptions of the first correlated random effect structure.

outer factor: Study_ID = The first grouping variable Study_ID wherein the two types of effect size measures (\(lnRR\) and \(SMD\)) are correlated. nlvls = 214 means the variable Study_ID has 214 levels.

inner factor: ES_measure = The variable ES_measure containing the correlated effect size measures (\(lnRR\) and \(SMD\)). nlvls = 2 means the variable ES_measure has 2 levels.

The next three lines give the estimated variance components of the first correlated random effect.

tau^2.1 = between-study variance of the first level of ES_measure (lnRR) \(\tau_{[lnRR]}^2\) in Equation 8.

tau^2.2 = between-study variance of the second level of ES_measure (SMD) \(\tau_{[SMD]}^2\) in Equation 9.

estim = the estimated amount of corresponding variance components (\(\tau_{[lnRR]}^2\) = 0.8450 and \(\tau_{[SMD]}^2\) = 0.7261),

sqrt = standard deviation of variance components (\(\tau_{[lnRR]}\) = 0.9192 and \(\tau_{[SMD]}\) = 0.8521),

k.lvl = the number of levels for each random effect,

fixed = whether the variance component is fixed at a given value (fixing values is useful for sensitivity analyses, likelihood ratio tests, or for imposing a desired variance-covariance structure on the data); no means the variance components are not fixed but are estimated from the data,

level = the levels of the inner random effect inner factor: ES_measure.

Then comes the estimated correlation matrix of the first correlated random effect.

rho.lnRR = the estimated correlation within the same level of the first grouping variable (Study_ID) between the lnRR parameter and itself (1), and between the lnRR parameter and the SMD parameter (0.8553),

rho.SMD = the estimated correlation within the same level of the first grouping variable (Study_ID) between the SMD parameter and the lnRR parameter (data not shown given it is a symmetric matrix), and between the SMD parameter and itself (1)

The following two lines are the descriptions of the second correlated random effect structure.

outer factor: ES_ID = The second grouping variable ES_ID, wherein the two types of effect size measures (\(lnRR\) and \(SMD\)) are correlated. nlvls = 543 means the variable ES_ID has 543 levels.

inner factor: ES_measure = The variable ES_measure containing the correlated effect size measures (\(lnRR\) and \(SMD\)). nlvls = 2 means the variable ES_measure has 2 levels.

The next three lines give the estimated variance components of the second correlated random effect.

gamma^2.1 = within-study variance of the first level of ES_measure (lnRR) \(\gamma_{[lnRR]}^2\) in Equation 8.

gamma^2.2 = within-study variance of the second level of ES_measure (SMD) \(\gamma_{[SMD]}^2\) in Equation 9.

estim = the estimated amount of corresponding variance components (\(\gamma_{[lnRR]}^2\) = 0.5219 and \(\gamma_{[SMD]}^2\) = 0.6938),

sqrt = standard deviation of variance components (\(\gamma_{[lnRR]}\) = 0.7225 and \(\gamma_{[SMD]}\) = 0.8330),

k.lvl = the number of levels for each random effect,

fixed = whether the variance component is fixed at a given value (fixing values is useful for sensitivity analyses, likelihood ratio tests, or for imposing a desired variance-covariance structure on the data); no means the variance components are not fixed but are estimated from the data,

level = the levels of the inner random effect inner factor: ES_measure.

Next, we will see the estimated correlation matrix of the second correlated random effect.

phi.lnRR = the estimated correlation within the same level of the second grouping variable (ES_ID) between the lnRR parameter and itself (1), and between the lnRR parameter and the SMD parameter (0.8741),

phi.SMD = the estimated correlation within the same level of the second grouping variable (ES_ID) between the SMD parameter and the lnRR parameter (data not shown given it is a symmetric matrix), and between the SMD parameter and itself (1).

  • Test for Residual Heterogeneity

QE(df = 1073) = 14365490971.0855 = The test statistic of Cochran’s Q-test of the residual heterogeneity, which tests whether the effect sizes are consistent. Under the null hypothesis, the test statistic follows a chi-square distribution with 1073 degrees of freedom,

p-val < .0001 = The null hypothesis is rejected and thus effect sizes are heterogeneous.

  • Test of Moderators (coefficients 1:2)

F(df1 = 2, df2 = 1073) = 11.4594 = the test statistic of the omnibus test (the so-called QM-test) of the overall effect measured as \(lnRR\) and \(SMD\). Under the joint null hypothesis, the test statistic follows asymptotically an F-distribution with 2 and 1073 degrees of freedom,

p-val < .0001 = the joint null hypothesis is rejected and thus at least one overall effect (whether measured as \(lnRR\) or \(SMD\)) is statistically significant.

  • Model Results

The estimated overall effects, confidence intervals, and the corresponding inferential statistics.

estimate = the estimate of overall effect (\(\mu_{lnRR}\) = 0.0179 under ES_measurelnRR and \(\mu_{SMD}\) = -0.2987 under ES_measureSMD),

se = the standard error of the estimate; here, \(\text{SE}[\mu_{lnRR}]\) = 0.0799 and \(\text{SE}[\mu_{SMD}]\) = 0.0919,

tval = the value of test statistic (in our case: t-value); a t-distribution with 1073 degrees of freedom to test the overall effect and compute (95% by default) confidence intervals,

df = degrees of freedom of the t-distribution used to test the overall effect and compute (95% by default) confidence intervals,

pval = the corresponding p value from the test; in our case, 0.8225 for the test of the overall effect measured as \(lnRR\) and 0.0012 for the test of the overall effect measured as \(SMD\),

ci.lb = lower boundary of (95% by default) confidence intervals,

ci.ub = upper boundary of (95% by default) confidence intervals.

This example happens to show that using different effect size measures might lead to different conclusions. Using \(lnRR\), there is no statistically significant overall effect, while using \(SMD\) yields a statistically significant overall effect. To make matters worse, the \(lnRR\) and \(SMD\) estimates have different signs (see the main text for the explanations). This creates room for analytic flexibility or “researcher degrees of freedom”. Analysts may (un)intentionally search for methodological specifications that align with their desired findings, leading to confirmation bias (i.e., seeing what they want to see) and the risk of erroneously declaring an effect where there is none (i.e., a high false positive rate).

Bonus 3

As shown above, the lnRR parameters (or true effect sizes) are positively correlated with the SMD parameters. Some users might want to visualize this correlation. To do so, we first need to derive the regression line (or, more precisely, the intercept and slope of the regression line) from the BMLMA model, regressing the SMD parameters on the lnRR parameters.

To get the intercept and slope of the regression line, we use the function matreg() to fit a regression model based on the correlation and covariance matrices derived from the fitted BMLMA. With the intercept and slope in hand, it is easy to use geom_abline() to visualize the bivariate correlation between the lnRR parameters and the SMD parameters. We can also add the 95% coverage region using the ellipse package. The whole syntax is as follows:

ab.l <- matreg(y=2, x=1, R=BMLMA$G, cov=TRUE, means=coef(BMLMA), n=BMLMA$g.levels.comb.k) # fit regression model

xy <- ellipse(BMLMA$G, centre=coef(BMLMA), level=0.95) # get 95% coverage region
ellipse_df <- data.frame(x = xy[, 1], y = xy[, 2]) # convert it into a data frame so that we can use ggplot2 to make a figure

pivot_wider(dat_all, names_from = ES_measure, values_from = c(ES, V_ES)) %>%
  ggplot() + 
  geom_point(aes(x = ES_lnRR, y = ES_SMD, size = 1/sqrt(V_ES_lnRR)), color = "#1B9E77", alpha = 0.5) + 
  geom_point(aes(x = coef(BMLMA)[1], y = coef(BMLMA)[2]), color = "red", size = 2) + 
  geom_abline(intercept = ab.l$tab$beta[1], slope = ab.l$tab$beta[2]) +
  geom_path(data = ellipse_df, aes(x = x, y = y), color = "gray50") +
  #scale_x_continuous(limits = c(-2.5, 2.5)) + 
  #scale_y_continuous(limits = c(-2.5, 2.5)) + 
  guides(size = "none") + 
  labs(x = "lnRR", y = "SMD") + 
  theme_bw()

Now let’s compare this “galaxy plot” with orchard plots of the two effects separately.

# orchard plot for lnRR
orchard_plot(mod2_lnRR, group = "Study_ID",  xlab = "lnRR", trunk.size = 0.5, branch.size = 4)  + scale_fill_manual(values = "#1B9E77") +
    scale_colour_manual(values ="#1B9E77")

# orchard plot for SMD
orchard_plot(mod2_SMD, group = "Study_ID",  xlab = "SMD", trunk.size = 0.5, branch.size = 4) + scale_fill_manual(values = "#1B9E77") +
    scale_colour_manual(values ="#1B9E77")

Sensitivity analysis of the imputed sampling variance-covariance matrix

Next, we show how to perform a sensitivity analysis to examine the extent to which the overall effect derived from the BMLMA of lnRR and SMD is sensitive to the assumption of sampling correlation values (\(\rho_{s}\)) used for constructing sampling variance-covariance matrix.

First, set a series of \(\rho_{s}\) (i.e., 0.3, 0.5, 0.7) (we assume these values arbitrarily; you can set them based on your expertise in your field):

rho_range <- c(0.3, 0.5, 0.7)

Then, we write a loop to repeatedly run the specified model, changing \(\rho_{s}\) one value at a time.

BMLMA_VCV_range <- list()
for (i in 1:length(rho_range)) {
VCV_range <- vcalc(vi = V_ES, 
             cluster = Study_ID, 
             type = ES_measure, 
             data = dat_all, 
             rho = rho_range[i], # setting varying rho values 
             nearpd = TRUE # if a non-positive definite (not invertible) matrix is forced to the nearest positive semi-definite matrix, interpret the following model output with caution
             )

BMLMA_VCV_range[[i]] <- rma.mv(yi = ES, 
                               V = VCV_range, # sampling variance covariance matrix with varying values of rho. 
                               mods = ~ ES_measure - 1, 
                               random = list(~ ES_measure | Study_ID, 
                                             ~ ES_measure | ES_ID), 
                               struct = "UN",
                               test = "t", 
                               method = "REML", 
                               data = dat_all 
)
}

We summarize the results of the sensitivity analysis in Table S6 (\(lnRR\)) and Table S7 (\(SMD\)).

Table S6 Sensitivity analysis examining the robustness of the overall effect measured as \(lnRR\) to the assumption of \(\rho_{s}\) values in the bivariate multilevel meta-analysis.

Table S7 Sensitivity analysis examining the robustness of the overall effect measured as \(SMD\) to the assumption of \(\rho_{s}\) values in the bivariate multilevel meta-analysis.

The above BMLMA of lnRR and SMD has many parameters to estimate. For some small datasets, this proposed approach of jointly synthesizing lnRR and SMD may therefore be prone to overparameterization, and its advantages might be compromised by the increasing number of parameters that need to be estimated. In such cases, we recommend using MLMA as the main method and BMLMA as a sensitivity analysis. In essence, MLMA is a simplified version of BMLMA.

Robust variance estimation

Robust variance estimation provides a way to guard against the arbitrarily assumed sampling correlation (\(\rho_{s}\)) and inaccurate estimates of variance components. Therefore, as we emphasized in the main text, we should perform robust variance estimation after fitting either MLMA or BMLMA.

In R, it is very straightforward to implement robust variance estimation. We can supply the fitted MLMA or BMLMA to the function robust() from the metafor package, using the ‘bias-reduced linearization’ correction (the so-called CR2 estimator in the clubSandwich package [3]) as a small-sample adjustment.

The robust variance estimation of BMLMA model:

BMLMA_RVE <- robust(BMLMA, # fitted bivariate multilevel model for which to make robust model inference;
                    cluster = dat_all$Study_ID, 
                    clubSandwich = TRUE # ‘bias-reduced linearization’ small-sample size correction (the so-called CR2) is specified to approximate variance-covariance via clubSandwich package.
                    )

Now, let’s look at the corresponding output:

print(BMLMA_RVE)

Multivariate Meta-Analysis Model (k = 1075; method: REML)

Variance Components:

outer factor: Study_ID   (nlvls = 214)
inner factor: ES_measure (nlvls = 2)

            estim    sqrt  k.lvl  fixed  level 
tau^2.1    0.8450  0.9192    532     no   lnRR 
tau^2.2    0.7261  0.8521    543     no    SMD 

      rho.lnRR  rho.SMD    lnRR  SMD 
lnRR         1                -  210 
SMD     0.8553        1      no    - 

outer factor: ES_ID      (nlvls = 543)
inner factor: ES_measure (nlvls = 2)

              estim    sqrt  k.lvl  fixed  level 
gamma^2.1    0.5219  0.7225    532     no   lnRR 
gamma^2.2    0.6938  0.8330    543     no    SMD 

      phi.lnRR  phi.SMD    lnRR  SMD 
lnRR         1   0.8741       -  532 
SMD     0.8741        1      no    - 

Test for Residual Heterogeneity:
QE(df = 1073) = 14365490971.0855, p-val < .0001

Number of estimates:   1075
Number of clusters:    214
Estimates per cluster: 1-30 (mean: 5.02, median: 4)

Test of Moderators (coefficients 1:2):¹
F(df1 = 2, df2 = 193.65) = 11.4172, p-val < .0001

Model Results:

                estimate      se¹     tval¹      df¹    pval¹    ci.lb¹ 
ES_measurelnRR    0.0179  0.0799    0.2244   200.58   0.8226   -0.1396  
ES_measureSMD    -0.2987  0.0919   -3.2511   197.41   0.0014   -0.4798  
                  ci.ub¹     
ES_measurelnRR   0.1755      
ES_measureSMD   -0.1175   ** 

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

1) results based on cluster-robust inference (var-cov estimator: CR2,
   approx t/F-tests and confidence intervals, df: Satterthwaite approx)

As you can see, robust variance estimation computes robust standard errors and uses them for the subsequent null-hypothesis tests (t-statistics and p-values) and confidence intervals of the overall effects.

The robust variance estimation of MLMA model using lnRR:

MLMA_RVE_lnRR <- robust(MLMA_lnRR,
                        cluster = dat_lnRR$Study_ID, 
                        clubSandwich = TRUE 
                        )

print(MLMA_RVE_lnRR)

Multivariate Meta-Analysis Model (k = 532; method: REML)

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  0.9160  0.9571    210     no  Study_ID 
sigma^2.2  0.0806  0.2839    532     no     ES_ID 

Test for Heterogeneity:
Q(df = 531) = 10070.1835, p-val < .0001

Number of estimates:   532
Number of clusters:    210
Estimates per cluster: 1-15 (mean: 2.53, median: 2)

Model Results:

estimate      se¹    tval¹      df¹    pval¹    ci.lb¹   ci.ub¹    
  0.0087  0.0731   0.1192   200.05   0.9053   -0.1354   0.1528     

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

1) results based on cluster-robust inference (var-cov estimator: CR2,
   approx t-test and confidence interval, df: Satterthwaite approx)

The robust variance estimation of MLMA model using SMD:

MLMA_RVE_SMD <- robust(MLMA_SMD,
                        cluster = dat_SMD$Study_ID, 
                        clubSandwich = TRUE 
                        )

print(MLMA_RVE_SMD)

Multivariate Meta-Analysis Model (k = 543; method: REML)

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2.1  1.8664  1.3661    214     no  Study_ID 
sigma^2.2  0.0000  0.0001    543     no     ES_ID 

Test for Heterogeneity:
Q(df = 542) = 1720.3602, p-val < .0001

Number of estimates:   543
Number of clusters:    214
Estimates per cluster: 1-15 (mean: 2.54, median: 2)

Model Results:

estimate      se¹     tval¹      df¹    pval¹    ci.lb¹    ci.ub¹    
 -0.2274  0.1063   -2.1391   204.82   0.0336   -0.4370   -0.0178   * 

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

1) results based on cluster-robust inference (var-cov estimator: CR2,
   approx t-test and confidence interval, df: Satterthwaite approx)

License

This document is licensed under the following license: CC Attribution-Noncommercial-Share Alike 4.0 International.

Software and package versions

R version 4.3.3 (2024-02-29)

Platform: aarch64-apple-darwin20 (64-bit)

locale: en_US.UTF-8||en_US.UTF-8||en_US.UTF-8||C||en_US.UTF-8||en_US.UTF-8

attached base packages: stats, graphics, grDevices, utils, datasets, methods and base

other attached packages: orchaRd(v.2.0), pander(v.0.6.5), patchwork(v.1.1.3), clubSandwich(v.0.5.10), metafor(v.4.6-0), numDeriv(v.2016.8-1.1), metadat(v.1.2-0), Matrix(v.1.6-5), ellipse(v.0.5.0), readxl(v.1.4.3), DT(v.0.29), here(v.1.0.1), lubridate(v.1.9.3), forcats(v.1.0.0), stringr(v.1.5.1), dplyr(v.1.1.4), purrr(v.1.0.2), readr(v.2.1.5), tidyr(v.1.3.1), tibble(v.3.2.1), ggplot2(v.3.5.0), tidyverse(v.2.0.0), rmdformats(v.1.0.4) and knitr(v.1.45)

loaded via a namespace (and not attached): beeswarm(v.0.4.0), gtable(v.0.3.4), xfun(v.0.43), bslib(v.0.7.0), htmlwidgets(v.1.6.2), lattice(v.0.22-5), mathjaxr(v.1.6-0), tzdb(v.0.4.0), crosstalk(v.1.2.0), vctrs(v.0.6.5), tools(v.4.3.3), generics(v.0.1.3), sandwich(v.3.0-2), fansi(v.1.0.6), highr(v.0.10), pacman(v.0.5.1), pkgconfig(v.2.0.3), lifecycle(v.1.0.4), farver(v.2.1.1), compiler(v.4.3.3), munsell(v.0.5.1), codetools(v.0.2-19), vipor(v.0.4.7), htmltools(v.0.5.8.1), sass(v.0.4.9), yaml(v.2.3.8), pillar(v.1.9.0), jquerylib(v.0.1.4), MASS(v.7.3-60.0.1), cachem(v.1.0.8), multcomp(v.1.4-25), nlme(v.3.1-164), tidyselect(v.1.2.1), digest(v.0.6.35), mvtnorm(v.1.2-4), stringi(v.1.8.3), bookdown(v.0.35), splines(v.4.3.3), labeling(v.0.4.3), latex2exp(v.0.9.6), rprojroot(v.2.0.3), fastmap(v.1.1.1), grid(v.4.3.3), colorspace(v.2.1-0), cli(v.3.6.2), magrittr(v.2.0.3), survival(v.3.5-8), utf8(v.1.2.4), TH.data(v.1.1-2), withr(v.3.0.0), scales(v.1.3.0), ggbeeswarm(v.0.7.2), estimability(v.1.5), timechange(v.0.3.0), rmarkdown(v.2.26), emmeans(v.1.10.0), cellranger(v.1.1.0), zoo(v.1.8-12), hms(v.1.1.3), coda(v.0.19-4), evaluate(v.0.23), rlang(v.1.1.3), Rcpp(v.1.0.12), xtable(v.1.8-4), glue(v.1.7.0), rstudioapi(v.0.16.0), jsonlite(v.1.8.8) and R6(v.2.5.1)

References

1. Burgess BJ, Purves D, Mace G, Murrell DJ. Classifying ecosystem stressor interactions: Theory highlights the data limitations of the additive null model and the difficulty in revealing ecological surprises. Global Change Biology. 2021;27:3052–65.
2. Viechtbauer W. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software. 2010;36:1–48.
3. Pustejovsky JE, Tipton E. Meta-analysis with robust variance estimation: Expanding the range of working models. Prevention Science. 2022;23:425–38.