By default, the `RoBMA()` function specifies models as a combination of all supplied prior distributions (across the null and alternative specifications), with the prior model odds equal to the product of the prior distributions' prior odds. Under the default settings, this results in 12 meta-analytic models (Maier et al., 2020). In another vignette, we illustrated that RoBMA can also be used to reproduce Bayesian model-averaged meta-analysis (BMA) (Gronau et al., 2017), and in a tutorial paper, we showed how to fit custom models in JASP (Bartoš et al., 2020). However, the package was built so that it can serve as a framework for estimating even more customized meta-analytic model ensembles. Here, we illustrate how to do exactly that.

Please keep in mind that all models should be justified by theory. Furthermore, the models should be tested to make sure that they perform as intended before inference is drawn from them. The following sections only illustrate the functionality of the package.

To illustrate the custom model building procedure, we use data from the infamous Bem (2011) “Feeling the future” paper. We use a coding of the results as presented by the original author (Bem et al., 2011). According to the original papers, participants showed statistically significant signs of precognition (predicting the future) in 8/9 experiments. However, there are many issues with the paper (Rouder & Morey, 2011; Schimmack, 2018; Wagenmakers et al., 2011), which are not important for this vignette.

We consider the following scenarios as plausible explanations for the data and include them in the meta-analytic ensemble:

- there is absolutely no precognition effect - a fixed effects model assuming the effect size to be zero (\(H_{0}^{\overline{\omega}f}\))
- the experiments measured the same underlying precognition effect - a fixed effects model (\(H_{1}^{\overline{\omega}f}\))
- each of the experiments measured a slightly different precognition effect - a random effects model (\(H_{1}^{\overline{\omega}r}\))
- there is absolutely no precognition effect and the results occurred only due to publication bias - a fixed effects model assuming the effect size to be zero and publication bias (\(H_{0}^{{\omega}f}\))
- the experiments measured the same underlying precognition effect and the results were inflated by publication bias - a fixed effects model assuming publication bias (\(H_{1}^{{\omega}f}\))
- each of the experiments measured a slightly different precognition effect and the results were inflated by publication bias - a random effects model assuming publication bias (\(H_{1}^{{\omega}r}\))

If we were to fit the ensemble with the `RoBMA()` function by specifying all of the priors at once, we would end up with two more models than requested (the random effects model assuming the effect size to be zero, \(H_{0}^{\overline{\omega}r}\), and the random effects model assuming the effect size to be zero and publication bias, \(H_{0}^{{\omega}r}\)). Furthermore, we could not specify different parameters of the prior distributions for each model, which the following process allows (although we do not utilize that option here).

We start by fitting only the first model using the `RoBMA()` function and then continuously update the fitted object until it includes all of the models. We explicitly specify prior distributions for all parameters using the `prior()` function and set the priors to be treated as null priors for all parameters. We also add `silent = TRUE` to the `control` argument (to suppress the fitting messages) and set a seed to ensure reproducibility of the results.

```
library(RoBMA)
#> Loading required namespace: runjags
#> module RoBMA loaded
#> Please, note the following changes in version 1.2.0 (see NEWS for more details):
#> - all models are now estimated using the likelihood of effects sizes (instead of t-statistics)
#> - parametrization of random effect models changed (the studies' true effects are marginalized out of the likelihood)
fit <- RoBMA(t = Bem2011$t, n = Bem2011$N, study_names = Bem2011$study,
             priors_mu = NULL, priors_tau = NULL, priors_omega = NULL,
             priors_mu_null = prior("spike", parameters = list(location = 0)),
             priors_tau_null = prior("spike", parameters = list(location = 0)),
             priors_omega_null = prior("spike", parameters = list(location = 1)),
             control = list(silent = TRUE), seed = 666)
```

Before we add the second model to the ensemble, we need to decide on the prior distribution for the mean parameter. If precognition existed, the effect would have to be small - otherwise all casinos would have gone bankrupt by now. Moreover, a negative precognition effect does not make much sense. Therefore, we use a normal distribution with mean 0.15 and standard deviation 0.10, placing most of the prior mass on small positive effect sizes. To get a better grasp of the prior distribution, we can visualize it using the `plot.prior()` function (the figure can be created with the ggplot2 package by adding the `plot_type = "ggplot"` argument).
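A minimal sketch of the visualization call, assuming the `plot` method for `prior` objects behaves as in the RoBMA version used in this vignette:

```
# visualize the normal prior for the mean parameter
# (the same prior() call is later passed to update())
plot(prior("normal", parameters = list(mean = .15, sd = .10)))

# alternatively, create the figure as a ggplot2 object
plot(prior("normal", parameters = list(mean = .15, sd = .10)),
     plot_type = "ggplot")
```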

We add the second model to the ensemble using the `update.RoBMA()` function. The function can also be used for many other purposes - updating settings, changing prior model probabilities, and refitting failed models. Here, we supply the fitted ensemble object and add arguments specifying the prior distribution of each parameter for the additional model. Since we are adding model 2, we set the prior for the mu parameter to be treated as an alternative prior and the remaining priors as null priors. If we wanted, we could also specify the `prior_odds` argument to change the prior probability of the newly added model, but we do not utilize this option here and keep the default value, which sets the prior odds of the new model to `1`. (Note that the arguments for specifying prior distributions in the `update.RoBMA()` function are `prior_X` - singular - in contrast to the `RoBMA()` function, which uses `priors_X` - plural.)

```
fit <- update(fit,
              prior_mu = prior("normal", parameters = list(mean = .15, sd = .10)),
              prior_tau_null = prior("spike", parameters = list(location = 0)),
              prior_omega_null = prior("spike", parameters = list(location = 1)))
```

We can inspect the updated ensemble and verify that it contains both models by adding the `type = "models"` argument to the `summary.RoBMA()` function. We can also inspect the individual model estimates by changing the argument to `type = "individual"`.

```
summary(fit, type = "models")
#> Call:
#> RoBMA(t = Bem2011$t, n = Bem2011$N, study_names = Bem2011$study,
#> priors_mu = NULL, priors_tau = NULL, priors_omega = NULL,
#> priors_mu_null = prior("spike", parameters = list(location = 0)),
#> priors_tau_null = prior("spike", parameters = list(location = 0)),
#> priors_omega_null = prior("spike", parameters = list(location = 1)),
#> control = list(silent = TRUE), seed = 666)
#>
#> Robust Bayesian Meta-Analysis
#> Prior mu Prior tau Prior omega Prior prob. Post. prob.
#> 1 Spike(0) Spike(0) Spike(1) 0.500 0.000
#> 2 Normal(0.15, 0.1)[-Inf, Inf] Spike(0) Spike(1) 0.500 1.000
#> log(MargLik) Incl. BF
#> 1 -16.499 0.000
#> 2 1.033 41138030.065
summary(fit, type = "individual")
#> Call:
#> RoBMA(t = Bem2011$t, n = Bem2011$N, study_names = Bem2011$study,
#> priors_mu = NULL, priors_tau = NULL, priors_omega = NULL,
#> priors_mu_null = prior("spike", parameters = list(location = 0)),
#> priors_tau_null = prior("spike", parameters = list(location = 0)),
#> priors_omega_null = prior("spike", parameters = list(location = 1)),
#> control = list(silent = TRUE), seed = 666)
#>
#> Individual Models Summary
#>
#> Model: 1
#> Prior prob.: 0.500 Prior mu: Spike(0)
#> log(MargLik): -16.499 Prior tau: Spike(0)
#> Post. prob.: 0.000 Prior omega: Spike(1)
#> Incl. BF: 0.000
#>
#> Model Coefficients:
#> NULL
#>
#>
#> Model: 2
#> Prior prob.: 0.500 Prior mu: Normal(0.15, 0.1)[-Inf, Inf]
#> log(MargLik): 1.033 Prior tau: Spike(0)
#> Post. prob.: 1.000 Prior omega: Spike(1)
#> Incl. BF: 41138030.065
#>
#> Model Coefficients:
#> Mean SD .025 Median .975 MCMC error Error % of SD ESS Rhat
#> mu 0.329 0.053 0.225 0.329 0.433 0.000 0.6 27929 1.000
```

We also need to decide on the prior distributions for the remaining models. For the heterogeneity parameter tau in the random effects models (3, 6), we use the usual Inverse-Gamma(1, 0.15) prior distribution based on empirically observed heterogeneities (Erp et al., 2017). For the models assuming publication bias (4-6), we specify a one-sided three-step weight function that differentiates between marginally significant and significant *p*-values, which we visualize below.
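The weight-function prior can be visualized the same way as the normal prior above; a sketch, assuming `plot.prior()` supports weight-function priors as in the version used here:

```
# one-sided three-step weight function prior:
# steps at one-sided p = .05 and p = .10, unit Dirichlet weights alpha = c(1, 1, 1)
plot(prior("one.sided", parameters = list(steps = c(.05, .10), alpha = c(1, 1, 1))))
```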

Now we just need to add the remaining models to the ensemble using the `update.RoBMA()` function, as previously illustrated.

```
### adding model 3
fit <- update(fit,
              prior_mu = prior("normal", parameters = list(mean = .15, sd = .10)),
              prior_tau = prior("invgamma", parameters = list(shape = 1, scale = .15)),
              prior_omega_null = prior("spike", parameters = list(location = 1)))
### adding model 4
fit <- update(fit,
              prior_mu = prior("spike", parameters = list(location = 0)),
              prior_tau = prior("spike", parameters = list(location = 0)),
              prior_omega = prior("one.sided", parameters = list(steps = c(.05, .10), alpha = c(1, 1, 1))))
### adding model 5
fit <- update(fit,
              prior_mu = prior("normal", parameters = list(mean = .15, sd = .10)),
              prior_tau = prior("spike", parameters = list(location = 0)),
              prior_omega = prior("one.sided", parameters = list(steps = c(.05, .10), alpha = c(1, 1, 1))))
### adding model 6
fit <- update(fit,
              prior_mu = prior("normal", parameters = list(mean = .15, sd = .10)),
              prior_tau = prior("invgamma", parameters = list(shape = 1, scale = .15)),
              prior_omega = prior("one.sided", parameters = list(steps = c(.05, .10), alpha = c(1, 1, 1))))
```

We verify that all of the requested models are included in the ensemble using the `summary.RoBMA()` function with the `type = "models"` argument.

```
summary(fit, type = "models")
#> Call:
#> RoBMA(t = Bem2011$t, n = Bem2011$N, study_names = Bem2011$study,
#> priors_mu = NULL, priors_tau = NULL, priors_omega = NULL,
#> priors_mu_null = prior("spike", parameters = list(location = 0)),
#> priors_tau_null = prior("spike", parameters = list(location = 0)),
#> priors_omega_null = prior("spike", parameters = list(location = 1)),
#> control = list(silent = TRUE), seed = 666)
#>
#> Robust Bayesian Meta-Analysis
#> Prior mu Prior tau
#> 1 Spike(0) Spike(0)
#> 2 Normal(0.15, 0.1)[-Inf, Inf] Spike(0)
#> 3 Normal(0.15, 0.1)[-Inf, Inf] InvGamma(1, 0.15)[0, Inf]
#> 4 Spike(0) Spike(0)
#> 5 Normal(0.15, 0.1)[-Inf, Inf] Spike(0)
#> 6 Normal(0.15, 0.1)[-Inf, Inf] InvGamma(1, 0.15)[0, Inf]
#> Prior omega Prior prob. Post. prob. log(MargLik)
#> 1 Spike(1) 0.167 0.000 -16.499
#> 2 Spike(1) 0.167 0.008 1.033
#> 3 Spike(1) 0.167 0.002 -0.153
#> 4 One-sided((0.1, 0.05), (1, 1, 1)) 0.167 0.038 2.579
#> 5 One-sided((0.1, 0.05), (1, 1, 1)) 0.167 0.642 5.417
#> 6 One-sided((0.1, 0.05), (1, 1, 1)) 0.167 0.310 4.688
#> Incl. BF
#> 1 0.000
#> 2 0.040
#> 3 0.012
#> 4 0.195
#> 5 8.973
#> 6 2.244
```

Finally, we use the `summary.RoBMA()` function to inspect the ensemble results.

```
summary(fit)
#> Call:
#> RoBMA(t = Bem2011$t, n = Bem2011$N, study_names = Bem2011$study,
#> priors_mu = NULL, priors_tau = NULL, priors_omega = NULL,
#> priors_mu_null = prior("spike", parameters = list(location = 0)),
#> priors_tau_null = prior("spike", parameters = list(location = 0)),
#> priors_omega_null = prior("spike", parameters = list(location = 1)),
#> control = list(silent = TRUE), seed = 666)
#>
#> Robust Bayesian Meta-Analysis
#> Models Prior prob. Post. prob. Incl. BF
#> Effect 5/6 0.833 1.000 1.026453e+09
#> Heterogeneity 4/6 0.667 0.992 6.187900e+01
#> Pub. bias 3/6 0.500 0.990 9.457000e+01
#>
#> Model-averaged estimates
#> Mean Median 0.025 0.975
#> mu 0.203 0.209 0.000 0.359
#> tau 0.039 0.000 0.000 0.234
#> omega[0,0.05] 1.000 1.000 1.000 1.000
#> omega[0.05,0.1] 0.504 0.488 0.084 0.976
#> omega[0.1,1] 0.091 0.047 0.001 0.435
#> (Estimated omegas correspond to one-sided p-values)
```

The finalized ensemble can be treated like any other `RoBMA` object using the `summary.RoBMA()`, `plot.RoBMA()`, and `diagnostics()` functions. The results from our ensemble indicate support for the presence of the effect, \(BF_{10} = 1.03e+09\), heterogeneity, \(BF_{rf} = 61.88\), and publication bias, \(BF_{\omega\overline{\omega}} = 94.57\).

For example, we can use `plot.RoBMA()` with the `parameter = "mu", prior = TRUE` arguments to plot the prior (grey) and posterior (black) distribution of the effect size. Or we can change the parameter argument to `parameter = "omega"` to plot the prior (grey dashed lines) and posterior (black lines, with a grey area covering the 95% CI) distribution of the weight function.
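A sketch of the two plotting calls, matching the arguments named above (assuming the fitted ensemble object `fit` from the previous sections):

```
# prior (grey) and posterior (black) distribution of the effect size
plot(fit, parameter = "mu", prior = TRUE)

# prior and posterior distribution of the weight function
plot(fit, parameter = "omega", prior = TRUE)
```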

As pointed out at the beginning of the vignette, the intention of this example was not to draw inference about the results of the Bem (2011) studies. Furthermore, selection models and their ensembles implemented in `RoBMA` can only control and correct for publication bias. Using a simulation study, we showed that the default 12-model `RoBMA` meta-analytic ensemble is capable of testing and estimating parameters under various conditions, assuming that publication bias operates on *p*-values. However, we also showed that `RoBMA`, like most other methods, fails to recover the true effect sizes in cases of severe p-hacking (Maier et al., 2020).

Bartoš, F., Maier, M., & Wagenmakers, E.-J. (2020). Adjusting for publication bias in JASP — selection models and robust Bayesian meta-analysis. *PsyArXiv*. https://doi.org/10.31234/osf.io/75bqn

Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. *Journal of Personality and Social Psychology*, *100*(3), 407. https://doi.org/10.1037/a0021524

Bem, D. J., Utts, J., & Johnson, W. O. (2011). Must psychologists change the way they analyze their data? *Journal of Personality and Social Psychology*, *101*(4), 716. https://doi.org/10.1037/a0024777

Erp, S. van, Verhagen, J., Grasman, R. P., & Wagenmakers, E.-J. (2017). Estimates of between-study heterogeneity for 705 meta-analyses reported in Psychological Bulletin from 1990–2013. *Journal of Open Psychology Data*, *5*(1). https://doi.org/10.5334/jopd.33

Gronau, Q. F., Van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E.-J. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. *Comprehensive Results in Social Psychology*, *2*(1), 123–138. https://doi.org/10.1080/23743603.2017.1326760

Maier, M., Bartoš, F., & Wagenmakers, E.-J. (2020). Robust Bayesian meta-analysis: Addressing publication bias with model-averaging. *PsyArXiv*. https://doi.org/10.31234/osf.io/u4cns

Rouder, J. N., & Morey, R. D. (2011). A Bayes factor meta-analysis of Bem’s ESP claim. *Psychonomic Bulletin & Review*, *18*(4), 682–689. https://doi.org/10.3758/s13423-011-0088-7

Schimmack, U. (2018). My email correspondence with Daryl J. Bem about the data for his 2011 article “Feeling the future”. In *Replicability-Index*. https://replicationindex.com/2018/01/20/my-email-correspondence-with-daryl-j-bem-about-the-data-for-his-2011-article-feeling-the-future/

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & Van Der Maas, H. L. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). *Journal of Personality and Social Psychology*, *100*(3), 426–432. https://doi.org/10.1037/a0022790