Perform mediation analysis in the presence of high-dimensional mediators based on the potential outcome framework. Bayesian Mediation Analysis (BAMA), developed by Song et al (2019) and Song et al (2020), relies on two Bayesian sparse linear mixed models to simultaneously analyze a relatively large number of mediators for a continuous exposure and outcome assuming a small number of mediators are truly active. This sparsity assumption also allows the extension of univariate mediator analysis by casting the identification of active mediators as a variable selection problem and applying Bayesian methods with continuous shrinkage priors on the effects.
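For orientation, the two linear models that BAMA fits have, schematically, the following form (a sketch based on the description above and Song et al (2019); the subscripts and covariate terms here are illustrative, not copied from the package):

```latex
% Outcome model: sparse effects \beta_{mj} on the p mediators
y_i = \beta_a a_i + \sum_{j=1}^{p} \beta_{mj} m_{ij} + c_{1i}^{\top} \beta_c + \epsilon_{Y,i}
% Mediator models: one regression per mediator j
m_{ij} = \alpha_{aj} a_i + c_{2i}^{\top} \alpha_{cj} + \epsilon_{M,ij}
```

The sparsity assumption enters through the continuous shrinkage priors on the \(\beta_{mj}\) and \(\alpha_{aj}\); a mediator is considered active when both its exposure-to-mediator and mediator-to-outcome effects are non-negligible.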
## Installation

You can install `bama` from CRAN

```r
install.packages("bama")
```

or from Github via `devtools`

```r
# install.packages("devtools")
devtools::install_github("umich-cphds/bama", build_opts = c())
```
`bama` requires the R packages `Rcpp` and `RcppArmadillo`, so you may want to install / update them before downloading. If you decide to install `bama` from source (eg Github), you will need a C++ compiler that supports C++11. On Windows this can be accomplished by installing Rtools, and on MacOS by installing Xcode.
The Github version may contain new features or bug fixes not yet present on CRAN, so if you are experiencing issues, you may want to try the Github version of the package.

## Example problem
`bama` contains a semi-synthetic example data set, `bama.data`, that is used in this example. `bama.data` contains a continuous response `y` and a continuous exposure `a` that is mediated by 100 mediators, named `m1` through `m100`.
```r
library(bama)

# print just the first 10 columns
head(bama.data[, 1:10])
```
The mediators have an internal correlation structure that is based off the covariance matrix from the Multi-Ethnic Study of Atherosclerosis (MESA) data. However, `bama` does not model internal correlation between mediators. Instead, it uses continuous Bayesian shrinkage priors to select mediators, assuming that all the potential mediators contribute small effects in mediating the exposure-outcome relationship, while only a small proportion of mediators exhibit large effects.
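As a toy illustration of this sparsity assumption (this is a simulation sketch only, not `bama`'s estimation procedure):

```r
# Illustrative only: simulate mediator effects under the sparsity assumption,
# where most effects are near zero and a handful are large.
set.seed(42)
effects <- c(rnorm(95, mean = 0, sd = 0.05),  # 95 "inactive" mediators
             rnorm(5,  mean = 0, sd = 2))     # 5 "active" mediators

# A crude threshold separates the few large effects from the rest;
# bama instead does this via shrinkage priors on the posterior draws.
active <- which(abs(effects) > 0.5)
print(active)
```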
We use no adjustment covariates in this example, so we just include the intercept. Also, in a real-world situation, it may be beneficial to normalize the input data.
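One option for normalizing is base R's `scale()`, which centers each column to mean 0 and rescales it to unit variance. A minimal self-contained sketch (the data frame below is simulated stand-in data, not `bama.data`):

```r
# Illustrative only: standardize a response, exposure, and mediators
# from a simulated data frame shaped like bama.data.
set.seed(1)
dat <- data.frame(y  = rnorm(50, mean = 10, sd = 3),
                  a  = rnorm(50, mean = 2,  sd = 1),
                  m1 = rnorm(50),
                  m2 = rnorm(50))

Y <- scale(dat$y)                            # centered to mean 0, sd 1
A <- scale(dat$a)
M <- scale(as.matrix(dat[, c("m1", "m2")]))  # column-wise standardization

round(colMeans(M), 10)  # effectively zero
apply(M, 2, sd)         # each column has sd 1
```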
```r
Y <- bama.data$y
A <- bama.data$a

# grab the mediators from the example data.frame
M <- as.matrix(bama.data[, paste0("m", 1:100)], nrow(bama.data))

# We just include the intercept term in this example as we have no covariates
C1 <- matrix(1, 1000, 1)
C2 <- matrix(1, 1000, 1)
beta.m  <- rep(0, 100)
alpha.a <- rep(0, 100)

out <- bama(Y = Y, A = A, M = M, C1 = C1, C2 = C2, method = "BSLMM", seed = 1234,
            burnin = 1000, ndraws = 1100, weights = NULL, inits = NULL,
            control = list(k = 2, lm0 = 1e-04, lm1 = 1, l = 1))

# The package includes a function to summarise output from 'bama'
summary <- summary(out)
head(summary)

# Product Threshold Gaussian
ptgmod <- bama(Y = Y, A = A, M = M, C1 = C1, C2 = C2, method = "PTG", seed = 1234,
               burnin = 1000, ndraws = 1100, weights = NULL, inits = NULL,
               control = list(lambda0 = 0.04, lambda1 = 0.2, lambda2 = 0.2))

mean(ptgmod$beta.a)
apply(ptgmod$beta.m, 2, mean)
apply(ptgmod$alpha.a, 2, mean)
apply(ptgmod$betam_member, 2, mean)
apply(ptgmod$alphaa_member, 2, mean)

# Gaussian Mixture Model
gmmmod <- bama(Y = Y, A = A, M = M, C1 = C1, C2 = C2, method = "GMM", seed = 1234,
               burnin = 1000, ndraws = 1100, weights = NULL, inits = NULL,
               control = list(phi0 = 0.01, phi1 = 0.01))

mean(gmmmod$beta.a)
apply(gmmmod$beta.m, 2, mean)
apply(gmmmod$alpha.a, 2, mean)
mean(gmmmod$sigma.sq.a)
mean(gmmmod$sigma.sq.e)
mean(gmmmod$sigma.sq.g)
```
## Reference

Song, Y., Zhou, X., Zhang, M., et al. Bayesian shrinkage estimation of high dimensional causal mediation effects in omics studies. Biometrics. 2019; 1-11.