Bayesian network meta-analysis

Michael Seo and Christopher Schmid

2020-08-27

In this vignette, we describe how to run a Bayesian network meta-analysis model using this package. First, we’ll need to load the package.

#install.packages("bnma")
#or devtools::install_github("MikeJSeo/bnma")
library(bnma)

Preprocessing

It is essential to specify the input data in the correct format. We have chosen to use arm-level data with the following input variable names: Outcomes, N or SE, Study, and Treat. Outcomes contains the trial results. N is the number of respondents, used for the binomial or multinomial model. SE is the standard error, used for the normal model. Study is the study indicator for the meta-analysis. Lastly, Treat is the treatment indicator for each arm. We will use the parkinsons dataset for illustration.

parkinsons
#> $Outcomes
#>  [1] -1.22 -1.53 -0.70 -2.40 -0.30 -2.60 -1.20 -0.24 -0.59 -0.73 -0.18 -2.20
#> [13] -2.50 -1.80 -2.10
#> 
#> $SE
#>  [1] 0.504 0.439 0.282 0.258 0.505 0.510 0.478 0.265 0.354 0.335 0.442 0.197
#> [13] 0.190 0.200 0.250
#> 
#> $Treat
#>  [1] "Placebo"       "Ropinirole"    "Placebo"       "Pramipexole"  
#>  [5] "Placebo"       "Pramipexole"   "Bromocriptine" "Ropinirole"   
#>  [9] "Bromocriptine" "Ropinirole"    "Bromocriptine" "Bromocriptine"
#> [13] "Cabergoline"   "Bromocriptine" "Cabergoline"  
#> 
#> $Study
#>  [1] 1 1 2 2 3 3 3 4 4 5 5 6 6 7 7
#> 
#> $Treat.order
#> [1] "Placebo"       "Pramipexole"   "Ropinirole"    "Bromocriptine"
#> [5] "Cabergoline"

In order to run a network meta-analysis in JAGS, we need to relabel the study names into a numeric sequence (i.e. 1 to the total number of studies) and relabel the treatments into a numeric sequence according to the specified treatment order. If the treatment order is not specified, the default is alphabetical order. In the example below, we set Placebo as the baseline treatment, followed by Pramipexole, Ropinirole, Bromocriptine, and Cabergoline.

network <- with(parkinsons, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, SE = SE, response = "normal", Treat.order = Treat.order))
network$Treat.order 
#>               1               2               3               4               5 
#>       "Placebo"   "Pramipexole"    "Ropinirole" "Bromocriptine"   "Cabergoline"
network$Study.order
#> 1 2 3 4 5 6 7 
#> 1 2 3 4 5 6 7

Another important preprocessing step done in the network.data function is reshaping the arm-level data into study-level data. We store the study-level data of Outcomes as r, Treat as t, and N or SE as n or se. We can see below how Outcomes has been changed into a study-level matrix (i.e. one row per study). If the outcome is multinomial, it instead becomes a three-dimensional array.

network$r
#>       [,1]  [,2] [,3]
#> [1,] -1.22 -1.53   NA
#> [2,] -0.70 -2.40   NA
#> [3,] -0.30 -2.60 -1.2
#> [4,] -0.24 -0.59   NA
#> [5,] -0.73 -0.18   NA
#> [6,] -2.20 -2.50   NA
#> [7,] -1.80 -2.10   NA

Priors

Priors can be set in the network.data function. If left unspecified, default values are used. For the heterogeneity parameters of the random effects model, we follow the data format from a similar Bayesian network meta-analysis R package, gemtc. The prior should be a list of length 3, where the first element is the distribution (one of dunif, dgamma, dhnorm, dwish) and the next two are the parameters associated with that distribution. Here is an example of assigning a half-normal distribution with mean 0 and standard deviation 5.

network <- with(smoking, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, N = N, response = "binomial", mean.d = 0.1, hy.prior = list("dhnorm", 0, 5)))
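The same list("distribution", par1, par2) format applies to the other supported distributions. A sketch of two alternatives (assuming the usual JAGS-style convention that dunif is placed on the heterogeneity standard deviation and dgamma on the precision; check the package documentation for the exact scale used):

```r
# Uniform(0, 5) prior on the heterogeneity standard deviation
network <- with(smoking, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, N = N,
                                      response = "binomial", hy.prior = list("dunif", 0, 5)))

# Vague Gamma(0.001, 0.001) prior (conventionally placed on the precision)
network <- with(smoking, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, N = N,
                                      response = "binomial", hy.prior = list("dgamma", 0.001, 0.001)))
```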

Running the model

Now, to run the model, we use the function network.run. The most important parameter is n.run, which determines the number of final samples the user wants. The Gelman-Rubin statistic is checked automatically every setsize iterations, and once the chains have converged we store the last half of the sequence. If the number of iterations is less than the number of final samples (n.run), the model samples more iterations to fill the requirement. One of the nice features of this package is that it checks for convergence automatically and gives an error if the chains have not converged. The parameters tested for convergence are the relative treatment effects, the baseline effect, and the heterogeneity parameter. The numbers printed while the model runs are the point estimates of the Gelman-Rubin statistic used to test convergence.

result <- network.run(network, n.run = 30000)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 50
#>    Unobserved stochastic nodes: 54
#>    Total graph size: 1129
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.003476
#> [1] 1.001935

Model Summary

The package includes many summary tools. One of the most useful is the forest plot.

network.forest.plot(result)

# draw.network.graph(network)
# network.autocorr.diag(result)
# network.autocorr.plot(result)
# network.cumrank.tx.plot(result)
# network.deviance.plot(result)
# network.gelman.plot(result)

Multinomial model

Another nice feature of this package is that multinomial outcome datasets can be analyzed. Here is an example.

network <- with(cardiovascular, network.data(Outcomes, Study, Treat, N, response = "multinomial"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 34
#>    Unobserved stochastic nodes: 37
#>    Total graph size: 1301
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.004288
#> [1] 1.001961
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>                 Mean      SD  Naive SE Time-series SE
#> d[1,1]      0.000000 0.00000 0.000e+00      0.0000000
#> d[2,1]     -0.105009 0.15161 3.915e-04      0.0011086
#> d[3,1]     -0.047876 0.11263 2.908e-04      0.0007097
#> d[1,2]      0.000000 0.00000 0.000e+00      0.0000000
#> d[2,2]     -0.187910 0.15433 3.985e-04      0.0011366
#> d[3,2]     -0.271128 0.11214 2.895e-04      0.0006805
#> sigma[1,1]  0.114417 0.05038 1.301e-04      0.0002340
#> sigma[2,1]  0.004773 0.03425 8.844e-05      0.0001631
#> sigma[1,2]  0.004773 0.03425 8.844e-05      0.0001631
#> sigma[2,2]  0.114345 0.05021 1.296e-04      0.0002271
#> 
#> 2. Quantiles for each variable:
#> 
#>                2.5%      25%      50%       75%    97.5%
#> d[1,1]      0.00000  0.00000  0.00000  0.000000  0.00000
#> d[2,1]     -0.40413 -0.20409 -0.10560 -0.006423  0.19560
#> d[3,1]     -0.27189 -0.12144 -0.04738  0.025996  0.17453
#> d[1,2]      0.00000  0.00000  0.00000  0.000000  0.00000
#> d[2,2]     -0.49495 -0.28830 -0.18721 -0.086117  0.11251
#> d[3,2]     -0.49630 -0.34335 -0.27011 -0.198058 -0.05122
#> sigma[1,1]  0.05170  0.08026  0.10344  0.135636  0.24130
#> sigma[2,1] -0.06242 -0.01405  0.00399  0.022818  0.07610
#> sigma[1,2] -0.06242 -0.01405  0.00399  0.022818  0.07610
#> sigma[2,2]  0.05189  0.08051  0.10338  0.135493  0.24084
#> 
#> 
#> $Treat.order
#> 1 2 3 
#> 1 2 3 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 31.17268 60.45318 91.62585 
#> 
#> $total_n
#> [1] 34
#> 
#> attr(,"class")
#> [1] "summary.network.result"

Adding covariates

We can add continuous or discrete covariates to fit a network meta-regression. If the covariate is continuous, it is centered. Discrete covariates need to be in 0-1 dummy format. There are three different assumptions for the covariate effect: “common”, “independent”, and “exchangeable”.

network <- with(statins, network.data(Outcomes, Study, Treat, N=N, response = "binomial", Treat.order = c("Placebo", "Statin"), covariate = covariate, covariate.type = "discrete", covariate.model = "common"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 38
#>    Unobserved stochastic nodes: 41
#>    Total graph size: 877
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.050821
#> [1] 1.010856
#> [1] 1.006566
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>              Mean     SD  Naive SE Time-series SE
#> beta1[1]  0.00000 0.0000 0.0000000       0.000000
#> beta1[2] -0.28834 0.2711 0.0007001       0.003773
#> d[1]      0.00000 0.0000 0.0000000       0.000000
#> d[2]     -0.07386 0.2135 0.0005514       0.002765
#> sd        0.24637 0.2147 0.0005543       0.006186
#> 
#> 2. Quantiles for each variable:
#> 
#>               2.5%     25%      50%      75%  97.5%
#> beta1[1]  0.000000  0.0000  0.00000  0.00000 0.0000
#> beta1[2] -0.874734 -0.4277 -0.26903 -0.13898 0.2276
#> d[1]      0.000000  0.0000  0.00000  0.00000 0.0000
#> d[2]     -0.498207 -0.1861 -0.07731  0.03533 0.3697
#> sd        0.007732  0.0903  0.18999  0.34163 0.7979
#> 
#> 
#> $Treat.order
#>         1         2 
#> "Placebo"  "Statin" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 42.64142 25.17595 67.81737 
#> 
#> $total_n
#> [1] 38
#> 
#> attr(,"class")
#> [1] "summary.network.result"

The covariate plot shows how the relative treatment effect changes as the covariate varies.

network.covariate.plot(result, base.treatment = "Placebo", comparison.treatment = "Statin")

Baseline risk

Another useful feature of this package is the ability to model baseline risk. We can place a “common”, “independent”, or “exchangeable” assumption on the baseline slopes and an “independent” or “exchangeable” assumption on the baseline risk. Here we demonstrate a model with a common baseline slope and exchangeable baseline risk.

network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", Treat.order = Treat.order, baseline = "common", baseline.risk = "exchangeable"))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 24
#>    Unobserved stochastic nodes: 34
#>    Total graph size: 670
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.2701
#> [1] 1.01316
#> [1] 1.010379
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>            Mean     SD  Naive SE Time-series SE
#> B       -0.7817 0.2724 0.0007034       0.004204
#> b_bl[1]  0.0000 0.0000 0.0000000       0.000000
#> b_bl[2] -0.7817 0.2724 0.0007034       0.004204
#> b_bl[3] -0.7817 0.2724 0.0007034       0.004204
#> b_bl[4] -0.7817 0.2724 0.0007034       0.004204
#> b_bl[5] -0.7817 0.2724 0.0007034       0.004204
#> b_bl[6] -0.7817 0.2724 0.0007034       0.004204
#> b_bl[7] -0.7817 0.2724 0.0007034       0.004204
#> d[1]     0.0000 0.0000 0.0000000       0.000000
#> d[2]     1.8654 0.2326 0.0006006       0.001939
#> d[3]     2.1371 0.2114 0.0005459       0.002158
#> d[4]     2.0295 0.4215 0.0010883       0.005361
#> d[5]     1.6363 0.2026 0.0005232       0.001956
#> d[6]     0.2937 0.5836 0.0015069       0.009282
#> d[7]     2.1542 0.2871 0.0007413       0.002892
#> sd       0.1939 0.1755 0.0004530       0.003623
#> 
#> 2. Quantiles for each variable:
#> 
#>              2.5%      25%     50%     75%   97.5%
#> B       -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[1]  0.000000  0.00000  0.0000  0.0000  0.0000
#> b_bl[2] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[3] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[4] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[5] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[6] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> b_bl[7] -1.290676 -0.92984 -0.7938 -0.6476 -0.2075
#> d[1]     0.000000  0.00000  0.0000  0.0000  0.0000
#> d[2]     1.399183  1.75099  1.8646  1.9807  2.3281
#> d[3]     1.742496  2.01625  2.1320  2.2512  2.5714
#> d[4]     1.211522  1.78117  2.0269  2.2785  2.8662
#> d[5]     1.238557  1.52701  1.6351  1.7423  2.0456
#> d[6]    -0.891186 -0.07728  0.3039  0.6842  1.3823
#> d[7]     1.596276  2.00036  2.1531  2.3022  2.7368
#> sd       0.006792  0.07272  0.1515  0.2608  0.6470
#> 
#> 
#> $Treat.order
#>             1             2             3             4             5 
#>     "Placebo"         "CZP"  "Adalimumab"  "Etanercept"  "Infliximab" 
#>             6             7 
#>   "Rituximab" "Tocilizumab" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 27.81343 18.77136 46.58479 
#> 
#> $total_n
#> [1] 24
#> 
#> attr(,"class")
#> [1] "summary.network.result"

Unrelated Means Model

The unrelated mean effects (UME) model estimates separate, unrelated basic parameters; we do not assume consistency in this model. We can compare this model with the standard consistency model. If the parameter estimates are similar for both models and there is considerable overlap in the 95% credible intervals, we can conclude that there is no evidence of inconsistency in the network.

network <- with(smoking, {
  ume.network.data(Outcomes, Study, Treat, N = N, response = "binomial", type = "random")
})
result <- ume.network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 50
#>    Unobserved stochastic nodes: 57
#>    Total graph size: 1020
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.003885
#> [1] 1.003418
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>            Mean     SD  Naive SE Time-series SE
#> d[1,2]  0.33793 0.5821 0.0015030       0.002048
#> d[1,3]  0.86314 0.2736 0.0007065       0.001495
#> d[2,3] -0.05665 0.7414 0.0019144       0.002991
#> d[1,4]  1.42382 0.8820 0.0022773       0.006867
#> d[2,4]  0.65467 0.7342 0.0018958       0.003447
#> d[3,4]  0.20139 0.7798 0.0020136       0.003201
#> 
#> 2. Quantiles for each variable:
#> 
#>           2.5%      25%      50%    75% 97.5%
#> d[1,2] -0.8105 -0.03608  0.33524 0.7085 1.504
#> d[1,3]  0.3423  0.68323  0.85581 1.0355 1.427
#> d[2,3] -1.5280 -0.53318 -0.05547 0.4208 1.409
#> d[1,4] -0.2028  0.83123  1.38821 1.9720 3.287
#> d[2,4] -0.7957  0.18185  0.65308 1.1265 2.116
#> d[3,4] -1.3700 -0.29786  0.20766 0.7109 1.726
#> 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 53.40959 44.92760 98.33719 
#> 
#> $total_n
#> [1] 50
#> 
#> attr(,"class")
#> [1] "summary.ume.network.result"
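To carry out the comparison described above, the consistency model can be fit on the same data with the functions shown earlier; a sketch (the parameterization differs, so the UME d[i,j] contrasts are compared against the corresponding consistency-model basic parameters and their derived contrasts):

```r
# Fit the standard consistency model on the same smoking data,
# then compare the d estimates and 95% credible intervals with
# the UME results above; substantial overlap suggests no
# evidence of inconsistency.
network_consistency <- with(smoking, network.data(Outcomes, Study, Treat, N = N, response = "binomial"))
result_consistency <- network.run(network_consistency)
summary(result_consistency)
```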

Inconsistency model

We include another inconsistency model that can be used to test the consistency assumption. Here we specify a pair of treatments on which to nodesplit and test the consistency assumption. For instance, if we specify pair = c(3, 9), we are finding the difference between the direct and indirect evidence for treatments 3 and 9. The inconsistency estimate and the corresponding p-value are reported in the summary. If the p-value is small, we reject the null hypothesis that the direct and indirect evidence agree. We can repeat this for all pairs in the network to identify comparisons that might be inconsistent. Refer to Dias et al. (2010), “Checking consistency in mixed treatment comparison meta-analysis,” for more details.

network <- with(thrombolytic, nodesplit.network.data(Outcomes, Study, Treat, N, response = "binomial", pair = c(3,9), type = "fixed"))
result <- nodesplit.network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 102
#>    Unobserved stochastic nodes: 59
#>    Total graph size: 2263
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.001654
#> [1] 1.000349
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>                    Mean      SD  Naive SE Time-series SE
#> d[1]          0.0000000 0.00000 0.000e+00      0.000e+00
#> d[2]         -0.0015994 0.03040 7.849e-05      1.872e-04
#> d[3]         -0.1601295 0.04292 1.108e-04      3.477e-04
#> d[4]         -0.0440163 0.04641 1.198e-04      2.400e-04
#> d[5]         -0.1127069 0.05986 1.545e-04      4.652e-04
#> d[6]         -0.1546408 0.07707 1.990e-04      5.747e-04
#> d[7]         -0.4652455 0.10073 2.601e-04      5.696e-04
#> d[8]         -0.1965259 0.21912 5.658e-04      1.248e-03
#> d[9]          0.0038887 0.03695 9.540e-05      2.176e-04
#> diff          1.2379222 0.42151 1.088e-03      4.006e-03
#> direct        1.4019404 0.41736 1.078e-03      3.988e-03
#> oneminusprob  0.0006733 0.02594 6.698e-05      9.248e-05
#> prob          0.9993267 0.02594 6.698e-05      9.248e-05
#> 
#> 2. Quantiles for each variable:
#> 
#>                  2.5%      25%       50%      75%     97.5%
#> d[1]          0.00000  0.00000  0.000000  0.00000  0.000000
#> d[2]         -0.06129 -0.02206 -0.001695  0.01895  0.058025
#> d[3]         -0.24419 -0.18919 -0.160275 -0.13102 -0.076162
#> d[4]         -0.13575 -0.07521 -0.044022 -0.01273  0.046809
#> d[5]         -0.22980 -0.15322 -0.112511 -0.07218  0.004301
#> d[6]         -0.30517 -0.20639 -0.154751 -0.10229 -0.004436
#> d[7]         -0.66316 -0.53311 -0.464816 -0.39726 -0.268543
#> d[8]         -0.62421 -0.34422 -0.196774 -0.04893  0.232624
#> d[9]         -0.06877 -0.02111  0.004047  0.02897  0.075702
#> diff          0.45082  0.94822  1.222504  1.51144  2.109201
#> direct        0.62471  1.11554  1.386883  1.67319  2.265301
#> oneminusprob  0.00000  0.00000  0.000000  0.00000  0.000000
#> prob          1.00000  1.00000  1.000000  1.00000  1.000000
#> 
#> 
#> $deviance
#> NULL
#> 
#> $`Inconsistency estimate`
#> [1] 1.237922
#> 
#> $p_value
#> [1] 0.001346667
#> 
#> attr(,"class")
#> [1] "summary.nodesplit.network.result"

Finding risk difference, relative risk, and number needed to treat with Binomial outcomes

The default summary measure when analyzing binary outcomes is the odds ratio. We have added an option to calculate the risk difference, relative risk, and number needed to treat by incorporating an external baseline risk for treatment A (i.e. placebo).

# Using the metaprop function from the meta package, we meta-analyze the placebo event proportion.
#library(meta)
#placebo_index <- which(certolizumab$Treat == "Placebo")
#meta.pla <- metaprop(event = certolizumab$Outcomes[placebo_index], n = certolizumab$N[placebo_index], method = "GLMM", sm = "PLOGIT")
#mean.A = meta.pla$TE.random; prec.A = 1/meta.pla$tau^2

network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", mean.A = -2.27, prec.A = 2.53))
result <- network.run(network)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 24
#>    Unobserved stochastic nodes: 32
#>    Total graph size: 653
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.007456
#> [1] 1.002093
summary(result, extra.pars = c("RD", "RR", "NNT"))
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>              Mean        SD  Naive SE Time-series SE
#> NNT[2]   6.708250 3.539e+03 9.138e+00      9.133e+00
#> NNT[3]   2.813323 3.053e+03 7.882e+00      7.882e+00
#> NNT[4] -24.934187 7.465e+03 1.928e+01      1.928e+01
#> NNT[5] -17.053310 4.208e+02 1.087e+00      1.086e+00
#> NNT[6] -11.428259 2.006e+03 5.179e+00      5.191e+00
#> NNT[7] -81.066243 2.212e+04 5.710e+01      5.710e+01
#> RD[2]    0.068954 1.429e-01 3.690e-04      9.449e-04
#> RD[3]    0.259932 2.774e-01 7.161e-04      2.380e-03
#> RD[4]    0.007253 1.222e-01 3.156e-04      1.046e-03
#> RD[5]   -0.086092 5.460e-02 1.410e-04      2.147e-04
#> RD[6]   -0.067703 9.647e-02 2.491e-04      5.311e-04
#> RD[7]   -0.011796 1.041e-01 2.687e-04      6.565e-04
#> RR[2]    1.755173 1.720e+00 4.440e-03      1.105e-02
#> RR[3]    4.035537 4.034e+00 1.042e-02      3.004e-02
#> RR[4]    1.118508 1.429e+00 3.691e-03      1.219e-02
#> RR[5]    0.191541 2.501e-01 6.457e-04      1.895e-03
#> RR[6]    0.384779 1.032e+00 2.665e-03      5.963e-03
#> RR[7]    0.920473 1.156e+00 2.986e-03      7.302e-03
#> d[1]     0.000000 0.000e+00 0.000e+00      0.000e+00
#> d[2]     0.350233 1.081e+00 2.791e-03      6.525e-03
#> d[3]     1.478260 1.855e+00 4.791e-03      1.560e-02
#> d[4]    -0.238945 1.029e+00 2.657e-03      7.621e-03
#> d[5]    -1.992456 6.874e-01 1.775e-03      5.214e-03
#> d[6]    -1.989963 1.498e+00 3.869e-03      8.989e-03
#> d[7]    -0.491072 1.077e+00 2.781e-03      6.682e-03
#> sd       0.926998 6.241e-01 1.611e-03      9.238e-03
#> 
#> 2. Quantiles for each variable:
#> 
#>              2.5%       25%       50%       75%     97.5%
#> NNT[2] -2.569e+02 -17.65273   6.37008 21.273293 251.59047
#> NNT[3] -1.041e+02   1.30044   2.67198  7.494136  97.65141
#> NNT[4] -2.517e+02 -35.45958 -16.16772  6.859956 215.98945
#> NNT[5] -4.549e+01 -19.40947 -12.93104 -8.802273  -4.31058
#> NNT[6] -6.766e+01 -20.51422 -12.86744 -8.136559  26.22287
#> NNT[7] -2.557e+02 -34.86021 -16.63207  2.977774 209.97277
#> RD[2]  -9.959e-02  -0.01280   0.02927  0.105616   0.49010
#> RD[3]  -9.365e-02   0.02756   0.18268  0.460544   0.84974
#> RD[4]  -1.276e-01  -0.04817  -0.02020  0.017847   0.37889
#> RD[5]  -2.151e-01  -0.11194  -0.07634 -0.050656  -0.01925
#> RD[6]  -2.152e-01  -0.10662  -0.06890 -0.040895   0.12467
#> RD[7]  -1.461e-01  -0.05704  -0.02685  0.003305   0.26689
#> RR[2]   1.748e-01   0.84395   1.34115  2.080316   6.09365
#> RR[3]   1.650e-01   1.32488   2.87162  5.392206  14.77716
#> RR[4]   1.443e-01   0.47940   0.73517  1.206168   4.68606
#> RR[5]   3.670e-02   0.10771   0.15223  0.211711   0.56550
#> RR[6]   7.532e-03   0.06790   0.15098  0.332645   2.29413
#> RR[7]   7.100e-02   0.39158   0.65474  1.040712   3.62742
#> d[1]    0.000e+00   0.00000   0.00000  0.000000   0.00000
#> d[2]   -1.838e+00  -0.18963   0.33781  0.877902   2.57735
#> d[3]   -1.894e+00   0.32428   1.34690  2.510002   5.52549
#> d[4]   -2.030e+00  -0.79734  -0.34009  0.214055   2.11590
#> d[5]   -3.417e+00  -2.32812  -1.98146 -1.648844  -0.62227
#> d[6]   -5.002e+00  -2.79662  -1.98874 -1.180292   1.00239
#> d[7]   -2.750e+00  -1.00837  -0.46632  0.044940   1.66369
#> sd      1.560e-01   0.51240   0.77883  1.168220   2.61020
#> 
#> 
#> $Treat.order
#>             1             2             3             4             5 
#>  "Adalimumab"         "CZP"  "Etanercept"  "Infliximab"     "Placebo" 
#>             6             7 
#>   "Rituximab" "Tocilizumab" 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 26.97404 22.62738 49.60142 
#> 
#> $total_n
#> [1] 24
#> 
#> attr(,"class")
#> [1] "summary.network.result"

Generating reproducible results: initializing the random number generators

Generating reproducible results requires setting two sets of seed values. First, the set.seed() function is used so that bnma generates reproducible initial values. Second, the JAGS RNG seeds need to be set. Setting the JAGS RNG seeds is not strictly necessary in bnma, as the program assigns default JAGS RNG seeds; however, users can specify their own seeds if needed.

set.seed(1234) # seed for generating reproducible initial values
network <- with(blocker, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial"))

# JAGS RNG list of initial values
jags_inits <- list(
  list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 94387),
  list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 24507),
  list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 39483)
)
result <- network.run(network, n.chains=3, RNG.inits=jags_inits)
#> Compiling model graph
#>    Resolving undeclared variables
#>    Allocating nodes
#> Graph information:
#>    Observed stochastic nodes: 44
#>    Unobserved stochastic nodes: 46
#>    Total graph size: 971
#> 
#> Initializing model
#> 
#> NOTE: Stopping adaptation
#> 
#> 
#> [1] 1.016791
#> [1] 1.018379

# bnma initial values now contain initial values for the parameters and the JAGS RNG initial values
str(result$inits)
#> List of 3
#>  $ :List of 6
#>   ..$ Eta      : num [1:22] -3.02 -1.88 -1.63 -2.61 -2.43 ...
#>   ..$ d        : num [1:2] NA -0.126
#>   ..$ sd       : num 0.271
#>   ..$ delta    : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#>   ..$ .RNG.name: chr "base::Wichmann-Hill"
#>   ..$ .RNG.seed: num 94387
#>  $ :List of 6
#>   ..$ Eta      : num [1:22] -2.59 -1.83 -2.19 -2.53 -2.4 ...
#>   ..$ d        : num [1:2] NA -0.103
#>   ..$ sd       : num 0.283
#>   ..$ delta    : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#>   ..$ .RNG.name: chr "base::Wichmann-Hill"
#>   ..$ .RNG.seed: num 24507
#>  $ :List of 6
#>   ..$ Eta      : num [1:22] -2.9 -2.23 -2.32 -2.51 -2.61 ...
#>   ..$ d        : num [1:2] NA -0.208
#>   ..$ sd       : num 0.297
#>   ..$ delta    : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#>   ..$ .RNG.name: chr "base::Wichmann-Hill"
#>   ..$ .RNG.seed: num 39483

# reproducible results
summary(result)
#> $summary.samples
#> 
#> Iterations = 1:50000
#> Thinning interval = 1 
#> Number of chains = 3 
#> Sample size per chain = 50000 
#> 
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#> 
#>         Mean      SD  Naive SE Time-series SE
#> d[1]  0.0000 0.00000 0.0000000      0.0000000
#> d[2] -0.2489 0.06537 0.0001688      0.0007305
#> sd    0.1353 0.08160 0.0002107      0.0017319
#> 
#> 2. Quantiles for each variable:
#> 
#>           2.5%      25%     50%     75%   97.5%
#> d[1]  0.000000  0.00000  0.0000  0.0000  0.0000
#> d[2] -0.373586 -0.29239 -0.2502 -0.2068 -0.1169
#> sd    0.008369  0.07367  0.1283  0.1864  0.3149
#> 
#> 
#> $Treat.order
#> 1 2 
#> 1 2 
#> 
#> $deviance
#>     Dbar       pD      DIC 
#> 41.75576 28.16711 69.92287 
#> 
#> $total_n
#> [1] 44
#> 
#> attr(,"class")
#> [1] "summary.network.result"