This vignette demonstrates how to use MBNMAdose to perform Model-Based Network Meta-Analysis (MBNMA) of studies with multiple doses of different agents by accounting for the dose-response relationship. This can connect disconnected networks via the dose-response relationship and the placebo response, improve the precision of estimated effects, and allow interpolation/extrapolation of predicted responses based on the dose-response relationship.
Modelling the dose-response relationship also avoids the "lumping" of different doses of an agent which is often done in Network Meta-Analysis (NMA) and can introduce additional heterogeneity or inconsistency. All models and analyses are implemented in a Bayesian framework, following an extension of the standard NMA methodology presented by Lu and Ades (2004), and are run in JAGS (version 4.3.0 or later is required) (JAGS Computer Program 2017). For full details of dose-response MBNMA methodology see Mawdsley et al. (2016). Throughout this vignette we refer to a treatment as a specific dose of a specific agent.
This package has been developed alongside MBNMAtime, a package that allows users to perform time-course MBNMA to incorporate multiple time points within different studies. However, the two packages should not be loaded into R at the same time, as a number of functions with shared names perform similar tasks yet are specific to either time-course or dose-response data.
Functions within MBNMAdose follow a clear pattern of use:

1. Load your data into the correct format using mbnma.network()
2. Analyse your data using mbnma.run(), or any of the available wrapper dose-response functions
3. Test for consistency at the treatment level using nma.nodesplit() and nma.run()
4. Predict responses using predict()

At each of these stages there are a number of informative plots that can be generated to help understand the data and to make decisions regarding model fitting.
HF2PPITT is from a systematic review of interventions for pain relief in migraine (Thorlund et al. 2014). The outcome is binary, and represents (as aggregate data) the number of participants who were headache-free at 2 hours. Data are from patients who had had at least one migraine attack, who were not lost to follow-up, and who did not violate the trial protocol. The dataset includes 70 Randomised-Controlled Trials (RCTs), comparing 7 triptans with placebo. Doses are standardised as relative to a "common" dose, and in total there are 23 different treatments (combinations of dose and agent). HF2PPITT is a data frame in long format (one row per arm and study), with the variables studyID, AuthorYear, N, r, dose and agent.
studyID | AuthorYear | N | r | dose | agent |
---|---|---|---|---|---|
1 | Tfelt-Hansen P 2006 | 22 | 6 | 0 | placebo |
1 | Tfelt-Hansen P 2006 | 30 | 14 | 1 | sumatriptan |
2 | Goadsby PJ 2007 | 467 | 213 | 1 | almotriptan |
2 | Goadsby PJ 2007 | 472 | 229 | 1 | zolmitriptan |
3 | Tuchman M2006 | 160 | 15 | 0 | placebo |
3 | Tuchman M2006 | 174 | 48 | 1 | zolmitriptan |
psoriasis is from a systematic review of RCTs comparing biologics at different doses and placebo (Warren et al. 2019). Three different binary outcomes are included, all based on the number of patients experiencing degrees of improvement on the Psoriasis Area and Severity Index (PASI) measured at 12 weeks follow-up. The dataset includes 28 Randomised-Controlled Trials (RCTs), comparing 9 different biologics at different doses with placebo. The three response variables indicate the number of participants who achieved >=75% (r75), >=90% (r90) and 100% (r100) improvement in PASI score after 12 weeks.
studyID | agent | dose_mg | freq | dose | N | r75 | r90 | r100 |
---|---|---|---|---|---|---|---|---|
UNCOVER 1 | Ixekizumab | 80 | Q2W | 40 | 433 | 386 | 307 | 153 |
UNCOVER 1 | Ixekizumab | 80 | Q4W | 20 | 432 | 357 | 279 | 145 |
UNCOVER 1 | Placebo | 0 | NA | 0 | 431 | 17 | 2 | 0 |
UNCOVER 2 | Ixekizumab | 80 | Q2W | 40 | 351 | 315 | 248 | 142 |
UNCOVER 2 | Ixekizumab | 80 | Q4W | 20 | 347 | 269 | 207 | 107 |
UNCOVER 2 | Etanercept | 50 | BIW | 100 | 358 | 149 | 67 | 19 |
ssri is from a systematic review examining the efficacy of different doses of SSRI antidepressant drugs and placebo (Furukawa et al. 2019). Response to treatment is defined as a 50% reduction in depressive symptoms after 8 weeks (range 4-12 weeks) of follow-up. The dataset includes 60 RCTs comparing 5 different SSRIs with placebo.
studyID | bias | age | weeks | agent | dose | N | r |
---|---|---|---|---|---|---|---|
1 | Moderate risk | 43.0 | 6 | placebo | 0 | 149 | 69 |
1 | Moderate risk | 42.9 | 6 | fluoxetine | 20 | 137 | 77 |
2 | Low risk | 41.2 | 6 | placebo | 0 | 137 | 63 |
2 | Low risk | 40.9 | 6 | paroxetine | 20 | 138 | 74 |
7 | Low risk | 41.6 | 6 | placebo | 0 | 158 | 91 |
7 | Low risk | 41.3 | 6 | fluoxetine | 20 | 148 | 89 |
GoutSUA_2wkCFB is from a systematic review of interventions for lowering Serum Uric Acid (SUA) concentration in patients with gout (not previously published). The outcome is continuous, and aggregate data responses correspond to the mean change from baseline in SUA (mg/dL) at 2 weeks follow-up. The dataset includes 10 Randomised-Controlled Trials (RCTs), comparing 5 different agents and placebo. Data for one agent (RDEA) arise from an RCT that is not placebo-controlled, and so is not connected to the network directly. In total there are 19 different treatments (combinations of dose and agent). GoutSUA_2wkCFB is a data frame in long format (one row per arm and study), with the variables studyID, y, se, agent and dose.
 | studyID | y | se | agent | dose |
---|---|---|---|---|---|
4 | 1102 | -0.53 | 0.25 | RDEA | 100 |
5 | 1102 | -1.37 | 0.18 | RDEA | 200 |
6 | 1102 | -1.73 | 0.25 | RDEA | 400 |
53 | 2001 | -6.82 | 0.06 | Febu | 240 |
54 | 2001 | 0.15 | 0.04 | Plac | 0 |
92 | 2003 | -3.43 | 0.03 | Allo | 300 |
osteopain_2wkabs is from a systematic review of interventions for pain relief in osteoarthritis, used previously in Pedder et al. (2019). The outcome is continuous, and aggregate data responses correspond to the mean WOMAC pain score at 2 weeks follow-up. The dataset includes 18 Randomised-Controlled Trials (RCTs), comparing 8 different agents with placebo. In total there are 26 different treatments (combinations of dose and agent). The active treatments can also be grouped into 3 different classes, within which they have similar mechanisms of action. osteopain_2wkabs is a data frame in long format (one row per arm and study), with the variables studyID, agent, dose, class, y, se, and N.
 | studyID | agent | dose | class | y | se | N |
---|---|---|---|---|---|---|---|
13 | 1 | Placebo | 0 | Placebo | 6.26 | 0.23 | 60 |
14 | 1 | Etoricoxib | 10 | Cox2Inhib | 5.08 | 0.16 | 114 |
15 | 1 | Etoricoxib | 30 | Cox2Inhib | 4.42 | 0.17 | 102 |
16 | 1 | Etoricoxib | 5 | Cox2Inhib | 5.34 | 0.16 | 117 |
17 | 1 | Etoricoxib | 60 | Cox2Inhib | 3.62 | 0.17 | 112 |
18 | 1 | Etoricoxib | 90 | Cox2Inhib | 4.08 | 0.17 | 112 |
alog_pcfb is from a systematic review of Randomised-Controlled Trials (RCTs) comparing different doses of alogliptin with placebo (Langford et al. 2016). The systematic review was performed simply to provide data with which to illustrate a statistical methodology, rather than for clinical inference. Alogliptin is a treatment aimed at reducing blood glucose concentration in type II diabetes. The outcome is continuous, and aggregate data responses correspond to the mean change in HbA1c from baseline to follow-up in studies of at least 12 weeks follow-up. The dataset includes 14 RCTs comparing 5 different doses of alogliptin with placebo, leading to 6 different treatments (combinations of dose and agent) within the network. alog_pcfb is a data frame in long format (one row per arm and study), with the variables studyID, agent, dose, y, se, and N.
studyID | agent | dose | y | se | N |
---|---|---|---|---|---|
NCT01263470 | alogliptin | 0.00 | 0.06 | 0.05 | 75 |
NCT01263470 | alogliptin | 6.25 | -0.51 | 0.08 | 79 |
NCT01263470 | alogliptin | 12.50 | -0.70 | 0.06 | 84 |
NCT01263470 | alogliptin | 25.00 | -0.76 | 0.06 | 79 |
NCT01263470 | alogliptin | 50.00 | -0.82 | 0.05 | 79 |
NCT00286455 | alogliptin | 0.00 | -0.13 | 0.08 | 63 |
Before embarking on an analysis, the first step is to have a look at the raw data. Two features (network connectivity and the dose-response relationship) are particularly important for MBNMA. For this we want to get our dataset into the right format for the package. We can do this using mbnma.network().
# Using the triptans dataset
network <- mbnma.network(HF2PPITT)
#> Values for `agent` with dose = 0 have been recoded to `Placebo`
#> agent is being recoded to enforce sequential numbering and allow inclusion of `Placebo`
summary(network)
#> Description: Network
#> Number of studies: 70
#> Number of treatments: 23
#> Number of agents: 8
#> Median (min, max) doses per agent (incl placebo): 4 (3, 6)
#> Agent-level network is CONNECTED
#> Treatment-level network is CONNECTED
This function takes a dataset with the following columns:

* studyID: Study identifiers
* agent: Agent identifiers (can be character, factor or numeric)
* dose: Numeric data indicating the dose of the given agent within the study arm
* class: An optional column indicating a particular class code. Agents with the same name/identifier must also have the same class code.

Depending on the type of data (and the likelihood), the following columns are also required:

* y: Numeric data indicating the mean response for a given study arm
* se: Numeric data indicating the standard error for a given study arm
* r: Numeric data indicating the number of responders in a given study arm
* N: Numeric data indicating the total number of participants in a given study arm
* r: Numeric data indicating the number of events in a given study arm
* E: Numeric data indicating the total exposure time in a given study arm

The function then performs a series of validation checks on the data. Finally, it converts the data frame into an object of class("mbnma.network"), which contains indices for study arms, numeric variables for treatments, agents and classes, and stores a vector of treatment, agent and class names as an element within the object. By convention, agents are numbered alphabetically, though if the original data for agents is provided as a factor then the factor codes will be used. This object then contains all the necessary information for subsequent MBNMAdose functions.
Examining how the evidence in the network is connected and identifying which studies compare which treatments/agents helps to understand which effects can be estimated, what information will inform those estimates, and whether linking via the dose-response relationship is possible if the network is disconnected at the treatment level. The complexity of dose-response relationship that can be estimated also depends on the number of doses of each agent available, so this is important to know.

Network plots can be drawn to show which treatments/agents have been compared in head-to-head trials. Typically the thickness of connecting lines ("edges") is proportional to the number of studies that make a particular comparison, and the size of treatment nodes ("vertices") is proportional to the total number of patients in the network who were randomised to a given treatment/agent (provided N is included as a variable in the original dataset given to mbnma.network()).
In MBNMAdose these plots are generated using igraph, and can be drawn by calling plot(). The generated plots are objects of class("igraph"), meaning that, in addition to the options specified in plot(), various igraph functions can subsequently be used to make more detailed edits to them.

Within these network plots, vertices are automatically aligned in a circle (the default) and can be tidied by shifting the label distance away from the nodes.
# Prepare data using the triptans dataset
tripnet <- mbnma.network(HF2PPITT)
#> Values for `agent` with dose = 0 have been recoded to `Placebo`
#> agent is being recoded to enforce sequential numbering and allow inclusion of `Placebo`
summary(tripnet)
#> Description: Network
#> Number of studies: 70
#> Number of treatments: 23
#> Number of agents: 8
#> Median (min, max) doses per agent (incl placebo): 4 (3, 6)
#> Agent-level network is CONNECTED
#> Treatment-level network is CONNECTED
# Draw network plot
plot(tripnet)
If some vertices are not connected to the network reference treatment through any pathway of head-to-head evidence, a warning will be given. The nodes that are coloured white represent these disconnected vertices.
# Prepare data using the gout dataset
goutnet <- mbnma.network(GoutSUA_2wkCFB)
summary(goutnet)
#> Description: Network
#> Number of studies: 10
#> Number of treatments: 19
#> Number of agents: 6
#> Median (min, max) doses per agent (incl placebo): 5 (3, 6)
#> Agent-level network is DISCONNECTED
#> Treatment-level network is DISCONNECTED
plot(goutnet, label.distance = 5)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> Allo_245
#> Allo_256
#> Allo_300
#> Allo_400
#> Benz_50
#> Benz_139
#> Benz_143
#> Benz_200
#> Febu_40
#> Febu_80
#> Febu_120
#> RDEA_100
#> RDEA_200
#> RDEA_400
However, whilst many of these vertices are disconnected at the treatment level (a specific dose of a specific agent), they are connected at the agent level (via different doses of the same agent), meaning that it is possible to estimate results via the dose-response relationship.
# Plot at the agent-level
plot(goutnet, level = "agent", label.distance = 6)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> RDEA
One agent (RDEA) is still not connected to the network, but MBNMAdose allows agents to connect via a placebo response even if they do not include placebo in a head-to-head trial (see Linking disconnected treatments via the dose-response relationship, below).
# Plot connections to placebo via a two-parameter dose-response function (e.g.
# Emax)
plot(goutnet, level = "agent", doselink = 2, remove.loops = TRUE, label.distance = 6)
#> Dose-response connections to placebo plotted based on a dose-response
#> function with 1 degrees of freedom
It is also possible to plot a network at the treatment level but to colour the doses by the agent that they belong to.
# Colour vertices by agent
plot(goutnet, v.color = "agent", label.distance = 5)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> Allo_245
#> Allo_256
#> Allo_300
#> Allo_400
#> Benz_50
#> Benz_139
#> Benz_143
#> Benz_200
#> Febu_40
#> Febu_80
#> Febu_120
#> RDEA_100
#> RDEA_200
#> RDEA_400
Several further options exist to allow for inclusion of disconnected treatments, such as assuming some sort of common effect among agents within the same class. This is discussed in more detail later in the vignette.
In order to consider which functional forms may be appropriate for modelling the dose-response relationship, it is useful to look at results from a "split" network meta-analysis (NMA), in which each dose of an agent is considered as separate and unrelated (i.e. we are not assuming any dose-response relationship). The nma.run() function performs a simple NMA, and by default it drops studies that are disconnected at the treatment level (since estimates for these will be very uncertain if included).
# Run a random effect split NMA using the alogliptin dataset
alognet <- mbnma.network(alog_pcfb)
nma.alog <- nma.run(alognet, method = "random")
print(nma.alog)
#> $jagsresult
#> Inference for Bugs model at "C:\Users\hp17602\AppData\Local\Temp\RtmpuCnbSK\file7648af1520c", fit using jags,
#> 3 chains, each with 10000 iterations (first 5000 discarded), n.thin = 5
#> n.sims = 3000 iterations saved
#> mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat
#> d[1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d[2] -0.453 0.089 -0.625 -0.511 -0.454 -0.395 -0.273 1.001
#> d[3] -0.653 0.046 -0.739 -0.683 -0.655 -0.623 -0.558 1.001
#> d[4] -0.709 0.045 -0.795 -0.740 -0.710 -0.679 -0.622 1.001
#> d[5] -0.759 0.087 -0.925 -0.814 -0.760 -0.703 -0.582 1.001
#> d[6] -0.678 0.171 -1.013 -0.794 -0.679 -0.567 -0.334 1.003
#> sd 0.123 0.028 0.076 0.104 0.121 0.140 0.185 1.005
#> totresdev 46.863 9.768 29.731 39.864 46.269 52.996 67.676 1.001
#> deviance -124.451 9.768 -141.583 -131.450 -125.045 -118.318 -103.638 1.001
#> n.eff
#> d[1] 1
#> d[2] 2200
#> d[3] 3000
#> d[4] 2500
#> d[5] 3000
#> d[6] 730
#> sd 410
#> totresdev 2800
#> deviance 3000
#>
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#>
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 47.7 and DIC = -77.3
#> DIC is an estimate of expected predictive error (lower deviance is better).
#>
#> $trt.labs
#> [1] "Placebo_0" "alogliptin_6.25" "alogliptin_12.5" "alogliptin_25"
#> [5] "alogliptin_50" "alogliptin_100"
#>
#> attr(,"class")
#> [1] "nma"
# Draw plot of NMA estimates plotted by dose
plot(nma.alog)
In the alogliptin dataset there appears to be a dose-response relationship, and it also appears to be non-linear.
One additional use of nma.run() is that it can be run after fitting an MBNMA, to check that fitting a dose-response function is not leading to poorer model fit than conducting a conventional NMA. Comparing the total residual deviance between NMA and MBNMA models is useful for identifying whether introducing a dose-response relationship leads to poorer model fit. However, it is important to note that if treatments are disconnected in the NMA and have been dropped (drop.discon=TRUE), there will be fewer observations present in the dataset, which will lead to lower pD and lower residual deviance, meaning that model fit statistics from NMA and MBNMA may not be directly comparable.
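As a rough sketch of such a comparison (the MBNMA object, here called mbnma.alog, is not fitted in this vignette, and extracting totresdev from BUGSoutput assumes the usual R2jags output structure):

# Posterior summaries of total residual deviance from the split NMA and an MBNMA
# fitted to the same connected network (both objects are R2jags outputs)
nma.alog$jagsresult$BUGSoutput$summary["totresdev", ]
mbnma.alog$BUGSoutput$summary["totresdev", ]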
MBNMA is performed in MBNMAdose by applying mbnma.run(). An "mbnma.network" object must be provided as the data for mbnma.run(). The key arguments within mbnma.run() involve specifying the functional form used to model the dose-response relationship, and the dose-response parameters that comprise that functional form.

Several different functional forms are implemented within MBNMAdose, allowing a variety of parameterisations and dose-response shapes. These are provided to the fun argument in mbnma.run(). In the following, \(x_{i,k}\) refers to the dose and \(t_{i,k}\) to the agent in arm \(k\) of study \(i\):
"linear"
: \(f(x_{i,k}, t_{i,k})=\beta_{t_{i,k}} x_{i,k}\) where \(\beta_{t_{i,k}}\) is the slope
"exponential"
: \(f(x_{i,k}, t_{i,k})=\lambda_{t_{i,k}} (1 - e^{-x_{i,k}})\) where \(\lambda\) is the rate of exponential growth/decay
"emax"
: \(f(x_{i,k}, t_{i,k})=\dfrac{Emax_{t_{i,k}} \times x_{i,k}} {ED50_{t_{i,k}} + x_{i,k}}\) where \(Emax\) is the maximum response that can be achieved and \(ED50\) is the dose at which 50% of the maximum response is achieved
"emax.hill"
: \(f(x_{i,k}, t_{i,k})=\dfrac{Emax_{t_{i,k}} \times x_{i,k}^\gamma}{ED50_{t_{i,k}}^\gamma + x_{i,k}^\gamma}\) where \(Emax\) is the maximum response that can be achieved, \(ED50\) is the dose at which 50% of the maximum response is achieved and \(\gamma\) is the Hill parameter.
"rcs"
(restricted cubic spline): \(f(x_{i,k}, t_{i,k})=\sum_{p=1}^{P} \beta_{p,t_{i,k}} X_{p,i,k}\) where \(\beta_{p,t_{i,k}}\) is the regression coefficient for spline \(p^{th}\) and \(X_{1:P,i,k}\) is the basis matrix for the spline.
"nonparam.up"
- Non-parametric monotonically increasing dose-response
"nonparam.down"
- Non-parametric monotonically decreasing dose-response
"user"
: Any function that can be explicitly defined by the user within user.fun
(see User-defined dose-response function)
In mbnma.run() it is possible to specify up to four different dose-response parameters, depending on the dose-response function used. These are named beta.1, beta.2, beta.3 and beta.4, and their interpretation varies depending on the dose-response function used (see ?mbnma.run).
For simplification and interpretability, both in the way in which dose-response parameters are defined in the model and in how they are reported in the output, wrapper functions are provided for each of the commonly used dose-response functions in mbnma.run(). For example, mbnma.emax() is equivalent to mbnma.run(fun="emax"), but with different naming of dose-response parameters (emax instead of beta.1 and ed50 instead of beta.2). Several of these wrapper functions are used in other examples in this vignette.
Dose-response parameters can be specified in different ways, which affects the key parameters estimated by the model and implies different modelling assumptions. The following specifications are available for each parameter:

* "rel" indicates that relative effects should be pooled for this dose-response parameter separately for each agent in the network. This preserves randomisation within included studies and is likely to vary less between studies (only due to effect modification).
* "common" indicates that a single absolute value for this dose-response parameter should be estimated across the whole network that does not vary by agent. This is particularly useful for parameters expected to be constant (e.g. Hill parameters in mbnma.emax.hill()).
* "random" indicates that a single absolute value should be estimated separately for each agent, but that all the agent values vary randomly around a single mean absolute network effect. It is similar to "common" but makes slightly less strong assumptions.
* numeric(): the parameter is assigned a numeric value by the user. This is similar to "common", but the single absolute value is fixed rather than estimated from the data.

In mbnma.run(), an additional argument, method, indicates which method to use for pooling relative effects. It can take either the value "common", implying that all studies estimate the same true effect (akin to a "fixed effect" meta-analysis), or "random", implying that each study estimates a separate true effect, but that these true effects vary randomly around a true mean effect. The latter allows for modelling of between-study heterogeneity.
If relative effects ("rel"
) are modelled on more than one dose-response parameter then by default, a correlation will be assumed between the dose-response parameters, which will typically improve estimation (provided the parameters are correlated…they usually are). This can be prevented by setting cor=FALSE
.
mbnma.run() returns an object of class c("rjags", "mbnma"). summary() provides posterior medians and 95% credible intervals (95%CrI) for the different parameters in the model, naming them by agent and giving some explanation of the way they have been specified in the model. print() can also be used to give full summary statistics of the posterior distributions for monitored nodes in the JAGS model. Estimates are automatically reported for parameters of interest depending on the model specification (unless otherwise specified in parameters.to.save).

Nodes that are automatically monitored (if present in the model) have the following interpretation. They will have an additional suffix that relates to the name/number of the dose-response parameter to which they correspond (e.g. d.ed50 or beta.1):
* d: The pooled effect for each agent for a given dose-response parameter. These will be estimated by the model for dose-response parameters specified as "rel" (e.g. mbnma.run(beta.1="rel")).
* sd (without a suffix): the between-study SD (heterogeneity) for relative effects, reported if method="random".
* D: The class effect for each class for a given dose-response parameter. These will be estimated by the model if specified in class.effect (see Class effects) for a given dose-response parameter.
* sd.D: The within-class SD for different agents within the same class. This will be estimated by the model if any dose-response parameter specified in class.effect (see Class effects) is set to "random".
* beta: The common or mean value of a given dose-response parameter across the whole network (does not vary by agent/class). This will be estimated by the model for dose-response parameters specified as "common" or "random" (e.g. mbnma.run(beta.1="common")).
* sd (with a suffix): the between-study SD (heterogeneity) for dose-response parameters modelled as exchangeable around a single mean value. This will be estimated by the model for dose-response parameters specified as "random".
* totresdev: The residual deviance of the model
* deviance: The deviance of the model

Model fit statistics for pD (effective number of parameters) and DIC (Deviance Information Criterion) are also reported, with an explanation as to how they have been calculated.
An example MBNMA of the triptans dataset using an Emax dose-response function and common treatment effects that pool relative effects on both Emax and ED50 parameters follows:
# Run an Emax dose-response MBNMA
mbnma <- mbnma.run(tripnet, fun = "emax", beta.1 = "rel", beta.2 = "rel", method = "common")
#> `likelihood` not given by user - set to `binomial` based on data provided
#> `link` not given by user - set to `logit` based on assigned value for `likelihood`
summary(mbnma)
#> ========================================
#> Dose-response MBNMA
#> ========================================
#>
#> Dose-response function: emax
#>
#> Pooling method
#>
#> Method: Common (fixed) effects estimated for relative effects
#>
#>
#>
#>
#> beta.1 (emax, emax) dose-response parameter results
#>
#> Pooling: relative effects
#>
#> Parameter Median 2.5% 97.5%
#> eletriptan d.1[2] 2.538276 2.2419184 2.913613
#> sumatriptan d.1[3] 1.753897 1.5215865 2.083017
#> frovatriptan d.1[4] 1.891403 1.3004045 2.972405
#> almotriptan d.1[5] 1.875362 1.3773170 2.931678
#> zolmitriptan d.1[6] 2.024466 1.6018714 2.644516
#> naratriptan d.1[7] 1.084020 0.5916346 2.049913
#> rizatriptan d.1[8] 2.361248 1.8656037 3.432664
#>
#>
#> beta.2 (emax, ed50) dose-response parameter results
#>
#> Parameter modelled on exponential scale to ensure it takes positive values
#> on the natural scale
#> Pooling: relative effects
#>
#> Parameter Median 2.5% 97.5%
#> eletriptan d.2[2] -0.7304136 -1.1633609 -0.3327194
#> sumatriptan d.2[3] -0.6672289 -1.2807633 -0.1272411
#> frovatriptan d.2[4] -0.5096691 -1.5624433 0.4872680
#> almotriptan d.2[5] -0.1869783 -0.9932092 0.6701474
#> zolmitriptan d.2[6] -0.4210627 -1.1953366 0.1899742
#> naratriptan d.2[7] -0.1829688 -1.2360775 1.0008551
#> rizatriptan d.2[8] -0.6279711 -1.4274515 0.1892692
#>
#>
#>
#>
#> Model Fit Statistics
#>
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 79.9
#>
#> Deviance = 1169.6
#> Residual deviance = 266.5
#> Deviance Information Criterion (DIC) = 1249.4
# An alternative would be to use an Emax wrapper for mbnma.run() which would give
# the same result but with more easily interpretable parameter names
mbnma.emax(tripnet, emax = "rel", ed50 = "rel", method = "common")
In this example the d.1/d.emax parameters are the effects of each agent for the dose-response parameter beta.1/emax. For an Emax model this corresponds to the maximum response that can be achieved for a particular agent. The d.2/d.ed50 parameters are the effects for each agent for beta.2/ed50, which (for an Emax function) corresponds to the dose at which 50% of the maximum response is achieved. Results for ED50 are given on the log scale, as it is constrained to be greater than zero.

Instead of estimating a separate relative effect for each agent, a simpler dose-response model that makes stronger assumptions could estimate a single ED50 parameter across the whole network, while still estimating a separate Emax effect for each agent. Here we also model random relative effects:
# Emax model with single parameter estimated for Emax
emax <- mbnma.emax(tripnet, emax = "rel", ed50 = "common", method = "random")
#> `likelihood` not given by user - set to `binomial` based on data provided
#> `link` not given by user - set to `logit` based on assigned value for `likelihood`
summary(emax)
#> ========================================
#> Dose-response MBNMA
#> ========================================
#>
#> Dose-response function: emax
#>
#> Pooling method
#>
#> Method: Random effects estimated for relative effects
#>
#> Parameter Median (95%CrI)
#> -----------------------------------------------------------------------
#> Between-study SD for relative effects 0.248 (0.16, 0.338)
#>
#>
#> emax dose-response parameter results
#>
#> Pooling: relative effects
#>
#> Parameter Median 2.5% 97.5%
#> eletriptan d.emax[2] 2.804818 2.3014276 3.463684
#> sumatriptan d.emax[3] 1.894138 1.6071997 2.332513
#> frovatriptan d.emax[4] 2.060035 1.3527891 2.961884
#> almotriptan d.emax[5] 1.774588 1.3505504 2.370081
#> zolmitriptan d.emax[6] 2.111987 1.6689420 2.685967
#> naratriptan d.emax[7] 1.022732 0.4276458 1.681113
#> rizatriptan d.emax[8] 2.697588 2.1727235 3.576930
#>
#>
#> ed50 dose-response parameter results
#>
#> Parameter modelled on exponential scale to ensure it takes positive values
#> on the natural scale
#> Pooling: single parameter shared across the network
#>
#> Parameter Median 2.5% 97.5%
#> beta.ed50 -0.3868821 -0.9243471 0.1521089
#>
#>
#>
#>
#> Model Fit Statistics
#>
#> Effective number of parameters:
#> pD (pV) calculated using the rule, pD = var(deviance)/2 = 190.3
#>
#> Deviance = 1094
#> Residual deviance = 190.9
#> Deviance Information Criterion (DIC) = 1284.3
In this example the d.1/d.emax parameters are the effects of each agent for the dose-response parameter beta.1/emax, as previously. But now there is a beta.ed50 parameter in the output (instead of d.ed50), which is the absolute value of ED50 (on the log scale) across all agents in the network. As we have modelled random relative effects, we have also estimated a parameter for sd, the between-study standard deviation for relative effects.

The total residual deviance (totresdev) is lower in the second model, indicating a better fit, but the effective number of parameters (pD) is much greater due to the modelling of random effects, and overall the DIC is higher, suggesting that the first model is a better compromise between fit and complexity. Furthermore, the first model makes less strong assumptions regarding the exchangeability of ED50 effects between agents.
Several additional arguments can be given to mbnma.run() that require further explanation.
Similar effects between agents within the network can be modelled using class effects. This requires assuming that different agents have some sort of common class effect, perhaps due to similar mechanisms of action. One advantage of this is that class effects can be used to connect agents that might otherwise be disconnected from the network, and they can also provide additional information on agents that might otherwise have insufficient data available to estimate a desired dose-response relationship. The drawback is that this requires making additional assumptions regarding the similarity of efficacy between agents.
In particular, the scales for different dose-response parameters must be the same for this assumption to be valid. For example, in an Emax model it may be reasonable to assume a class effect on the Emax parameter, as this is parameterised on the response scale and so could be similar across agents of the same class. However, the scale for the ED50 parameter is on the dose scale, which is likely to differ for each agent and so an assumption of similarity between agents for this parameter may be less valid.
Class effects can only be applied to dose-response parameters which vary by agent. In mbnma.run() they are supplied as a list, in which each element is named following the name of the corresponding dose-response parameter as defined in the dose-response function. The names will therefore differ when using wrapper functions for mbnma.run(). The class effect for each dose-response parameter can be either "common", in which the effects for each agent within the same class are constrained to a common class effect, or "random", in which the effects for each agent within the same class are assumed to be randomly distributed around a shared class mean.
When working with class effects in MBNMAdose, a variable named class must be included in the original data frame provided to mbnma.network(). Below we assign a shared class to two similar agents in the dataset, Naproxcinod and Naproxen. We will estimate separate effects for all other agents, so we set their classes to be equal to their agents.
# Using the osteoarthritis dataset
pain.df <- osteopain_2wkabs
# Set class equal to agent for all agents
pain.df$class <- as.character(pain.df$agent)
# Set a shared class (NSAID) only for Naproxcinod and Naproxen
pain.df$class[pain.df$agent %in% c("Naproxcinod", "Naproxen")] <- "NSAID"
# Run a restricted cubic spline MBNMA with a common class effect on beta.1
classnet <- mbnma.network(pain.df)
splines <- mbnma.run(classnet, fun = "rcs", class.effect = list(beta.1 = "common"))
Mean class effects are given in the output as D parameters (e.g. D.1, or D.ed50 when wrapper-function parameter names are used). These can be interpreted as the effect of each class for the dose-response parameter on which the class effect was modelled (here beta.1). Note that the number of these parameters is therefore equal to the number of classes defined in the dataset.
If we had instead specified that the class effects were "random", each agent's effect for beta.1 would be assumed to be randomly distributed around its class mean, with SD given in the output as sd.D.1 (or sd.D.ed50 with wrapper naming).
mbnma.run() automatically models a correlation between dose-response parameters modelled using relative effects (unless cor=FALSE). The correlation is modelled using a vague Wishart prior, but this can be made more informative by indicating the relative magnitudes of the scales of the parameters modelled using relative effects.

var.scale can be used for this - it takes a numeric vector with the same length as the number of relative-effect dose-response parameters, and the relative magnitude of its values indicates the relative magnitude of the parameter scales. Each element of var.scale corresponds to the relevant dose-response parameter (i.e. var.scale[1] corresponds to beta.1).
For example, with the triptans dataset we might expect that values for Emax will be around 4 times larger than those for ED50 (on the log scale).
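A minimal sketch of how this might be specified (illustrative only; it assumes the elements of var.scale are matched in order to beta.1, i.e. Emax, and beta.2, i.e. ED50, as described above):

# Relative effects on both Emax and ED50, with Emax assumed to be on a scale
# roughly 4 times larger than ED50
trip.vscale <- mbnma.emax(tripnet, emax = "rel", ed50 = "rel", method = "common",
    var.scale = c(4, 1))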
Users can define their own dose-response function rather than using one of the functions provided in mbnma.run(). By specifying fun="user" in the arguments, a dose-response relationship can then be provided to user.fun, specified in terms of beta parameters and dose. This allows a huge degree of flexibility when defining the dose-response relationship.
The function assigned to user.fun needs to fulfil a few criteria to be valid:

* dose must always be included in the function
* At least one beta dose-response parameter must be specified, up to a maximum of four. These must always be named beta.1, beta.2, beta.3 and beta.4, and must be included sequentially (i.e. don't include beta.3 if beta.2 is not included)
* Indices used by JAGS should not be added to user.fun (e.g. use dose rather than dose[i,k])
* Any mathematical/logical operators that can be implemented in JAGS can be added to the function (e.g. exp(), ifelse())
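As a hedged sketch (this assumes user.fun accepts the relationship written as a character string in JAGS-compatible syntax, consistent with the criteria above - check ?mbnma.run for the exact form expected by your version of the package), a quadratic dose-response could be specified as:

# Quadratic dose-response with two agent-specific parameters (beta.1 and beta.2)
quad <- mbnma.run(tripnet, fun = "user", user.fun = "(beta.1*dose) + (beta.2*(dose^2))",
    beta.1 = "rel", beta.2 = "rel", method = "common")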
Different dose-response functions can be used for different agents within the network. This allows for the modelling of more complex dose-response functions in agents for which there are many doses available, and less complex functions in agents for which there are fewer doses available. Note that these models are typically less computationally stable than single dose-response function models, and they are likely to benefit less from modelling correlation between multiple dose-response parameters since there are fewer agents informing correlations between each dose-response parameter.
This can be modelled in mbnma.run() by assigning a character vector of dose-response functions to the fun argument, with each element in the vector corresponding to an agent in the network. When using multiple dose-response functions, the length of fun must therefore be equal to the number of agents in the network. The choice of dose-response function used for Placebo is irrelevant, since evaluating the function at dose=0 will always equal 0.

Dose-response parameters across the different functions are numbered in increasing order following the function ordering "user", "linear", "exponential", "emax", "emax.hill", "rcs". For example, if a model were to use exponential and Emax functions, beta.1 would correspond to the rate parameter of the exponential function, beta.2 to the Emax parameter of the Emax function, and beta.3 to the ED50 parameter of the Emax function.
# Create an mbnma.network object from the psoriasis dataset, assuming the r75
# outcome has been selected as the response (this step is not shown in the
# original extract and is included here so the code below is self-contained)
psoriasis$r <- psoriasis$r75
psorinet <- mbnma.network(psoriasis)

# Placebo can be modelled using any function (since it will evaluate to 0)
# Adalimumab and Guselkumab: exponential function (limited dose-response info)
# All others: restricted cubic spline with 3 knots
dr.funs <- rep(NA, length(psorinet$agents))
dr.funs[which(psorinet$agents %in% c("Placebo", "Adalimumab", "Guselkumab"))] <- "exponential"
dr.funs[which(!psorinet$agents %in% c("Placebo", "Adalimumab", "Guselkumab"))] <- "rcs"

multifun <- mbnma.run(psorinet, fun = dr.funs, method = "common", knots = 3, n.iter = 50000)
summary(multifun)
For a more flexible dose-response shape, restricted cubic splines can be fitted to the data by setting fun="rcs". The model is very flexible and can allow for non-monotonic dose-response relationships, though the parameters can be difficult to interpret. This follows the method of Hamza et al. (2020).

To fit this model, the number/location of knots should be specified. If a single number is given, it represents the number of knots to be equally spaced across the dose range of each agent. Alternatively, several probabilities can be given that represent the quantiles of the dose range for each agent at which knots should be located.
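For illustration, a brief sketch (object names are illustrative) of both ways of specifying knots, using the alogliptin network created earlier:

# Three knots spaced equally across the dose range of each agent
rcs.alog <- mbnma.run(alognet, fun = "rcs", knots = 3, method = "common")

# Knots located at the 10th, 50th and 90th percentiles of each agent's dose range
rcs.alog2 <- mbnma.run(alognet, fun = "rcs", knots = c(0.1, 0.5, 0.9), method = "common")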
Default vague priors for the model are as follows:
\[ \begin{aligned} &d_{p,a} \sim N(0,10000)\\ &\beta_{p} \sim N(0,10000)\\ &\sigma \sim N(0,400) \text{ limited to } x \in [0,\infty]\\ &\sigma_{p} \sim N(0,400) \text{ limited to } x \in [0,\infty]\\ &D_{p,c} \sim N(0,1000)\\ &\sigma^{D}_{p} \sim N(0,400) \text{ limited to } x \in [0,\infty]\\ \end{aligned} \]
where \(p\) is an identifier for the dose-response parameter (e.g. 1 for Emax and 2 for ED50), \(a\) is an agent identifier and \(c\) is a class identifier.
Users may wish to change these, perhaps in order to use more/less informative priors, but also because the default prior distributions in some models may lead to errors when compiling/updating models.
If the model fails during compilation/updating (i.e. due to a problem in JAGS), mbnma.run() will generate an error and return a list of the arguments that mbnma.run() used to generate the model. Within this (as within a model that has run successfully), the priors used by the model (in JAGS syntax) are stored within "model.arg":
print(mbnma$model.arg$priors)
#> $mu
#> [1] "dnorm(0,0.001)"
#>
#> $inv.R
#> [1] "dwish(Omega[,], 2)"
In this way a model can first be run with vague priors and then rerun with different priors, perhaps to allow successful computation, perhaps to provide more informative priors, or perhaps to run a sensitivity analysis with different priors. Increasing the precision of prior distributions only a little can also often improve convergence considerably.
To change priors within a model, a list of replacements can be provided to priors in mbnma.run(). The name of each element is the name of the parameter to change (without indices) and the value of the element is the JAGS distribution to use for the prior. This can include censoring or truncation if desired. Only the priors to be changed need to be specified - priors for parameters that aren't specified will take their default values.
For example, if we wanted to use tighter priors for the half-normal SD parameters we could increase the precision.
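A possible sketch of this (the parameter name sd and the chosen precision are illustrative assumptions; truncation uses JAGS T(0,) syntax):

# Tighter half-normal prior for the between-study SD
new.priors <- list(sd = "dnorm(0, 0.5) T(0,)")
emax.prior <- mbnma.emax(tripnet, emax = "rel", ed50 = "rel", method = "random",
    priors = new.priors)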
The default value for pd in mbnma.run() is "pv", which uses the value automatically calculated in the R2jags package as pv = var(deviance)/2. Whilst this is easy to calculate, it is numerically less stable than pD and may perform more poorly in certain conditions (Gelman, Hwang, and Vehtari 2014).
A commonly used approach for calculating pD is the plug-in method (pd="plugin") (Spiegelhalter et al. 2002). However, this can sometimes result in nonsensical negative values, due to the skewed posterior distributions for deviance contributions that can arise when fitting non-linear models.
Another approach that is more reliable than the plug-in method when modelling non-linear effects is to use the Kullback-Leibler divergence (pd="pd.kl") (Plummer 2008). The disadvantage of this approach is that it requires running additional MCMC iterations, so it can be slightly slower to calculate.
Finally, pD can also be calculated using an optimism adjustment (pd="popt"), which allows for calculation of the penalized expected deviance (Plummer 2008). This adjustment accounts for the fact that the data used to estimate the model are the same as those used to assess its parsimony. It also requires running additional MCMC iterations.
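For example, a sketch of requesting the Kullback-Leibler approach described above when fitting a model (object name illustrative):

# Calculate the effective number of parameters using Kullback-Leibler divergence
emax.pdkl <- mbnma.emax(tripnet, emax = "rel", ed50 = "rel", method = "common", pd = "pd.kl")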
In addition to the arguments specific to mbnma.run(), it is also possible to pass any arguments accepted by R2jags::jags(). Most of these relate to improving the performance of MCMC simulations in JAGS. Some of the key arguments that may be of interest are:

* n.chains: The number of Markov chains to run (default is 3)
* n.iter: The total number of iterations per MCMC chain
* n.burnin: The number of iterations that are discarded to ensure iterations are only saved once chains have converged
* n.thin: The thinning rate, which ensures that results are only saved for 1 in every n.thin iterations per chain. This can be increased to reduce autocorrelation.

One of the strengths of dose-response MBNMA is that it allows treatments to be connected in a network that might otherwise be disconnected, by linking up different doses of the same agent via the dose-response relationship. To illustrate this we can generate a version of the gout dataset which excludes placebo (to artificially disconnect the network):
# Generate dataset without placebo
noplac.gout <- GoutSUA_2wkCFB[!GoutSUA_2wkCFB$studyID %in% c(2001, 3102), ] # Drop two-arm placebo studies
noplac.gout <- noplac.gout[noplac.gout$agent != "Plac", ] # Drop placebo arm from multi-arm studies
# Create mbnma.network object
noplac.net <- mbnma.network(noplac.gout)
# Plot network
plot(noplac.net, label.distance = 5)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> Allo_300
#> Allo_400
#> Arha_400
#> Arha_600
#> Benz_50
#> Benz_200
#> Febu_40
#> Febu_80
#> Febu_120
#> RDEA_100
#> RDEA_200
#> RDEA_400
This results in a very disconnected network, and if we were to conduct a conventional “split” NMA (whereby different doses of an agent are considered to be independent), we would only be able to estimate relative effects for a very small number of treatments. However, if we assume a dose-response relationship then these different doses can be connected via this relationship, and we can connect up more treatments and agents in the network.
# Network plot at the agent level illustrates how doses can connect using MBNMA
plot(noplac.net, level = "agent", remove.loops = TRUE, label.distance = 4)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> Arha
#> RDEA
There are still two agents that do not connect to the network because they involve comparisons of different doses of the same agent. However, multiple doses of an agent within a study allow us to estimate the dose-response relationship and tell us something about the placebo (dose = 0) response - the number of different doses of an agent within a study will determine the degrees of freedom with which we are able to estimate a given dose-response function. Although the placebo response is not estimated directly in the MBNMA framework (it is modelled as a nuisance parameter), it allows us to connect the dose-response function estimated for an agent in one study, with that in another.
To visualise this, we can use the doselink argument when plotting an "mbnma.network" object. The integer given to this argument indicates the minimum number of doses from which a dose-response function could be estimated, and is equivalent to the number of parameters in the desired dose-response function plus one. For example, for an exponential function we would require at least two doses on a dose-response curve (including placebo), since this would allow one degree of freedom with which to estimate the one-parameter dose-response function. By modifying the doselink argument we can determine the complexity of dose-response function that we might expect to be able to estimate whilst still connecting all agents within the network.

If placebo is not included in the original dataset, then this argument will also add a node for placebo to illustrate the connection.
# Network plot assuming connectivity via two doses Allows estimation of a
# single-parameter dose-response function
plot(noplac.net, level = "agent", remove.loops = TRUE, label.distance = 4, doselink = 2)
#> Dose-response connections to placebo plotted based on a dose-response
#> function with 1 degrees of freedom
# Network plot assuming connectivity via three doses Allows estimation of a
# two-parameter dose-response function
plot(noplac.net, level = "agent", remove.loops = TRUE, label.distance = 4, doselink = 3)
#> Warning in check.network(g): The following treatments/agents are not connected
#> to the network reference:
#> Allo
#> Arha
#> Benz
#> Febu
#> Dose-response connections to placebo plotted based on a dose-response
#> function with 2 degrees of freedom
In this way we can fully connect up treatments in an otherwise disconnected network, though unless informative prior information is used this will be limited by the number of doses of agents within included studies.
In addition to the parametric dose-response functions described above, two non-parametric monotonic dose-response relationships can also be specified in mbnma.run(). Setting fun="nonparam.up" or fun="nonparam.down" specifies a monotonically increasing or decreasing dose-response respectively. This is achieved in the model by imposing restrictions on the prior distributions of treatment effects, which ensure that each increasing dose of an agent has an effect that is either the same as, or greater/less than (for "nonparam.up" and "nonparam.down" respectively), that of the previous dose. The approach results in a model similar to that developed by Owen et al. (2015).
By making this assumption, this model is slightly more informative, and can lead to some slight gains in precision if relative effects are otherwise imprecisely estimated. However, because a functional form for the dose-response is not modelled, it cannot be used to connect networks that are disconnected at the treatment-level, unlike a parametric MBNMA.
In the case of MBNMA, it may be useful to compare the fit of a non-parametric model to that of a parametric dose-response function, to ensure that fitting a parametric dose-response function does not lead to significantly poorer model fit.
When fitting a non-parametric dose-response model there is no need to specify arguments for dose-response parameters (beta.1, beta.2, beta.3, beta.4), as these are ignored in this modelling approach. method can still be used to specify either "common" or "random" effects. It is important to choose correctly between "nonparam.up" and "nonparam.down" depending on the expected direction of effect, as the wrong choice can lead to computation errors.
nonparam <- mbnma.run(tripnet, fun = "nonparam.up", method = "random")
#> `likelihood` not given by user - set to `binomial` based on data provided
#> `link` not given by user - set to `logit` based on assigned value for `likelihood`
#> Modelling non-parametric dose-response - arguments for dose-response parameters:
#> `beta.1`, `beta.2`, `beta.3`, `beta.4` will be ignored
print(nonparam)
#> Inference for Bugs model at "C:\Users\hp17602\AppData\Local\Temp\RtmpuCnbSK\file764864a65aac", fit using jags,
#> 3 chains, each with 10000 iterations (first 5000 discarded), n.thin = 5
#> n.sims = 3000 iterations saved
#> mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat
#> d.1[1,1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[1,2] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,2] 1.201 0.174 0.864 1.080 1.199 1.318 1.539 1.001
#> d.1[3,2] 1.749 0.121 1.511 1.667 1.748 1.833 1.984 1.004
#> d.1[4,2] 2.052 0.144 1.771 1.953 2.055 2.152 2.334 1.006
#> d.1[1,3] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,3] 0.946 0.139 0.634 0.863 0.959 1.047 1.178 1.001
#> d.1[3,3] 1.114 0.082 0.952 1.059 1.113 1.170 1.270 1.002
#> d.1[4,3] 1.255 0.112 1.047 1.174 1.254 1.335 1.473 1.001
#> d.1[5,3] 1.456 0.087 1.291 1.396 1.456 1.514 1.628 1.004
#> d.1[1,4] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,4] 1.244 0.205 0.830 1.115 1.241 1.377 1.640 1.003
#> d.1[3,4] 1.619 0.315 1.051 1.393 1.596 1.819 2.292 1.003
#> d.1[1,5] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,5] 0.614 0.243 0.127 0.447 0.626 0.790 1.062 1.008
#> d.1[3,5] 1.030 0.124 0.796 0.945 1.027 1.113 1.278 1.005
#> d.1[4,5] 1.438 0.214 1.046 1.289 1.428 1.580 1.872 1.002
#> d.1[1,6] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,6] 0.800 0.309 0.148 0.584 0.832 1.038 1.300 1.012
#> d.1[3,6] 1.251 0.117 1.026 1.172 1.251 1.331 1.475 1.001
#> d.1[4,6] 1.561 0.194 1.217 1.422 1.551 1.694 1.947 1.001
#> d.1[5,6] 1.899 0.281 1.409 1.693 1.878 2.075 2.489 1.002
#> d.1[6,6] 2.934 0.603 1.886 2.501 2.891 3.318 4.221 1.009
#> d.1[1,7] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,7] 0.553 0.205 0.144 0.409 0.553 0.691 0.946 1.010
#> d.1[3,7] 1.005 0.307 0.467 0.785 0.987 1.200 1.666 1.002
#> d.1[1,8] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
#> d.1[2,8] 0.481 0.313 0.022 0.223 0.438 0.703 1.157 1.005
#> d.1[3,8] 1.264 0.164 0.943 1.152 1.265 1.380 1.577 1.001
#> d.1[4,8] 1.609 0.102 1.417 1.538 1.606 1.677 1.814 1.001
#> sd 0.262 0.045 0.176 0.230 0.260 0.292 0.352 1.012
#> totresdev 188.918 18.829 153.868 176.128 188.238 200.932 228.765 1.006
#> deviance 1092.024 18.829 1056.974 1079.234 1091.344 1104.038 1131.871 1.006
#> n.eff
#> d.1[1,1] 1
#> d.1[1,2] 1
#> d.1[2,2] 3000
#> d.1[3,2] 530
#> d.1[4,2] 350
#> d.1[1,3] 1
#> d.1[2,3] 2300
#> d.1[3,3] 1200
#> d.1[4,3] 2300
#> d.1[5,3] 600
#> d.1[1,4] 1
#> d.1[2,4] 830
#> d.1[3,4] 990
#> d.1[1,5] 1
#> d.1[2,5] 2300
#> d.1[3,5] 440
#> d.1[4,5] 1100
#> d.1[1,6] 1
#> d.1[2,6] 750
#> d.1[3,6] 3000
#> d.1[4,6] 2900
#> d.1[5,6] 2000
#> d.1[6,6] 280
#> d.1[1,7] 1
#> d.1[2,7] 730
#> d.1[3,7] 2300
#> d.1[1,8] 1
#> d.1[2,8] 900
#> d.1[3,8] 2500
#> d.1[4,8] 3000
#> sd 170
#> totresdev 380
#> deviance 390
#>
#> For each parameter, n.eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
#>
#> DIC info (using the rule, pD = var(deviance)/2)
#> pD = 176.5 and DIC = 1267.8
#> DIC is an estimate of expected predictive error (lower deviance is better).
In the output from non-parametric models, the d.1 parameters represent the relative effect for each treatment (a specific dose of a specific agent) versus the reference treatment, similar to a standard Network Meta-Analysis. The first index of d represents the dose identifier and the second index represents the agent identifier. Information on the specific values of the doses is not included in the model, as only their ordering (lowest to highest) is important.
Note that some post-estimation functions (e.g. ranking, prediction) cannot be performed on non-parametric models within the package.
For looking at post-estimation in MBNMA we will demonstrate using results from an Emax MBNMA on the triptans dataset unless specified otherwise:
tripnet <- mbnma.network(HF2PPITT)
#> Values for `agent` with dose = 0 have been recoded to `Placebo`
#> agent is being recoded to enforce sequential numbering and allow inclusion of `Placebo`
trip.emax <- mbnma.emax(tripnet, emax = "rel", ed50 = "rel")
#> `likelihood` not given by user - set to `binomial` based on data provided
#> `link` not given by user - set to `logit` based on assigned value for `likelihood`
To assess how well a model fits the data, it can be useful to look at a plot of the contributions of each data point to the residual deviance. This can be done using devplot(). As individual deviance contributions are not automatically monitored in parameters.to.save, this may require the model to be run for additional iterations.
Results can be plotted either as a scatter plot (plot.type="scatter") or a series of boxplots (plot.type="box").
# Plot boxplots of residual deviance contributions (scatterplot is the default)
devplot(trip.emax, plot.type = "box")
#> `resdev` not monitored in mbnma$parameters.to.save.
#> additional iterations will be run in order to obtain results for `resdev`
From these plots we can see that whilst the model fit does not seem to be systematically non-linear (which would suggest an alternative dose-response function may be a better fit), residual deviance is high at a dose of 1 for eletriptan and at 2 for sumatriptan. This may indicate that fitting random effects would allow for additional variability in response, which could improve the model fit.
If saved to an object, the output of devplot() contains the results for individual deviance contributions, which can be used to identify any extreme outliers.
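For example (a brief sketch; here we simply save the output and inspect which elements it contains):

# Save the deviance plot output and examine its elements
devs <- devplot(trip.emax, plot.type = "box")
names(devs)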
Another approach for assessing model fit is to plot the fitted values, using fitplot(). As with devplot(), this may require running additional model iterations to monitor theta.
# Plot fitted and observed values with treatment labels
fitplot(trip.emax)
#> `theta` not monitored in mbnma$parameters.to.save.
#> additional iterations will be run in order to obtain results
Fitted values are plotted as connecting lines and observed values in the original dataset are plotted as points. These plots can be used to identify if the model fits the data well for different agents and at different doses along the dose-response function.
Forest plots can be easily generated from MBNMA models using the plot() method on an "mbnma" object. By default this will plot a separate panel for each dose-response parameter in the model. Forest plots can only be generated for parameters which are modelled using relative effects and which vary by agent/class.
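For example, for the Emax model fitted to the triptans dataset above:

# Forest plot with a panel for each dose-response parameter (emax and ed50)
plot(trip.emax)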
Rankings can be calculated for different dose-response parameters from MBNMA models by using rank() on an "mbnma" object. Any parameter monitored in an MBNMA model that varies by agent/class can be ranked. A vector of these is assigned to params. direction indicates whether negative responses should be ranked as "better" (-1) or "worse" (1).
ranks <- rank(trip.emax, direction = 1)
print(ranks)
#>
#> ================================
#> Ranking of dose-response MBNMA
#> ================================
#>
#> Includes ranking of relative effects from dose-response MBNMA:
#> d.ed50 d.emax
#>
#> 7 parameters ranked with positive responses being `better`
summary(ranks)
#> $d.ed50
#> rank.param mean sd 2.5% 25% 50% 75% 97.5%
#> 1 eletriptan 5.497667 1.308905 3 5 6 7 7
#> 2 sumatriptan 5.071000 1.544481 2 4 5 6 7
#> 3 frovatriptan 4.160667 2.032789 1 2 4 6 7
#> 4 almotriptan 2.548333 1.571566 1 1 2 3 7
#> 5 zolmitriptan 3.595333 1.698001 1 2 3 5 7
#> 6 naratriptan 2.636000 1.867619 1 1 2 4 7
#> 7 rizatriptan 4.491000 1.773782 1 3 5 6 7
#>
#> $d.emax
#> rank.param mean sd 2.5% 25% 50% 75% 97.5%
#> 1 eletriptan 1.630000 0.7663522 1 1 1 2 3
#> 2 sumatriptan 5.171333 0.9206487 3 5 5 6 7
#> 3 frovatriptan 4.312333 1.5786952 1 3 4 6 7
#> 4 almotriptan 4.419667 1.4636464 1 3 5 6 7
#> 5 zolmitriptan 3.624333 1.1654730 1 3 4 4 6
#> 6 naratriptan 6.781667 0.8235431 4 7 7 7 7
#> 7 rizatriptan 2.060667 0.9973220 1 1 2 3 4
The output is an object of class("mbnma.rank")
, containing a list for each ranked parameter in params
, which consists of a summary table of rankings and raw information on agent/class (depending on argument given to level
) ranking and probabilities. The summary median ranks with 95% credible intervals can be simply displayed using summary()
.
Histograms for ranking results can also be plotted using the plot() method, which takes the raw MCMC ranking results stored in the mbnma.rank object and plots the number of MCMC iterations for which each agent's parameter value was ranked in a particular position.
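For example, using the ranking object created above:

# Histograms of ranking probabilities for each agent and parameter
plot(ranks)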
Alternatively, cumulative ranking plots for all parameters can be plotted simultaneously, so that the effectiveness of different agents can be compared across parameters. The surface under the cumulative ranking curve (SUCRA) for each agent and parameter can also be estimated by setting `sucra=TRUE`.
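The output below comes from a call of the following form (a sketch; the `cumrank()` function and its `sucra` argument are assumed here and may differ between package versions):

# Cumulative ranking curves for all ranked parameters, returning SUCRA values
cumrank(ranks, sucra = TRUE)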
#> # A tibble: 14 x 3
#> agent parameter sucra
#> <fct> <chr> <dbl>
#> 1 eletriptan d.ed50 2.00
#> 2 eletriptan d.emax 5.61
#> 3 sumatriptan d.ed50 2.42
#> 4 sumatriptan d.emax 2.33
#> 5 frovatriptan d.ed50 3.28
#> 6 frovatriptan d.emax 3.16
#> 7 almotriptan d.ed50 4.79
#> 8 almotriptan d.emax 3.06
#> 9 zolmitriptan d.ed50 3.85
#> 10 zolmitriptan d.emax 3.86
#> 11 naratriptan d.ed50 4.67
#> 12 naratriptan d.emax 0.714
#> 13 rizatriptan d.ed50 2.98
#> 14 rizatriptan d.emax 5.28
After performing an MBNMA, responses can be predicted from the model parameter estimates using `predict()` on an `"mbnma"` object. A number of important arguments should be specified for prediction; see `?predict.mbnma` for a detailed specification of these arguments.
`E0` is the response at dose = 0 (equivalent to the placebo response). Since relative effects are the parameters estimated in MBNMA, the placebo response is not explicitly modelled and must therefore be provided by the user in some way. The simplest approach is to provide either a single numeric value for `E0` (deterministic approach), or a string representing a distribution for `E0` that can take any Random Number Generator (RNG) distribution for which a function exists in R (stochastic approach). Values should be given on the natural scale. For example, for a binomial outcome:
# Deterministic approach: a single value on the natural scale
E0 <- 0.2
# Stochastic approach: an RNG distribution given as a string
E0 <- "rbeta(n, shape1=2, shape2=10)"
Another approach is to estimate `E0` from a set of studies. These would ideally be studies of untreated/placebo-treated patients that closely resemble the population for which predictions are desired, and the studies may be observational. However, synthesising results from the placebo arms of trials in the original network is also possible. For this, `E0` is assigned a data frame of studies in long format (one row per study arm) with the variable `studyID` and a selection of `y`, `se`, `r`, `N` and `E` (depending on the likelihood used in the MBNMA model). `synth` can be set to `"fixed"` or `"random"` to indicate whether this synthesis should use fixed or random effects.
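A minimal sketch of this approach, assuming the placebo arms of the triptan dataset are used to inform `E0`:

# Placebo arms from the original dataset (binomial likelihood: studyID, r, N)
placebo.arms <- HF2PPITT[HF2PPITT$dose == 0, c("studyID", "r", "N")]

# Random effects synthesis of the placebo response
pred <- predict(trip.emax, E0 = placebo.arms, synth = "random")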
Additionally, it is necessary to specify the doses at which to predict responses. By default, `predict()` uses the maximum dose within the dataset for each agent and predicts responses at a series of cut points up to that maximum. The number of cut points can be specified using `n.doses`, and the maximum dose to use for prediction for each agent can be specified using `max.doses` (a named list of numeric values in which element names correspond to agent names).
An alternative approach is to predict responses at specific doses for specific agents using the argument `exact.doses`. As with `max.doses`, this is a named list in which element names correspond to agent names, but here each element is a numeric vector of the doses at which to predict a response for the given agent.
# Predict 20 doses for each agent, with a stochastic distribution for E0
doses <- list(Placebo = 0, eletriptan = 3, sumatriptan = 3, almotriptan = 3, zolmitriptan = 3,
naratriptan = 3, rizatriptan = 3)
pred <- predict(trip.emax, E0 = "rbeta(n, shape1=2, shape2=10)", max.doses = doses,
n.dose = 20)
# Predict exact doses for two agents, and estimate E0 from the data
E0.data <- HF2PPITT[HF2PPITT$dose == 0, ]
doses <- list(eletriptan = c(0, 1, 3), sumatriptan = c(0, 3))
pred <- predict(trip.emax, E0 = E0.data, exact.doses = doses)
#> `link` not given by user - set to `logit` based on assigned value for `likelihood`
#> Values for `agent` with dose = 0 have been recoded to `Placebo`
#> agent is being recoded to enforce sequential numbering and allow inclusion of `Placebo`
An object of class `"mbnma.predict"` is returned, which is a list of summary tables and MCMC prediction matrices for each treatment (combination of dose and agent). The `summary()` method can be used to print posterior summaries of the predicted response for each treatment.
summary(pred)
#> agent dose mean sd 2.5% 25% 50%
#> 1 eletriptan 0 0.1239109 0.003266562 0.1176321 0.1217139 0.1238668
#> 2 eletriptan 1 0.4369072 0.019140139 0.4008041 0.4234634 0.4368564
#> 3 eletriptan 3 0.5566435 0.026576460 0.5069840 0.5379896 0.5560398
#> 4 sumatriptan 0 0.1239109 0.003266562 0.1176321 0.1217139 0.1238668
#> 5 sumatriptan 3 0.3873524 0.017030220 0.3559237 0.3754153 0.3864601
#> 75% 97.5%
#> 1 0.1260825 0.1302021
#> 2 0.4503098 0.4742147
#> 3 0.5741813 0.6104255
#> 4 0.1260825 0.1302021
#> 5 0.3989690 0.4219495
Predicted responses can also be plotted using the `plot()` method on an object of `class("mbnma.predict")`. The predicted responses for each agent are joined by a line to form the dose-response curve, with 95% credible intervals (CrIs). Therefore, when plotting the response it is important to predict at a sufficient number of doses (using `n.doses`) to obtain a smooth curve.
# Predict responses using default doses up to the maximum of each agent in the
# dataset
pred <- predict(trip.emax, E0 = 0.2, n.dose = 20)
plot(pred)
Shaded counts of the number of studies in the original dataset that investigate each dose of an agent can be plotted over the 95% CrIs for each treatment by setting `disp.obs = TRUE`, though this requires that the original `"mbnma.network"` object used to estimate the MBNMA be provided via `network`.
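For example (a sketch, assuming `tripnet` is the `"mbnma.network"` object created from `HF2PPITT` earlier in the vignette):

# Overlay shaded counts of observed doses on the predicted dose-response curves
plot(pred, disp.obs = TRUE, network = tripnet)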
This can be used to identify any extrapolation/interpolation of the dose-response relationship that might be occurring for a particular agent. As you can see, more observations typically lead to tighter 95% CrIs for the predicted response at a particular point along the dose-response curve.
We can also plot the results of a “split” Network Meta-Analysis (NMA), in which all doses of an agent are assumed to be independent. As with `disp.obs`, we need to provide the original `mbnma.network` object to estimate this, and we can also specify whether to perform a common or random effects NMA using `method`. Treatments that are only connected to the network via the dose-response relationship (rather than by a direct head-to-head comparison) will not be included.
# Build the network and fit a random effects Emax MBNMA to the alogliptin data
alognet <- mbnma.network(alog_pcfb)
alog.emax <- mbnma.emax(alognet, method = "random")

# Predict responses and overlay the results of a "split" random effects NMA
pred <- predict(alog.emax, E0 = 0, n.dose = 20)
plot(pred, overlay.split = TRUE, method = "random")
By plotting these, as well as observing how responses can be extrapolated/interpolated, we can also see which doses are likely to be providing the most information to the dose-response relationship. The tighter 95% CrIs on the predicted responses from the MBNMA show that modelling the dose-response function gives some additional precision, even at doses for which there is information available.
More detailed documentation can be accessed using `?plot.mbnma.predict`.
Predicted responses from an object of `class("mbnma.predict")` can also be ranked using the `rank()` method. As when applied to an object of `class("mbnma")`, this method will rank parameters (in this case predictions) in order from either highest to lowest (`direction=1`) or lowest to highest (`direction=-1`), and return an object of `class("mbnma.rank")`.
If there have been predictions at dose = 0 for several agents, only one of these will be included in the rankings in order to avoid duplication (since the predicted response at dose = 0 is the same for all agents).
pred <- predict(trip.emax, E0 = 0.2, n.doses = 4, max.doses = list(eletriptan = 5,
sumatriptan = 5, frovatriptan = 5, zolmitriptan = 5))
ranks <- rank(pred)
plot(ranks)
When performing an MBNMA by pooling relative treatment effects, the modelling approach assumes consistency between direct and indirect evidence within a network. This is a very useful assumption, as it allows us to improve the precision of existing direct estimates, or to estimate relative effects between treatments that have not been compared in head-to-head trials, by making use of indirect evidence.
However, if this assumption does not hold it is extremely problematic for inference, so it is important to be able to test it. A number of different approaches exist to allow for this in standard Network Meta-Analysis (NMA) (Dias et al. 2013), but within dose-response MBNMA there is added complexity because the consistency assumption can be conceptualised either for each treatment comparison (combination of dose and agent), or for each agent, where consistency is required for the agent-level parameters governing the dose-response relationship.
Testing for consistency at the agent-level is challenging as there is unlikely to be the required data available to be able to do this - included studies in the dataset must have multiple doses of multiple agents, so that sufficient information is available to estimate dose-response parameters within that study. However, testing for consistency at the treatment-level is possible in MBNMA, and this is described below. In practice, testing for consistency at the treatment-level should suffice, as any inconsistency identified at the treatment level will also translate to inconsistency at the agent level and vice versa [manuscript in progress].
Consistency also depends on the functional form assumed for the dose-response relationship, and so is inextricably linked to model fit of the dose-response relationship. A thorough assessment of the validity of the fitted model is therefore important to be confident that the resulting treatment effect estimates provide a firm basis for decision making.
When meta-analysing dose-response studies, the potential for inconsistency testing may actually be reasonably rare, as most (if not all) trials will be multi-arm placebo-controlled. Since each study is internally consistent (the relative effects within the trial will always adhere to consistency relationships), there will be no closed loops of treatments that are informed by independent sources of evidence.
Another approach for consistency checking is node-splitting. This splits contributions for a particular treatment comparison into direct and indirect evidence, and the two can then be compared to test their similarity (Valkenhoef et al. 2016). Node-splitting in dose-response MBNMA is an extension of this method, as indirect evidence contributions can be calculated incorporating the dose-response function. `mbnma.nodesplit()` takes similar arguments to `mbnma.run()`, and returns an object of `class("nodesplit")`.
In addition to these, the argument `comparisons` can be used to indicate which treatment comparisons to perform a node-split on. If left as `NULL` (the default), node-splits will automatically be performed in all closed loops of treatments in which comparisons are informed by independent sources of evidence. This is somewhat similar to the function `gemtc::mtc.nodesplit.comparisons()`, but uses a fixed network reference treatment and therefore ensures differences between direct and indirect evidence are parameterised as inconsistency rather than as heterogeneity (Dias et al. 2013). However, it also allows indirect evidence to be informed via the dose-response relationship even if there is no pathway of evidence between the treatments, which can in fact lead to additional potentially inconsistent loops. To incorporate indirect evidence in this way, `incldr=TRUE` can be set in `inconsistency.loops()`; this is the default when using `mbnma.nodesplit()`.
The complexity of the dose-response relationship fitted and the amount of indirect evidence available will also affect the number of comparisons on which node-splitting is possible [manuscript in progress]. If there is only limited indirect dose-response information for a given comparison (e.g. only two doses available for the agents in the comparison), then only a simpler dose-response function (e.g. exponential) can be fitted. The values given in `inconsistency.loops()$path` give an indication of the number of doses available for each comparison. For example, `drparams 3 4` would indicate that the indirect evidence is estimated only via the dose-response relationship, and that within the indirect evidence there are three doses available for estimating the dose-response of the agent in `t1` of the comparison and four doses available for estimating the dose-response of the agent in `t2` of the comparison. This means that a three-parameter dose-response function would be the most complex function that could be used when node-splitting this comparison.
As several models have to be run for each closed loop of treatments, node-splitting can take some time to run, and it is therefore not shown for the purposes of this vignette.
# Using the psoriasis dataset (>75% improvement in PASI score)
psoriasis$r <- psoriasis$r75
psorinet <- mbnma.network(psoriasis)
# Identify comparisons on which node-splitting is possible
splitcomps <- inconsistency.loops(psorinet$data.ab, incldr = TRUE)
print(splitcomps)
# If we want to fit an Emax dose-response function, there is insufficient
# indirect evidence in all but the first 6 comparisons
nodesplit <- mbnma.nodesplit(psorinet, fun = "emax", comparisons = splitcomps[1:6,
], method = "common")
Using the `print()` method on an object of `class("nodesplit")` prints a summary of the node-split results to the console, whilst the `summary()` method will return a data frame of posterior summaries for direct and indirect estimates for each split treatment comparison.
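For example, with the node-split object defined above (not run here, for the reasons given):

# Print a summary of the node-split results to the console
print(nodesplit)

# Data frame of posterior summaries for direct and indirect estimates
summary(nodesplit)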
The nodesplit object itself is a list with results for each treatment comparison that has been split. There is a lot of information within the results, but the most useful (and easily interpretable) elements are:

- `p.values`: the Bayesian p-value for the posterior overlap between direct and indirect estimates
- `quantiles`: the median and 95% CrI of the posterior distributions for direct and indirect evidence, and for the difference between them
- `forest.plot`: a forest plot that shows the median and 95% CrI for direct and indirect estimates
- `density.plot`: a plot that shows the posterior distributions for direct and indirect estimates

It is possible to generate different plots of each node-split comparison using `plot()`, as sketched below.
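The `plot.type` values used here ("forest" and "density") are assumptions matching the elements listed above:

# Forest plot of direct and indirect estimates for each split comparison
plot(nodesplit, plot.type = "forest")

# Posterior density plot of direct and indirect estimates
plot(nodesplit, plot.type = "density")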
`MBNMAdose` provides a complete set of functions for performing dose-response MBNMA, model checking, prediction, and plotting of a number of informative graphics. By modelling a dose-response relationship within the network meta-analysis framework, this method can help connect networks of evidence that might otherwise be disconnected, allow extrapolation and interpolation of the dose-response relationship, and improve precision on predictions and relative effects between agents.
The package allows a range of dose-response functions (as well as the possibility to incorporate user-defined functions) and facilitates model specification in a way which allows users to make additional modelling assumptions to help identify parameters.
Dias, S., N. J. Welton, A. J. Sutton, D. M. Caldwell, G. Lu, and A. E. Ades. 2013. “Evidence Synthesis for Decision Making 4: Inconsistency in Networks of Evidence Based on Randomized Controlled Trials.” Journal Article. Med Decis Making 33 (5): 641–56. https://doi.org/10.1177/0272989X12455847.
Furukawa, T. A., A. Cipriani, P. J. Cowen, S. Leucht, M. Egger, and G. Salanti. 2019. “Optimal Dose of Selective Serotonin Reuptake Inhibitors, Venlafaxine, and Mirtazapine in Major Depression: A Systematic Review and Dose-Response Meta-Analysis.” Journal Article. Lancet Psychiatry 6: 601–9.
Gelman, Andrew, Jessica Hwang, and Aki Vehtari. 2014. “Understanding Predictive Information Criteria for Bayesian Models.” Journal Article. Statistics and Computing 24 (6): 997–1016. https://doi.org/10.1007/s11222-013-9416-2.
Hamza, T., A. Cipriani, T. A. Furukawa, M. Egger, and G. Orsini N. Salanti. 2020. “A Bayesian Dose-Response Meta-Analysis Model: Simulation Study and Application.” https://arxiv.org/abs/2004.12737v1.
JAGS Computer Program. 2017. http://mcmc-jags.sourceforge.net/.
Langford, O., J. K. Aronson, G. van Valkenhoef, and R. J. Stevens. 2016. “Methods for Meta-Analysis of Pharmacodynamic Dose-Response Data with Application to Multi-Arm Studies of Alogliptin.” Journal Article. Stat Methods Med Res. https://doi.org/10.1177/0962280216637093.
Lu, G., and A. E. Ades. 2004. “Combination of Direct and Indirect Evidence in Mixed Treatment Comparisons.” Journal Article. Stat Med 23 (20): 3105–24. https://doi.org/10.1002/sim.1875.
Mawdsley, D., M. Bennetts, S. Dias, M. Boucher, and N. J. Welton. 2016. “Model-Based Network Meta-Analysis: A Framework for Evidence Synthesis of Clinical Trial Data.” Journal Article. CPT Pharmacometrics Syst Pharmacol 5 (8): 393–401. https://doi.org/10.1002/psp4.12091.
Owen, R. K., D. G. Tincello, and R. A. Keith. 2015. “Network Meta-Analysis: Development of a Three-Level Hierarchical Modeling Approach Incorporating Dose-Related Constraints.” Journal Article. Value Health 18 (1): 116–26. https://doi.org/10.1016/j.jval.2014.10.006.
Pedder, H., S. Dias, M. Bennetts, M. Boucher, and N. J. Welton. 2019. “Modelling Time-Course Relationships with Multiple Treatments: Model-Based Network Meta-Analysis for Continuous Summary Outcomes.” Journal Article. Res Synth Methods 10 (2): 267–86.
Plummer, M. 2008. “Penalized Loss Functions for Bayesian Model Comparison.” Journal Article. Biostatistics 9 (3): 523–39. https://pubmed.ncbi.nlm.nih.gov/18209015/.
Spiegelhalter, D. J., N. G. Best, B. P. Carlin, and A. van der Linde. 2002. “Bayesian Measures of Model Complexity and Fit.” Journal Article. J R Statist Soc B 64 (4): 583–639.
Thorlund, K., E. J. Mills, P. Wu, E. P. Ramos, A. Chatterjee, E. Druyts, and P. J. Goadsby. 2014. “Comparative Efficacy of Triptans for the Abortive Treatment of Migraine: A Multiple Treatment Comparison Meta-Analysis.” Journal Article. Cephalalgia. https://doi.org/10.1177/0333102413508661.
Valkenhoef, G. van, S. Dias, A. E. Ades, and N. J. Welton. 2016. “Automated Generation of Node-Splitting Models for Assessment of Inconsistency in Network Meta-Analysis.” Journal Article. Res Synth Methods 7 (1): 80–93. https://doi.org/10.1002/jrsm.1167.
Warren, R. B., M. Gooderham, R. Burge, B. Zhu, D. Amato, K. H. Liu, D. Shrom, J. Guo, A. Brnabic, and A. Blauvelt. 2019. “Comparison of Cumulative Clinical Benefits of Biologics for the Treatment of Psoriasis over 16 Weeks: Results from a Network Meta-Analysis.” Journal Article. J Am Acad Dermatol 82 (5): 1138–49.