# Introduction

The getting started vignette illustrated the basic features of the brada package. In this vignette, we illustrate how to monitor a running trial with the brada package.

Note that there are further vignettes which illustrate
• how to apply and calibrate the predictive evidence value design with the brada package; this vignette is hosted at the Open Science Foundation;
• how to monitor a running clinical trial with a binary endpoint by means of the brada package.

# Monitoring a trial

To apply the package, first load it:

library(brada)

Monitoring a trial with the brada package is straightforward via the monitor function. Suppose we have analyzed and calibrated a design according to our requirements and ended up with the following design:

design <- brada(Nmax = 30, batchsize = 5, nInit = 10,
                p_true = 0.4, p0 = 0.4, p1 = 0.4,
                nsim = 100,
                theta_T = 0.90, theta_L = 0.1, theta_U = 1,
                method = "PP",
                cores = 2)

Now, suppose the trial is performed and the first ten patients show the response pattern $$(0,1,0,0,0,0,0,1,0,0)$$, where $$1$$ encodes a response and $$0$$ no response. Thus, there are $$2$$ responses out of nInit=10 observations. To check whether the trial can be stopped for futility or efficacy based on theta_L=0.1 and theta_U=1, we run the monitor function as follows:

monitor(design, obs = c(0,1,0,0,0,0,0,1,0,0))
## --------- BRADA TRIAL MONITORING ---------
## Primary endpoint: binary
## Test of H_0: p <= 0.4 against H_1: p > 0.4
## Trial design: Predictive probability design
## Maximum sample size: 30
## First interim analysis at: 10
## Interim analyses after each 5 observations
## Last interim analysis at: 25 observations
## -----------------------------------------
## Current trial size: 10 patients
## --------------- RESULTS -----------------
## Predictive probability of trial success: 0.00768
##  Futility threshold: 0.1
## Decision: Stop for futility

Thus, the results indicate that we should stop for futility. This agrees with intuition: $$2$$ responses out of $$10$$ observations are quite unlikely if $$H_1:p>0.4$$ were true.
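The reported predictive probability can also be reproduced from first principles. The sketch below is not taken from brada's source code; it assumes a uniform Beta(1, 1) prior on the response rate $$p$$ and defines trial success as $$P(p > p_0 \mid \text{all } N_{max} \text{ observations}) > \theta_T$$, the predictive probability criterion of Lee and Liu (2008):

```r
# Recompute the predictive probability of trial success by hand.
# Assumptions (not taken from brada's internals): a uniform Beta(1, 1)
# prior on p, and "trial success" meaning P(p > p0 | data at Nmax) > theta_T.
p0 <- 0.4; theta_T <- 0.90
Nmax <- 30; n <- 10; x <- 2     # 2 responses among the first 10 patients
a <- 1 + x; b <- 1 + (n - x)    # posterior after n patients: Beta(3, 9)
m <- Nmax - n                   # 20 patients still to be enrolled
y <- 0:m                        # possible numbers of future responses

# Beta-binomial predictive pmf of y under the current posterior
pmf <- choose(m, y) * beta(a + y, b + m - y) / beta(a, b)

# Which final outcomes would declare success at Nmax?
success <- 1 - pbeta(p0, a + y, b + m - y) > theta_T

PP <- sum(pmf[success])
PP  # approximately 0.00768, matching the value reported by monitor()
```

Under these assumptions, only final outcomes with at least $$14$$ responses among all $$30$$ patients would clear the threshold theta_T, and starting from $$2/10$$ responses such outcomes are so improbable that the predictive probability falls below theta_L=0.1.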

Note that the values of the p_true and nsim arguments in the brada call which returned the object design are irrelevant for monitoring. We could just as well have simulated data under p_true=0.2 and nsim=3000 or any other values: the monitor function only takes the brada object and applies the design specified in its method argument, in this case the predictive probability design. All necessary arguments are identified automatically by the monitor function. The predictive evidence value design can be monitored analogously; for details on that design and its calibration, see the vignette hosted at the Open Science Foundation.
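As a sketch of this point, the same interim data could be monitored with a design object generated under different simulation settings; the calls below mirror those above with p_true and nsim changed, and the console output is omitted:

```r
# Same design parameters, but data simulated under p_true = 0.2
# with nsim = 3000 replications.
design2 <- brada(Nmax = 30, batchsize = 5, nInit = 10,
                 p_true = 0.2, p0 = 0.4, p1 = 0.4,
                 nsim = 3000,
                 theta_T = 0.90, theta_L = 0.1, theta_U = 1,
                 method = "PP",
                 cores = 2)

# monitor() reads only the design parameters (Nmax, p0, theta_T,
# theta_L, theta_U, method, ...) from the object, so the interim
# decision is the same as before: stop for futility.
monitor(design2, obs = c(0,1,0,0,0,0,0,1,0,0))
```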