This vignette gives a first and very brief overview of how the package **JointAI** can be used. The different settings and options are explained in more depth in the help pages and other vignettes.

Here, we use the NHANES data that are part of the **JointAI** package. For more information on these data, check the help file for the NHANES data, go to the web page of the National Health and Nutrition Examination Survey (NHANES), and check out the vignette *Visualizing Incomplete Data*, in which the NHANES data are explored.

Fitting a linear regression model with **JointAI** is straightforward with the function `lm_imp()`:

```
lm1 <- lm_imp(SBP ~ gender + age + race + WC + alc + educ + albu + bili,
              data = NHANES, n.iter = 500, progress.bar = 'none')
```

The specification of `lm_imp()` is similar to the specification of a linear regression model for complete data using `lm()`. In this minimal example, the only difference is that for `lm_imp()` the number of iterations, `n.iter`, has to be specified. Of course, there are many more parameters that can or should be specified. Many of these parameters are explained in detail in the vignette *Model Specification*.

`n.iter` specifies the length of the Markov chain, i.e., the number of draws from the posterior distribution of a parameter or unobserved value. How many iterations are necessary depends on the data and the complexity of the model, and can vary from as few as 100 up to several thousand or even millions.

One important criterion is that the Markov chains need to have converged. This can be evaluated visually with a traceplot.

The function `traceplot()` produces a plot of the sampled values across iterations for each parameter. By default, three^{1} Markov chains are produced per parameter and represented by different colors.
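Such a plot can be requested directly on the fitted model object; a minimal sketch, assuming the model `lm1` from above and that the **JointAI** package is loaded (the resulting figure is not shown here):

```
# visual convergence check: one panel per parameter,
# sampled values plotted against the iteration number
traceplot(lm1)
```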

When the sampler has converged, the chains show a horizontal band, as in the above figure. Consequently, when traces show a trend, convergence has not been reached and more iterations are necessary (e.g., using `add_samples()`).
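For instance, if the traces still show a trend, additional iterations can be added to the existing chains; a hedged sketch, assuming `n.iter` in `add_samples()` gives the number of *additional* iterations (see its help page for details):

```
# continue sampling for another 500 iterations from where
# the sampler stopped, then re-check convergence visually
lm1 <- add_samples(lm1, n.iter = 500)
traceplot(lm1)
```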

When convergence has been achieved, we can obtain the result of the model from the model summary.

Results from a `JointAI` model can be printed using `summary()`:

```
summary(lm1)
#>
#> Linear model fitted with JointAI
#>
#> Call:
#> lm_imp(formula = SBP ~ gender + age + race + WC + alc + educ +
#> albu + bili, data = NHANES, n.iter = 500, progress.bar = "none")
#>
#> Posterior summary:
#> Mean SD 2.5% 97.5% tail-prob. GR-crit
#> (Intercept) 61.281 22.8684 16.815 106.044 0.0040 1.02
#> genderfemale -3.072 2.1682 -7.374 1.262 0.1560 1.01
#> age 0.364 0.0724 0.225 0.507 0.0000 1.01
#> raceOther Hispanic 0.473 4.8900 -8.904 10.202 0.9160 1.00
#> raceNon-Hispanic White -1.438 3.0539 -7.317 4.292 0.6467 1.00
#> raceNon-Hispanic Black 8.905 3.5577 2.023 15.753 0.0147 1.00
#> raceother 3.819 3.4207 -2.825 10.936 0.2587 1.00
#> educhigh -3.497 2.1855 -7.923 0.823 0.1147 1.01
#> WC 0.239 0.0841 0.084 0.403 0.0120 1.01
#> albu 5.153 4.0234 -2.878 12.767 0.2093 1.02
#> bili -5.442 4.9888 -15.097 4.152 0.2827 1.03
#> alc>=1 7.343 2.2860 3.023 11.849 0.0040 1.01
#>
#> Posterior summary of residual std. deviation:
#> Mean SD 2.5% 97.5% GR-crit
#> sigma_SBP 13.2 0.748 11.8 14.8 1.01
#>
#>
#> MCMC settings:
#> Iterations = 101:600
#> Sample size per chain = 500
#> Thinning interval = 1
#> Number of chains = 3
#>
#> Number of observations: 186
```

The output gives the posterior summary, i.e., the summary of the MCMC (Markov Chain Monte Carlo) sample (which consists of all chains combined).

By default, `summary()` will only print the posterior summary for the main model parameters of the analysis model. How to select which parameters are shown is described in the vignette *Selecting Parameters*.

The summary consists of the posterior mean, the standard deviation and the 2.5% and 97.5% quantiles of the MCMC sample, the tail probability and the Gelman-Rubin criterion for convergence. The tail probability is a measure of how likely the value 0 is under the estimated posterior distribution, and is calculated as \[2\times\min\left\{Pr(\theta > 0), Pr(\theta < 0)\right\}\] (where \(\theta\) is the parameter of interest).
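The calculation of the tail probability can be illustrated in plain R. This is a sketch with a small hypothetical sample standing in for the MCMC draws of one parameter:

```
# hypothetical MCMC sample of a parameter theta
theta <- c(-0.2, 0.5, 1.1, 0.8, -0.1, 0.9, 1.3, 0.4, 0.7, 1.0)

# tail probability: 2 * min(Pr(theta > 0), Pr(theta < 0)),
# with the probabilities estimated by the sample proportions
tail_prob <- 2 * min(mean(theta > 0), mean(theta < 0))
tail_prob  # 2 * min(0.8, 0.2) = 0.4
```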

In the following graphics, the shaded areas represent the minimum of \(Pr(\theta > 0)\) and \(Pr(\theta < 0)\):

The Gelman-Rubin^{2} criterion, also available via the function `GR_crit()`, compares the within- and between-chain variation. When it is close enough to 1^{3}, the chains can be assumed to have converged.

Additionally, some important characteristics of the MCMC samples on which the summary is based are given. This includes the range and number of iterations (= `Sample size per chain`), the thinning interval and the number of chains.

Furthermore, the number of observations (the sample size of the data) is given.

With the arguments `start`, `end` and `thin` it is possible to select which iterations from the MCMC sample are included in the summary.

For example: when the traceplot shows that the chains only converged after 1500 iterations, `start = 1500` should be specified in `summary()`.
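In code, this could look as follows; the iteration number is hypothetical and only applies to a model that was run for at least that many iterations:

```
# discard the first 1499 iterations as burn-in
# when computing the posterior summary
summary(lm1, start = 1500)
```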

The posterior distributions can be visualized using the function `densplot()`:

By default, `densplot()` plots the empirical distribution of each of the chains separately. When `joined = TRUE`, the distributions of the combined chains are plotted.
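A short sketch of both variants, again assuming the model `lm1` from above (plots not shown here):

```
# posterior density estimates, one curve per chain
densplot(lm1)

# posterior density estimates for all chains combined
densplot(lm1, joined = TRUE)
```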

Besides linear regression models, it is also possible to fit

* **generalized linear models**: `glm_imp()` (follows the specification of `glm()`),
* **linear mixed models**: `lme_imp()` (follows the specification of `lme()` from the package **nlme**),
* **generalized linear mixed models**: `glme_imp()` (analogous to the specification used in `lme_imp()` and `glm_imp()`), and
* **parametric (Weibull) survival models**: `survreg_imp()` (follows the specification of `survreg()` from the package **survival**).
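For example, a logistic regression for the binary alcohol consumption variable could be specified analogously to `glm()`; this is a sketch only, and the choice of covariates is purely for illustration:

```
# generalized linear model with incomplete covariates,
# specified like glm(), with n.iter added as in lm_imp()
glm1 <- glm_imp(alc ~ gender + age + WC, data = NHANES,
                family = binomial(), n.iter = 500,
                progress.bar = 'none')
```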

^{1} The number of chains can be changed with the argument `n.chain`.

^{2} Gelman, A. and Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. *Statistical Science*, 7, 457-511. Brooks, S.P. and Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. *Journal of Computational and Graphical Statistics*, 7, 434-455.

^{3} For example < 1.1; however, this is not a generally accepted cut-off.