# Introduction

This vignette demonstrates how to improve the Monte Carlo sampling accuracy of leave-one-out cross-validation with the loo package and Stan. The loo package automatically monitors the sampling accuracy using Pareto $k$ diagnostics for each observation. Here, we present a method for quickly improving the accuracy when the Pareto $k$ diagnostics indicate problems. This is done by performing some additional computations using the existing posterior sample. If successful, this decreases the Pareto $k$ values, making the model assessment more reliable. loo also stores the original Pareto $k$ values under the name influence_pareto_k; these are left unchanged and can be used to diagnose how much each observation influences the posterior distribution.

The methodology presented is based on the following paper

• Paananen, T., Piironen, J., Bürkner, P.-C., and Vehtari, A. (2020). Implicitly Adaptive Importance Sampling. arXiv preprint arXiv:1906.08850.

More information about the Pareto $k$ diagnostics is given in the following papers

• Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4.

• Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2019). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.

We will use the same example as in the vignette Using the loo package (version >= 2.0.0); see that vignette for a description of the problem and data. We will also use the same Poisson regression model.

## Coding the Stan model

Here is the Stan code for fitting the Poisson regression model, which we will use for modeling the number of roaches.

stancode <- "
data {
  int<lower=1> K;
  int<lower=1> N;
  matrix[N, K] x;
  int y[N];
  vector[N] offset;

  real beta_prior_scale;
  real alpha_prior_scale;
}
parameters {
  vector[K] beta;
  real intercept;
}
model {
  y ~ poisson(exp(x * beta + intercept + offset));
  beta ~ normal(0, beta_prior_scale);
  intercept ~ normal(0, alpha_prior_scale);
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N)
    log_lik[n] = poisson_lpmf(y[n] | exp(x[n] * beta + intercept + offset[n]));
}
"

Following the usual approach recommended in Writing Stan programs for use with the loo package, we compute the log-likelihood for each observation in the generated quantities block of the Stan program.
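Because the pointwise log-likelihood lives in the generated quantities block, it can later be pulled out of the fitted object with loo::extract_log_lik() and passed to loo() directly. A minimal sketch, assuming a fitted stanfit object named `fit` (the object name `loo_from_matrix` is just for illustration):

```r
library("loo")

# extract the pointwise log-likelihood as an iterations x chains x N array
ll <- extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)

# relative effective sample sizes, needed for the PSIS diagnostics
r_eff <- relative_eff(exp(ll))

loo_from_matrix <- loo(ll, r_eff = r_eff)
print(loo_from_matrix)
```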

## Setup

In addition to loo, we load the rstan package for fitting the model; the data come from the rstanarm package.

library("rstan")
library("loo")
seed <- 9547
set.seed(seed)

## Fitting the model with RStan

Next we fit the model in Stan using the rstan package:

# Prepare data
data(roaches, package = "rstanarm")
roaches$roach1 <- sqrt(roaches$roach1)
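The remaining data preparation and model fitting follow the Using the loo package vignette. A minimal sketch is below; the column choices for x, the log-exposure offset, and the prior scales are assumptions based on the roaches example, and `loo1` is computed with rstan's loo method for stanfit objects:

```r
library("rstan")
library("loo")

# assemble the data list matching the Stan program's data block
standata <- list(
  N = nrow(roaches),
  K = 3,
  x = as.matrix(roaches[, c("roach1", "treatment", "senior")]),
  y = roaches$y,
  offset = log(roaches$exposure2),
  beta_prior_scale = 2.5,
  alpha_prior_scale = 5.0
)

# fit the model and compute the initial loo object
fit <- stan(model_code = stancode, data = standata, seed = seed, refresh = 0)
loo1 <- loo(fit)
```

The objects `fit` and `loo1` are the ones passed to loo_moment_match() later in this vignette.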

# Using loo_moment_match() directly

The moment matching can also be performed by calling the function loo_moment_match() explicitly. This enables its use also with models that were not fit with rstan or another package with built-in support for loo_moment_match(). To use loo_moment_match(), the user must pass the model object x, the loo object, and five helper functions as arguments. The helper functions are

• post_draws: A function that takes x as the first argument and returns a matrix of posterior draws of the model parameters, pars.
• log_lik_i: A function that takes x and i and returns a matrix (one column per chain) or a vector (all chains stacked) of log-likelihood draws for the ith observation based on the model x. If the draws are obtained using MCMC, the matrix with MCMC chains separated is preferred.
• unconstrain_pars: A function that takes arguments x and pars, and returns posterior draws on the unconstrained space based on the posterior draws on the constrained space passed via pars.
• log_prob_upars: A function that takes arguments x and upars, and returns a matrix of log-posterior density values of the unconstrained posterior draws passed via upars.
• log_lik_i_upars: A function that takes arguments x, upars, and i and returns a vector of log-likelihood draws of the ith observation based on the unconstrained posterior draws passed via upars.

Next, we show what the helper functions look like for RStan, and give an example of using loo_moment_match() directly. For stanfit objects from the rstan package, the functions look like this:

# create a named list of draws for use with rstan methods
.rstan_relist <- function(x, skeleton) {
  out <- utils::relist(x, skeleton)
  for (i in seq_along(skeleton)) {
    dim(out[[i]]) <- dim(skeleton[[i]])
  }
  out
}

# rstan helper function to get dims of parameters right
.create_skeleton <- function(pars, dims) {
  out <- lapply(seq_along(pars), function(i) {
    len_dims <- length(dims[[i]])
    if (len_dims < 1) return(0)
    return(array(0, dim = dims[[i]]))
  })
  names(out) <- pars
  out
}
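To illustrate what these two helpers do, here is a small self-contained example with made-up parameter names and dimensions (the helper definitions are repeated so the snippet runs on its own):

```r
# helpers repeated from above so this snippet is self-contained
.rstan_relist <- function(x, skeleton) {
  out <- utils::relist(x, skeleton)
  for (i in seq_along(skeleton)) {
    dim(out[[i]]) <- dim(skeleton[[i]])
  }
  out
}
.create_skeleton <- function(pars, dims) {
  out <- lapply(seq_along(pars), function(i) {
    len_dims <- length(dims[[i]])
    if (len_dims < 1) return(0)
    return(array(0, dim = dims[[i]]))
  })
  names(out) <- pars
  out
}

# a toy model with a length-3 vector `beta` and a scalar `intercept`
skel <- .create_skeleton(c("beta", "intercept"),
                         list(beta = 3L, intercept = integer(0)))

# relist one flattened posterior draw back into named parameters
draw <- c(0.1, 0.2, 0.3, -1.5)
relisted <- .rstan_relist(draw, skel)
relisted$beta       # 0.1 0.2 0.3
relisted$intercept  # -1.5
```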

# extract original posterior draws
post_draws_stanfit <- function(x, ...) {
  as.matrix(x)
}

# compute a matrix of log-likelihood values for the ith observation
# matrix contains information about the number of MCMC chains
log_lik_i_stanfit <- function(x, i, parameter_name = "log_lik", ...) {
  loo::extract_log_lik(x, parameter_name, merge_chains = FALSE)[, , i]
}

# transform parameters to the unconstrained space
unconstrain_pars_stanfit <- function(x, pars, ...) {
  skeleton <- .create_skeleton(x@sim$pars_oi, x@par_dims[x@sim$pars_oi])
  upars <- apply(pars, 1, FUN = function(theta) {
    rstan::unconstrain_pars(x, .rstan_relist(theta, skeleton))
  })
  # for one parameter models
  if (is.null(dim(upars))) {
    dim(upars) <- c(1, length(upars))
  }
  t(upars)
}

# compute log_prob for each posterior draw on the unconstrained space
log_prob_upars_stanfit <- function(x, upars, ...) {
  apply(upars, 1, rstan::log_prob, object = x,
        adjust_transform = TRUE, gradient = FALSE)
}

# compute log_lik values based on the unconstrained parameters
log_lik_i_upars_stanfit <- function(x, upars, i, parameter_name = "log_lik",
                                    ...) {
  S <- nrow(upars)
  out <- numeric(S)
  for (s in seq_len(S)) {
    out[s] <- rstan::constrain_pars(x, upars = upars[s, ])[[parameter_name]][i]
  }
  out
}

Using these functions, we can call loo_moment_match() to update the existing loo object.

loo3 <- loo::loo_moment_match.default(
  x = fit,
  loo = loo1,
  post_draws = post_draws_stanfit,
  log_lik_i = log_lik_i_stanfit,
  unconstrain_pars = unconstrain_pars_stanfit,
  log_prob_upars = log_prob_upars_stanfit,
  log_lik_i_upars = log_lik_i_upars_stanfit
)
Warning: Some Pareto k diagnostic values are slightly high. See help('pareto-k-diagnostic') for details.
loo3

Computed from 4000 by 262 log-likelihood matrix

         Estimate      SE
elpd_loo  -5478.5   700.2
p_loo       253.5    57.5
looic     10957.1  1400.3
------
Monte Carlo SE of elpd_loo is 0.4.

Pareto k diagnostic values:
                         Count Pct.    Min. n_eff
(-Inf, 0.5]   (good)     250   95.4%   127
 (0.5, 0.7]   (ok)        12    4.6%    65
   (0.7, 1]   (bad)        0    0.0%   <NA>
   (1, Inf)   (very bad)   0    0.0%   <NA>

All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.

As expected, the result is identical to the previous result of loo2 <- loo(fit, moment_match = TRUE).
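One quick way to check this programmatically, assuming the loo2 object from that earlier loo(fit, moment_match = TRUE) call is still in the workspace:

```r
# the summary estimates of the two loo objects should agree
all.equal(loo2$estimates, loo3$estimates)
```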

# References

Gelman, A., and Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.

Stan Development Team (2020). RStan: the R interface to Stan. R package version 2.21.1. https://mc-stan.org

Paananen, T., Piironen, J., Bürkner, P.-C., and Vehtari, A. (2020). Implicitly Adaptive Importance Sampling. arXiv preprint arXiv:1906.08850.

Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4.

Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2019). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.