The goal of `kldest` is to estimate Kullback-Leibler (KL) divergence \(D_{KL}(P||Q)\) between two probability distributions \(P\) and \(Q\) based on:

- a sample \(x_1,...,x_n\) from \(P\) and the probability density \(q\) of \(Q\), or
- samples \(x_1,...,x_n\) from \(P\) and \(y_1,...,y_m\) from \(Q\).

The distributions \(P\) and \(Q\) may be uni- or multivariate, and they may be discrete, continuous or mixed discrete/continuous.

For continuous distributions, different estimation algorithms are provided, based on either nearest neighbour density estimation or kernel density estimation. Confidence intervals for KL divergence can also be computed, either via subsampling (preferred) or via bootstrapping.

You can install kldest from CRAN:
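
```
install.packages("kldest")
```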

Alternatively, you can install the development version of kldest from GitHub with:
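
The usual pattern is via `devtools`; the repository path below is an assumption, so check the package's GitHub page for the exact location.

```
# install.packages("devtools")
devtools::install_github("niklhart/kldest")  # repository path assumed
```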

KL divergence estimation based on nearest neighbour density estimates is the most flexible approach.

Set a seed for reproducibility:
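
```
library(kldest)
set.seed(0)  # any fixed seed works; this value is arbitrary
```

The examples below need a concrete pair of distributions. As an illustration (the specific parameters and sample size are assumptions chosen for this sketch, not prescribed by the package), take \(P = N(0,1)\) and \(Q = N(1,2^2)\), draw a sample from each, and keep the density of \(Q\):

```
X <- rnorm(100)                              # sample from P = N(0,1)
Y <- rnorm(100, mean = 1, sd = 2)            # sample from Q = N(1,4)
q <- function(x) dnorm(x, mean = 1, sd = 2)  # density of Q
```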

Analytical KL divergence:
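
A sketch using `kld_gaussian()` with the example parameters assumed above; `sigma1` and `sigma2` are taken to be variances here, consistent with the covariance matrices in the bivariate call further below:

```
kld_gaussian(mu1 = 0, sigma1 = 1, mu2 = 1, sigma2 = 2^2)
# closed form: log(2) + (1 + 1)/(2 * 4) - 1/2, approximately 0.4431
```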

Estimate based on two samples from these Gaussians:
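
Assuming `kld_est_nn()` as the nearest neighbour estimator, a two-sample call looks like this:

```
kld_est_nn(X, Y)  # value depends on the random samples drawn above
```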

Estimate based on a sample from the first Gaussian and the density of the second:
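
The same estimator, sketched with a sample from \(P\) and the density `q` of \(Q\); the `X, q = q` argument pattern mirrors the subsampling call below:

```
kld_est_nn(X, q = q)
```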

Uncertainty quantification via subsampling:

```
kld_ci_subsampling(X, q = q)
#> $est
#> [1] 0.6374628
#>
#> $ci
#>      2.5%     97.5%
#> 0.2601375 0.9008446
```

Analytical KL divergence between an uncorrelated and a correlated Gaussian:

```
kld_gaussian(mu1 = rep(0,2), sigma1 = diag(2),
             mu2 = rep(0,2), sigma2 = matrix(c(1,1,1,2),nrow=2))
#> [1] 0.5
```

Estimate based on two samples from these Gaussians:
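
A sketch of how such samples could be drawn and compared; the sample size and the use of `MASS::mvrnorm()` for multivariate normal sampling are assumptions made for this example:

```
X <- MASS::mvrnorm(100, mu = rep(0,2), Sigma = diag(2))
Y <- MASS::mvrnorm(100, mu = rep(0,2), Sigma = matrix(c(1,1,1,2), nrow = 2))

kld_est_nn(X, Y)  # nearest neighbour estimate; value depends on the samples
```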