Worst Value-at-Risk under Known Margins

Marius Hofert

2018-11-11

library(qrmtools)
library(copula)
library(combinat) # for permn()
library(sfsmisc) # for eaxis()
doPDF <- FALSE

1 Homogeneous case

We start by considering the following setup in the homogeneous case, that is, when all marginal distributions are equal.

qF2 <- function(p, th = 2) qPar(p, shape = th) # Par(2) quantile function
pF2 <- function(q, th = 2) pPar(q, shape = th) # Par(2) distribution function
dim <- 8 # variable dimension (we use 8 or 100 here)

1.1 Checks for method = “dual”

Investigate the helper function \(h(s,t)\) (the function for the inner root-finding to compute \(D(s)\); see dual_bound()).

As we know, \(h(s,s/d) = 0\). We also see that \(s\) has to be sufficiently large in order to find a root \(h(s,t) = 0\) for \(t < s/d\).
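As a cross-check of the outer root-finding, the dual bound can also be evaluated without any root-finding at all. The following is a sketch (not the package code) assuming the Embrechts–Puccetti dual-bound representation \(D(s) = \min_{0\le t< s/d} \frac{d}{s-dt}\int_t^{s-(d-1)t}\bar F(x)\,dx\); the function name `D` is ours, and for Par(2) margins the integral of \(\bar F(x) = (1+x)^{-2}\) is available in closed form.

```r
## Sketch: evaluate the dual bound D(s) by direct numerical minimization over t
## (instead of the inner root-finding in dual_bound()); Par(2) margins assumed,
## for which int_t^u (1+x)^{-2} dx = 1/(1+t) - 1/(1+u)
D <- function(s, d = 8) {
    obj <- function(t) {
        u <- s - (d-1)*t # upper integration limit
        d * (1/(1+t) - 1/(1+u)) / (s - d*t)
    }
    optimize(obj, interval = c(0, s/d))$objective
}
D(100) # lies in (0,1) and decreases in s
```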

Now let’s plot the dual bound \(D(s)\) for various \(\theta\) (this checks the outer root-finding).

1.2 Checks for method = “Wang”/“Wang.Par”

1.2.1 Check of auxiliary functions with numerical integration (for \(\theta = 2\))

Check Wang_h_aux().

Check the objective function \(h(c)\) (that is, Wang_h() with numerical integration) on its domain. Note that \(h\) is used to determine the root in the open interval \((0,(1-\alpha)/d)\) for computing worst value-at-risk.

Check the objective function \(h\) at the endpoints of its domain.

## [1] -Inf    0

\(-\infty\) is not a problem for root finding (for \(\theta > 1\); for \(\theta \le 1\) it is NaN, see below, and thus a problem!), but the \(0\) at the right endpoint is.
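The behaviour at the left endpoint can be reproduced with a small self-contained version of the objective (a sketch mirroring the \(h\) inside VaR_hom_Par() defined further below; \(\theta \neq 1\)):

```r
## Self-contained version of the objective h(c) on (0, (1-alpha)/d) for
## Par(shape) margins with shape != 1 (mirrors h() in VaR_hom_Par() below)
h_Par <- function(c, alpha = 0.99, d = 8, shape = 2) {
    qF <- function(p) (1 - p)^(-1/shape) - 1 # Par(shape) quantile function
    a <- alpha + (d-1)*c
    b <- 1 - c
    Ibar <- (shape/(1-shape)) * ((1-b)^(1-1/shape) - (1-a)^(1-1/shape))/(b-a) - 1
    Ibar - (qF(a)*(d-1)/d + qF(b)/d)
}
h_Par(0)     # -Inf at the left endpoint (qF(1) = Inf)
h_Par(1e-12) # large negative: h -> -Inf as c -> 0 for shape > 1
```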

## [1] NaN
## [1] Inf

A proper initial interval \([c_l,c_u]\) with \(0<c_l\le c_u<(1-\alpha)/d\) (containing the root) is thus required. We derived such an interval in Hofert et al. (2017, “Improved Algorithms for Computing Worst Value-at-Risk”).

1.3 Compute best/worst \(\mathrm{VaR}_\alpha\) (via “Wang.Par”)

After dealing with various numerical issues, we can now look at some example calculations of best/worst value-at-risk. We first plot value-at-risk as a function of \(\alpha\).
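For a single \((\alpha, d, \theta)\), such a computation can be written in a compact, self-contained form. The following sketch (for \(\theta > 1\) only) uses the transformed auxiliary function on \((1,\infty)\), its initial interval, and the back-transformation to the \(c\)-scale, all mirroring VaR_hom_Par() defined further below; the package's VaR_bounds_hom(, method = "Wang.Par") remains the recommended interface.

```r
## Sketch: worst VaR_alpha for Par(shape) margins, shape > 1, via the
## transformed auxiliary function (see VaR_hom_Par() below)
worst_VaR_Par <- function(alpha, d, shape) {
    stopifnot(shape > 1)
    f <- function(x) (d/(1-shape)-1)*x^(1-1/shape) - (d-1)*x^(-1/shape) +
                     x - (d*shape/(1-shape)+1)
    low <- (d-1)*(1+shape)/(d-1+shape) # initial interval (see init_interval() below)
    up  <- 2*(1+d/(shape-1))^shape     # adjusted upper endpoint
    x <- uniroot(f, interval = c(low, up), tol = .Machine$double.eps)$root
    c <- (1-alpha)/(x+d-1)             # back-transform to the c-scale
    qF <- function(p) (1-p)^(-1/shape) - 1 # Par(shape) quantile function
    (d-1)*qF(alpha+(d-1)*c) + qF(1-c)
}
worst_VaR_Par(0.99, d = 8, shape = 2) # ~141.67
```

For \(\alpha = 0.99\), \(d = 8\), \(\theta = 2\), the root happens to be \(x = 49\) exactly, giving \(c = 0.01/56\) and worst VaR \(\approx 141.67\).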

Now consider best/worst value-at-risk as a function of \(d\).

We can also consider best/worst value-at-risk as a function of \(\theta\). Note that, as before, best value-at-risk is the same for all \(d\) here (depicted as a green dashed line below).
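The best-VaR formula can likewise be evaluated in a compact self-contained form (a sketch mirroring the "best" branch of VaR_hom_Par() defined below, for \(\theta \neq 1\)); note that the first term qF(alpha) does not depend on \(d\), which explains the coinciding curves whenever it attains the maximum.

```r
## Sketch: best VaR_alpha for Par(shape) margins, shape != 1 (mirrors the
## "best" branch of VaR_hom_Par() below)
best_VaR_Par <- function(alpha, d, shape) {
    qF <- function(p) (1-p)^(-1/shape) - 1 # Par(shape) quantile function
    Ibar0a <- (shape/(1-shape)) * ((1-alpha)^(1-1/shape) - 1)/alpha - 1 # \bar{I}(0, alpha)
    max(qF(alpha), d * Ibar0a)
}
best_VaR_Par(0.99, d = 8, shape = 2) # = qF(0.99) = 9 here (first term attains the max)
```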

1.4 Comparison between various methods for computing worst value-at-risk

Now consider a graphical comparison between the various methods. To stress the numerical challenges, let us include the following implementation of Wang’s approach (with various switches to highlight the individual numerical challenges) for the Pareto case.

## Initial interval for the root finding in case of worst VaR
init_interval <- function(alpha, d, shape, trafo = FALSE, adjusted = FALSE)
{
    if(trafo) {
        low <- if(shape == 1) {
            d/2
        } else {
            (d-1)*(1+shape)/(d-1+shape)
        }
        up <- if(shape > 1) {
            r <- (1+d/(shape-1))^shape
            if(adjusted) 2*r else r
        } else if(shape == 1) {
            e <- exp(1)
            (d+1)^(e/(e-1))
        } else {
            d*shape/(1-shape)+1
        }
        c(low, up)
    } else {
        low <- if(shape > 1) {
            r <- (1-alpha)/((d/(shape-1)+1)^shape + d-1)
            if(adjusted) r/2 else r
        } else if(shape == 1) {
            e <- exp(1)
            (1-alpha)/((d+1)^(e/(e-1))+d-1)
        } else {
            r <- (1-shape)*(1-alpha)/d
            if(adjusted) r/2 else r
        }
        up <- if(shape == 1) (1-alpha)/(3*d/2-1)
              else (1-alpha)*(d-1+shape)/((d-1)*(2*shape+d))
        c(low, up)
    }
}

## Function to compute the best/worst value-at-risk in the homogeneous case with
## Par(theta) margins
VaR_hom_Par <- function(alpha, d, shape, method = c("worst", "best"),
                        trafo = FALSE, interval = NULL, adjusted = FALSE,
                        avoid.cancellation = FALSE, ...)
{
    ## Pareto quantile function
    qF <- function(p) (1 - p)^(-1/shape) - 1

    ## Compute \bar{I}
    Ibar <- function(a, b, alpha, d, shape)
    {
        if(shape == 1) log((1-a)/(1-b))/(b-a) - 1
        else (shape/(1-shape))*((1-b)^(1-1/shape)-(1-a)^(1-1/shape))/(b-a) - 1
    }

    ## Main
    method <- match.arg(method)
    switch(method,
    "worst" = {

        ## Distinguish according to whether we optimize the auxiliary function
        ## on a transformed scale
        h <- if(trafo) {
            ## Auxiliary function to find the root of on (1, Inf)
            if(shape == 1) {
                function(x) x^2 + x*(-d*log(x)+d-2)-(d-1)
            } else {
                function(x)
                (d/(1-shape)-1)*x^(-1/shape + 1) - (d-1)*x^(-1/shape) + x - (d*shape/(1-shape) + 1)
            }
        } else {
            ## Auxiliary function to find the root of on (0, (1-alpha)/d)
            function(c) {
                a <- alpha+(d-1)*c
                b <- 1-c
                Ib <- if(c == (1-alpha)/d) { # Properly deal with limit c = (1-alpha)/d
                    ((1-alpha)/d)^(-1/shape) - 1
                } else {
                    Ibar(a = a, b = b, alpha = alpha, d = d, shape = shape)
                }
                Ib - (qF(a)*(d-1)/d + qF(b)/d)
            }
        }

        ## Do the optimization
        if(is.null(interval)) interval <- init_interval(alpha, d, shape,
                                                        trafo = trafo, adjusted = adjusted)
        c <- uniroot(h, interval = interval, ...)$root
        if(trafo) # convert value back to the right scale (c-scale)
            c <- (1-alpha)/(c+d-1)
        if(avoid.cancellation) {
            t1 <- (1-alpha)/c-(d-1)
            d * ((c^(-1/shape)/d) * ((d-1)*t1^(-1/shape) + 1) - 1) # = qF(a)*(d-1) + qF(b)
        } else {
            a <- alpha+(d-1)*c
            b <- 1-c
            qF(a)*(d-1) + qF(b)
        }
    },
    "best" = {
        max((d-1)*0 + (1-alpha)^(-1/shape)-1, # Note: Typo in Wang, Peng, Yang (2013)
            d*Ibar(a = 0, b = alpha, alpha = alpha, d = d, shape = shape))
    },
    stop("Wrong 'method'"))
}

For the comparison, consider the following setup.

Now compute the values and plot them.

res <- matrix(, nrow = n.th, ncol = 7)
colnames(res) <- c("Wang", "straightforward", "transformed", "Wang.Par",
                   "dual", "RA.low", "RA.up")
pb <- txtProgressBar(max = n.th, style = if(isatty(stdout())) 3 else 1) # setup progress bar
for(i in seq_len(n.th)) {
    ## "Wang" (numerical integration with smaller uniroot() tolerance; still
    ## numerically critical -- we catch "the integral is probably divergent"-errors here)
    Wang.num.res <- tryCatch(VaR_bounds_hom(alpha, d = d, qF = qFs[[i]])[2], error = function(e) e)
    res[i,"Wang"] <- if(is(Wang.num.res, "simpleError")) NA else Wang.num.res
    ## Our straightforward implementation
    res[i,"straightforward"] <- VaR_hom_Par(alpha, d = d, shape = th[i])
    ## Our straightforward implementation based on the transformed auxiliary function
    res[i,"transformed"] <- VaR_hom_Par(alpha, d = d, shape = th[i], trafo = TRUE)
    ## "Wang.Par" (using a smaller uniroot() tolerance and adjusted initial interval)
    res[i,"Wang.Par"] <- VaR_bounds_hom(alpha, d = d, method = "Wang.Par", shape = th[i])[2]
    ## "dual" (with uniroot()'s default tolerance)
    res[i,"dual"] <- VaR_bounds_hom(alpha, d = d, method = "dual",
                                    interval = crude_VaR_bounds(alpha, qF = qFs[[i]], d = d),
                                    pF = pFs[[i]])[2]
    ## Rearrangement Algorithm
    set.seed(271) # use the same random permutation for each theta
    RA. <- RA(alpha, qF = rep(qFs[i], d), N = N)
    res[i,"RA.low"] <- RA.$bounds[1]
    res[i,"RA.up"]  <- RA.$bounds[2]
    ## Progress
    setTxtProgressBar(pb, i) # update progress bar
}
close(pb) # close progress bar

So what goes wrong with our straightforward implementation? Let’s start with the obvious. As we can infer from the x-axis scale in the plots of \(h\) above, uniroot()’s default tolerance tol = .Machine$double.eps^0.25 (\(\approx 0.0001220703\)) is too large to determine the root accurately. In fact, if we choose a smaller tol, we have no problem.
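The tolerance issue can be isolated with a toy root-finding problem (a purely illustrative function, not related to \(h\)):

```r
## Toy illustration: with uniroot()'s default tol (~1.22e-4), the returned
## root is only guaranteed up to that tolerance -- far too coarse if the root
## lives on a much smaller scale than the initial interval
f <- function(x) x^3 - 1e-18 # root at 1e-6
uniroot(f, interval = c(0, 1))$root # only guaranteed up to ~1.22e-4
uniroot(f, interval = c(0, 1), tol = .Machine$double.eps)$root # accurate
```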

So is this the solution to all the problems? No. Consider the setup as before, where \(d\) is running (we catch the appearing errors here).

By using warnings() we see that we cannot even determine the root in all cases as the function values are numerically not of opposite sign at the endpoints of the theoretically correct initial interval containing the root. We can solve this (non-elegantly) by simply adjusting the lower bound.

So we see that we need a smaller tolerance and to extend the initial interval in order to get reasonable results.

What about using the transformed auxiliary function directly? Is this a numerically more stable approach?

## [1] Inf

As we can see, there is a problem as well. To understand it, consider the following minimized version of VaR_hom_Par() (for computing worst value-at-risk for \(\theta\neq 1\) only).

## [1] 1.869497e-31
## [1] Inf

As we can see, \(c>0\) is so small that \(b = 1-c\) is numerically equal to 1 and thus the Pareto quantile function evaluates to \(\infty\). This cancellation can be avoided by simplifying the terms (note that qPar(1-c, shape = th) = c^(-1/th)-1).
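The cancellation is easy to reproduce in isolation:

```r
## Reproducing the cancellation: for c below machine epsilon, 1 - c rounds to 1
c <- 1e-17
1 - c == 1 # TRUE (catastrophic cancellation)
qF <- function(p, shape = 2) (1-p)^(-1/shape) - 1 # Par(shape) quantile function
qF(1 - c)    # Inf, as 1 - (1 - c) == 0
c^(-1/2) - 1 # stable: the algebraically equivalent evaluation of qPar(1-c)
```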

## [1] 162.5923

We can apply this to the parameter combinations considered before. Note that, due to the partly extreme values of \(d\) and \(\theta\), we also need to adjust the initial interval here in order for a root to be found.

We can now compare the values (based on the non-transformed/transformed auxiliary function) in a graph.

We see that the results differ slightly if both \(d\) and \(\theta\) are large. It remains numerically challenging to appropriately compute worst value-at-risk in this case. However, both of the above approaches seem to work for dimensions even as large as \(10^5\).

2 Inhomogeneous case

2.1 A motivation for (column) rearrangements

The basic idea goes back to Iman and Conover (1982, “A distribution-free approach to inducing rank correlation among input variables”). Suppose we have 500 observations from each of two Pareto distributions. This is the joint sample under independence (as we independently draw the uniforms when constructing X):

If we order the realizations in each column to be in increasing order (so the smallest (largest) entry of the first column lies next to the smallest (largest) entry of the second column of X), we obtain the following (comonotone) sample.

The takeaway is that column rearrangements do not change the marginal distributions, only the dependence between the two columns and thus the distribution of their row sums. To see the latter (in terms of boxplots and kernel density estimates), consider:
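The effect on the row sums can also be checked numerically: for fixed marginal values, the comonotone (columnwise sorted) pairing maximizes the sample variance of the row sums by the rearrangement inequality. A base-R sketch:

```r
## Base-R sketch: columnwise sorting yields a comonotone sample with unchanged
## margins but a larger sample variance of the row sums
set.seed(271)
qPar2 <- function(p) (1-p)^(-1/2) - 1            # Par(2) quantile function
X <- cbind(qPar2(runif(500)), qPar2(runif(500))) # independent sample
X.sorted <- apply(X, 2, sort)                    # comonotone rearrangement
all(sort(X[,1]) == X.sorted[,1])                 # TRUE: margins unchanged
var(rowSums(X)) <= var(rowSums(X.sorted))        # TRUE: sum variance increases
```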

2.2 Run-time comparison (straightforward vs efficient implementation)

A straightforward (but inefficient) implementation of a basic rearrange(, sample = FALSE, is.sorted = TRUE) is the following. Note that our implementation in qrmtools allows for much greater functionality and is faster.
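A minimal sketch of such a basic rearrangement (worst case) is the following; this is a reconstruction for illustration (the helper name basic_rearrange is ours), not the package code:

```r
## Minimal (inefficient) column rearrangement: oppositely order each column
## with respect to the row sums over all other columns until the minimal
## row sum no longer improves
basic_rearrange <- function(X, max.ra = 100)
{
    d <- ncol(X)
    m.old <- -Inf
    for(iter in seq_len(max.ra)) {
        for(j in seq_len(d)) {
            rs <- rowSums(X[, -j, drop = FALSE]) # row sums over all other columns
            ## Oppositely order column j with respect to rs
            X[, j] <- sort(X[, j], decreasing = TRUE)[rank(rs, ties.method = "first")]
        }
        m.new <- min(rowSums(X))
        if(m.new == m.old) break # minimal row sum no longer improves
        m.old <- m.new
    }
    list(bound = m.new, X.rearranged = X)
}
basic_rearrange(cbind(1:4, c(1,3,5,7), c(1,2,4,8)))$bound # 10
```

On the \(4\times 3\) example matrix traced in Section 2.3 below, this sketch also reaches the minimal row sum 10.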

We now compare this to the actual implementation (rearrange(, sample = FALSE, is.sorted = TRUE)). To this end, we consider the following setup.

Here’s a plot which shows the improvement in run time (in %). For larger dimensions \(d\), the improvement is very close to 100%.

2.3 How rearrange() acts on specific matrices

To see how rearrange() actually proceeds, consider the following example. Due to trace = TRUE, the matrix is printed after each column rearrangement. A “|” indicates that the respective column changed and a “=” that it remained unchanged; the last two printed columns provide the row sums over all columns but the current one, as well as the updated row sums (over all columns) after the rearrangement. We see that for tol = NULL the algorithm stops the first time \(d\) (here: 3) consecutive rearrangements leave the matrix unchanged.

##           
## [1,] 1 1 1
## [2,] 2 3 2
## [3,] 3 5 4
## [4,] 4 7 8
##      |     -col sum
## [1,] 4 1 1    2   6
## [2,] 3 3 2    5   8
## [3,] 2 5 4    9  11
## [4,] 1 7 8   15  16
##        |   -col sum
## [1,] 4 5 1    5  10
## [2,] 3 7 2    5  12
## [3,] 2 3 4    6   9
## [4,] 1 1 8    9  10
##          | -col sum
## [1,] 4 5 2    9  11
## [2,] 3 7 1   10  11
## [3,] 2 3 4    5   9
## [4,] 1 1 8    2  10
##      |     -col sum
## [1,] 3 5 2    7  10
## [2,] 2 7 1    8  10
## [3,] 4 3 4    7  11
## [4,] 1 1 8    9  10
##        =   -col sum
## [1,] 3 5 2    5  10
## [2,] 2 7 1    3  10
## [3,] 4 3 4    8  11
## [4,] 1 1 8    9  10
##          = -col sum
## [1,] 3 5 2    8  10
## [2,] 2 7 1    9  10
## [3,] 4 3 4    7  11
## [4,] 1 1 8    2  10
##      =     -col sum
## [1,] 3 5 2    7  10
## [2,] 2 7 1    8  10
## [3,] 4 3 4    7  11
## [4,] 1 1 8    9  10
## $bound
## [1] 10
## 
## $tol
## [1] 0
## 
## $converged
## [1] TRUE
## 
## $opt.row.sums
## [1]  6  9  9 10 10 10 10
## 
## $X.rearranged
##      [,1] [,2] [,3]
## [1,]    3    5    2
## [2,]    2    7    1
## [3,]    4    3    4
## [4,]    1    1    8

This is the highest possible minimal row sum and thus the optimally rearranged matrix. Let’s consider another example.

##           
## [1,] 1 1 1
## [2,] 2 2 2
## [3,] 3 3 3
##      |     -col sum
## [1,] 3 1 1    2   5
## [2,] 2 2 2    4   6
## [3,] 1 3 3    6   7
##        =   -col sum
## [1,] 3 1 1    4   5
## [2,] 2 2 2    4   6
## [3,] 1 3 3    4   7
##          = -col sum
## [1,] 3 1 1    4   5
## [2,] 2 2 2    4   6
## [3,] 1 3 3    4   7
##      =     -col sum
## [1,] 3 1 1    2   5
## [2,] 2 2 2    4   6
## [3,] 1 3 3    6   7
## $bound
## [1] 5
## 
## $tol
## [1] 0
## 
## $converged
## [1] TRUE
## 
## $opt.row.sums
## [1] 5 5 5 5
## 
## $X.rearranged
##      [,1] [,2] [,3]
## [1,]    3    1    1
## [2,]    2    2    2
## [3,]    1    3    3

Here we do not reach an optimal rearrangement (6 would be a better minimal row sum, attainable via the rearrangement matrix(c(1,2,3, 2,3,1, 3,1,2), ncol = 3)). It remains an open problem how an efficient algorithm can avoid such a case.
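That 6 is attainable is easy to verify:

```r
## Verify the claimed optimal rearrangement: all row sums equal 6
B.opt <- matrix(c(1,2,3, 2,3,1, 3,1,2), ncol = 3)
rowSums(B.opt) # 6 6 6
```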

2.4 Convergence

Whether rearrange() converges or not depends to a large degree on the underlying sorting algorithm. We found that unstable sorting algorithms such as C’s qsort() lead to rearrange() not terminating on certain input matrices.

One such example is the above matrix B, for which the 2nd and 3rd columns are constantly swapped after the first column has been rearranged, since the row sums over all other columns are (4,4,4) and termination then depends on how ranks are assigned to these three equal numbers.
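The role of tie-breaking can be seen directly on these row sums:

```r
## How ranks are assigned to the tied row sums (4,4,4) decides whether the
## corresponding column keeps being reshuffled or not
rank(c(4,4,4), ties.method = "first")  # 1 2 3 (deterministic; rearrangement terminates)
rank(c(4,4,4), ties.method = "random") # a random permutation of 1 2 3
```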

The current implementation is still not based on a stable sorting algorithm (as this would largely compromise run time) but a) is comparably fast and b) terminates on all input matrices. To see the latter, consider the following example in which we investigate the possible output matrices of rearrange() on all input matrices (obviously only for \(d = 3\) due to run time; larger \(N\) can be chosen but note that the effort is \((N!)^{d-1}\) here!).

## [1] 12.85417