Transformations and link functions in emmeans

emmeans package, Version 1.4.2

Contents

This vignette covers the intricacies of transformations and link functions in emmeans.

  1. Overview
  2. Re-gridding
  3. Link functions
  4. Both a response transformation and a link
  5. Special transformations
  6. Specifying a transformation after the fact
  7. Auto-detected response transformations
  8. Faking a log transformation
  9. Bias adjustment

Index of all vignette topics

Overview

Consider the same example with the pigs dataset that is used in many of these vignettes:

pigs.lm <- lm(log(conc) ~ source + factor(percent), data = pigs)

This model has two factors, source and percent (coerced to a factor), as predictors; and log-transformed conc as the response. Here we obtain the EMMs for source, examine its structure, and finally produce a summary, including a test against a null value of log(35):

pigs.emm.s <- emmeans(pigs.lm, "source")
str(pigs.emm.s)
## 'emmGrid' object with variables:
##     source = fish, soy, skim
## Transformation: "log"
summary(pigs.emm.s, infer = TRUE, null = log(35))
##  source emmean     SE df lower.CL upper.CL null t.ratio p.value
##  fish     3.39 0.0367 23     3.32     3.47 3.56 -4.385  0.0002 
##  soy      3.67 0.0374 23     3.59     3.74 3.56  2.988  0.0066 
##  skim     3.80 0.0394 23     3.72     3.88 3.56  6.130  <.0001 
## 
## Results are averaged over the levels of: percent 
## Results are given on the log (not the response) scale. 
## Confidence level used: 0.95

Now suppose that we want the EMMs expressed on the same scale as conc. This can be done by adding type = "response" to the summary() call:

summary(pigs.emm.s, infer = TRUE, null = log(35), type = "response")
##  source response   SE df lower.CL upper.CL null t.ratio p.value
##  fish       29.8 1.09 23     27.6     32.1   35 -4.385  0.0002 
##  soy        39.1 1.47 23     36.2     42.3   35  2.988  0.0066 
##  skim       44.6 1.75 23     41.1     48.3   35  6.130  <.0001 
## 
## Results are averaged over the levels of: percent 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale 
## Tests are performed on the log scale

Timing is everything

Dealing with transformations in emmeans is somewhat complex, due to the large number of possibilities. But the key is understanding what happens, when. These results come from a sequence of steps. Here is what happens (and doesn’t happen) at each step:

  1. The reference grid is constructed for the log(conc) model. The fact that a log transformation is used is recorded, but nothing else is done with that information.
  2. The predictions on the reference grid are averaged over the four percent levels, for each source, to obtain the EMMs for source – still on the log(conc) scale.
  3. The standard errors and confidence intervals for these EMMs are computed – still on the log(conc) scale.
  4. Only now do we do back-transformation…
    1. The EMMs are back-transformed to the conc scale.
    2. The endpoints of the confidence intervals are back-transformed.
    3. The t tests and P values are left as-is.
    4. The standard errors are converted to the conc scale using the delta method. These SEs were not used in constructing the tests and confidence intervals.
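
To make this sequence concrete, here is the same analysis written out step by step (a minimal sketch; each call corresponds to the steps above and reproduces the earlier type = "response" summary):

pigs.rg <- ref_grid(pigs.lm)               # step 1: reference grid; the log transformation is recorded
pigs.emm.s <- emmeans(pigs.rg, "source")   # steps 2-3: averaging, SEs, and CIs on the log scale
summary(pigs.emm.s, type = "response")     # step 4: back-transformation happens only now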

The model is our best guide

This choice of timing is based on the idea that the model is right. In particular, the fact that the response is transformed suggests that the transformed scale is the best scale to be working with. In addition, the model specifies that the effects of source and percent are linear on the transformed scale; inasmuch as marginal averaging to obtain EMMs is a linear operation, that averaging is best done on the transformed scale. For those two good reasons, back-transforming to the response scale is delayed until the very end by default.

Back to Contents

Re-gridding

As well-advised as it is, some users may not want the default timing of things. The tool for changing when back-transformation is performed is the regrid() function – which, with default settings of its arguments, back-transforms an emmGrid object and adjusts everything in it appropriately. For example:

str(regrid(pigs.emm.s))
## 'emmGrid' object with variables:
##     source = fish, soy, skim
summary(regrid(pigs.emm.s), infer = TRUE, null = 35)
##  source response   SE df lower.CL upper.CL null t.ratio p.value
##  fish       29.8 1.09 23     27.5     32.1   35 -4.758  0.0001 
##  soy        39.1 1.47 23     36.1     42.2   35  2.827  0.0096 
##  skim       44.6 1.75 23     40.9     48.2   35  5.446  <.0001 
## 
## Results are averaged over the levels of: percent 
## Confidence level used: 0.95

Notice that the structure no longer includes the transformation. That’s because it is no longer relevant; the reference grid is on the conc scale, and how we got there is now forgotten. Compare this summary() result with the preceding one, and note the following:

  1. The estimates and standard errors are identical to the back-transformed results above.
  2. The confidence intervals and tests differ, because they are now computed directly on the conc scale, using those standard errors.

Understood, right? But think carefully about how these EMMs were obtained. They are back-transformed from pigs.emm.s, in which the marginal averaging was done on the log scale. If we want to back-transform before doing the averaging, we need to call regrid() after the reference grid is constructed but before the averaging takes place:

pigs.rg <- ref_grid(pigs.lm)
pigs.remm.s <- emmeans(regrid(pigs.rg), "source")
summary(pigs.remm.s, infer = TRUE, null = 35)
##  source response   SE df lower.CL upper.CL null t.ratio p.value
##  fish       30.0 1.10 23     27.7     32.2   35 -4.585  0.0001 
##  soy        39.4 1.49 23     36.3     42.5   35  2.927  0.0076 
##  skim       44.8 1.79 23     41.1     48.5   35  5.486  <.0001 
## 
## Results are averaged over the levels of: percent 
## Confidence level used: 0.95

These results all differ from either of the previous two summaries – again, because the averaging is done on the conc scale rather than the log(conc) scale.

Note: For those who want to routinely back-transform before averaging, the transform argument in ref_grid() simplifies this. The first two steps above could have been done more easily as follows:

pigs.remm.s <- emmeans(pigs.lm, "source", transform = "response")

But don’t get transform and type confused. The transform argument is passed to regrid() after the reference grid is constructed, whereas the type argument is simply remembered and used by summary(). So a similar-looking call:

emmeans(pigs.lm, "source", type = "response")

will compute the results we have seen for pigs.emm.s – back-transformed after averaging on the log scale.
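
To put the two arguments side by side, here is a sketch contrasting the calls shown above:

emmeans(pigs.lm, "source", transform = "response")  # regrid first, then average on the conc scale
emmeans(pigs.lm, "source", type = "response")       # average on the log scale; back-transform in summary()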

Remember again: When it comes to transformations, timing is everything.

Back to Contents

Special transformations

The make.tran() function provides several special transformations and sets things up so they can be handled in emmeans with relative ease. (See help("make.tran", "emmeans") for descriptions of what is available.) make.tran() works much like stats::make.link() in that it returns a list of functions linkfun(), linkinv(), etc. that serve in managing results on a transformed scale. The difference is that most transformations with make.tran() require additional arguments.

To use this capability in emmeans(), the recommended approach is to first obtain the make.tran() result, and then use it as the enclosing environment for fitting the model, with linkfun as the transformation. For example, suppose the response variable is a percentage and we want to use the response transformation \(\sin^{-1}\sqrt{y/100}\). Then proceed like this:

tran <- make.tran("asin.sqrt", 100)
my.model <- with(tran, 
    lmer(linkfun(percent) ~ treatment + (1|Block), data = mydata))

Subsequent calls to ref_grid(), emmeans(), regrid(), etc. will then be able to access the transformation information correctly.
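
For instance, a follow-up call might look like this (my.model and treatment are the hypothetical objects from the sketch above):

emmeans(my.model, "treatment", type = "response")   # EMMs back-transformed from the asin.sqrt scale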

The help page for make.tran() has an example like this using a Box-Cox transformation.
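
A sketch in that spirit, using the built-in warpbreaks data and an illustrative Box-Cox parameter of 0.368:

bctran <- make.tran("boxcox", 0.368)
warp.bc <- with(bctran,
    lm(linkfun(breaks) ~ wool * tension, data = warpbreaks))
emmeans(warp.bc, ~ tension | wool, type = "response")   # back-transformed from the Box-Cox scale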

Back to Contents

Specifying a transformation after the fact

It is not at all uncommon to fit a model using statements like the following:

mydata <- transform(mydata, logy.5 = log(yield + 0.5))
my.model <- lmer(logy.5 ~ treatment + (1|Block), data = mydata)

In this case, there is no way for ref_grid() to figure out that a response transformation was used. What can be done is to update the reference grid with the required information:

my.rg <- update(ref_grid(my.model), tran = make.tran("genlog", .5))

Subsequently, use my.rg in place of my.model in any emmeans() analyses, and the transformation information will be there.
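
For example (a sketch; treatment is the hypothetical factor in my.model above):

emmeans(my.rg, "treatment", type = "response")   # back-transformed from the genlog scale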

For standard transformations (those in stats::make.link()), just give the name of the transformation; e.g.,

model.rg <- update(ref_grid(model), tran = "sqrt")

Auto-detected response transformations

As can be seen in the initial pigs.lm example in this vignette, certain straightforward response transformations such as log, sqrt, etc. are automatically detected when emmeans() (really, ref_grid()) is called on the model object. In fact, scaling and shifting are supported too; so the preceding example with my.model could have been done more easily by specifying the transformation directly in the model formula:

my.better.model <- lmer(log(yield + 0.5) ~ treatment + (1|Block), data = mydata)

The transformation would be auto-detected, saving you the trouble of adding it later. Similarly, a response transformation of 2 * sqrt(y + 1) would be correctly auto-detected. A model with a linearly transformed response, e.g. 4*(y - 1), would not be auto-detected, but 4*I(y + -1) would be interpreted as 4*identity(y + -1). Parsing is such that the response expression must be of the form mult * fcn(resp + const); operators of - and / are not recognized.
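
As an illustration of that parsing rule, consider these sketches (y, x, and dat are hypothetical):

lm(2 * sqrt(y + 1) ~ x, data = dat)   # auto-detected: mult * fcn(resp + const)
lm(4 * (y - 1) ~ x, data = dat)       # not detected: the '-' operator is not recognized
lm(4 * I(y + -1) ~ x, data = dat)     # detected, interpreted as 4 * identity(y + -1)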

Back to Contents

Faking a log transformation

The regrid() function makes it possible to fake a log transformation of the response. Why would you want to do this? So that you can make comparisons using ratios instead of differences.

Consider the pigs example once again, but suppose we had fitted a model with a square-root transformation instead of a log:

pigroot.lm <- lm(sqrt(conc) ~ source + factor(percent), data = pigs)
piglog.emm.s <- regrid(emmeans(pigroot.lm, "source"), transform = "log")
confint(piglog.emm.s, type = "response")
##  source response   SE df lower.CL upper.CL
##  fish       29.8 1.32 23     27.2     32.7
##  soy        39.2 1.54 23     36.2     42.6
##  skim       45.0 1.74 23     41.5     48.7
## 
## Results are averaged over the levels of: percent 
## Confidence level used: 0.95 
## Intervals are back-transformed from the log scale
pairs(piglog.emm.s, type = "response")
##  contrast    ratio     SE df t.ratio p.value
##  fish / soy  0.760 0.0454 23 -4.591  0.0004 
##  fish / skim 0.663 0.0391 23 -6.965  <.0001 
##  soy / skim  0.872 0.0469 23 -2.548  0.0457 
## 
## Results are averaged over the levels of: percent 
## P value adjustment: tukey method for comparing a family of 3 estimates 
## Tests are performed on the log scale

These results are not identical to, but are very similar to, the back-transformed confidence intervals for the EMMs above and the pairwise ratios in the “comparisons” vignette, where the fitted model actually used a log response.

Back to Contents

Bias adjustment

So far, we have discussed ideas related to back-transforming results as a simple way of expressing results on the same scale as the response. In particular, means obtained in this way are known as generalized means; for example, a log transformation of the response is associated with geometric means. When the goal is simply to make inferences about which means are less than which other means, and a response transformation is used, it is often acceptable to present estimates and comparisons of these generalized means. However, sometimes it is important to report results that actually do reflect expected values of the untransformed response. An example is a financial study, where the response is in some monetary unit. It may be convenient to use a response transformation for modeling purposes, but ultimately we may want to make financial projections in those same units.

In such settings, we need to make a bias adjustment when we back-transform, because any nonlinear transformation biases the expected values of statistical quantities. More specifically, suppose that we have a response \(Y\) and the transformed response is \(U\). To back-transform, we use \(Y = h(U)\); and using a Taylor approximation, \(Y \approx h(\eta) + h'(\eta)(U-\eta) + \frac12h''(\eta)(U-\eta)^2\), so that \(E(Y) \approx h(\eta) + \frac12h''(\eta)Var(U)\). Thus the needed bias adjustment is approximately \(\frac12h''(\eta)\sigma^2\), where \(\sigma\) is the error SD in the model for \(U\): the larger \(\sigma\) is, the greater the needed adjustment. This second-order adjustment is what the emmeans package currently uses when a bias adjustment is requested. Better or exact adjustments exist for certain cases, and future updates may incorporate some of them.
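
For a concrete case, a log response has \(h(u) = e^u\), so \(h''(\eta) = e^\eta\) and \(E(Y) \approx e^\eta(1 + \sigma^2/2)\). Here is a minimal sketch checking this against the package's own adjustment for the pigs example (bias.adjust and sigma are the summary() arguments used later in this section; the hand computation uses the rounded fish EMM of 3.39 from earlier):

sig <- sigma(pigs.lm)          # error SD on the log scale
exp(3.39) * (1 + sig^2 / 2)    # hand-computed bias-adjusted estimate for source = fish
summary(pigs.emm.s, type = "response", bias.adjust = TRUE, sigma = sig)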

CBPP example

Consider an example adapted from the help page for lme4::cbpp. Contagious bovine pleuropneumonia (CBPP) is a disease in African cattle, and the dataset contains data on incidence of CBPP in several herds of cattle over four time periods. We will fit a mixed model that accounts for herd variations as well as overdispersion (variations larger than expected with a simple binomial model):

require(lme4)
cbpp <- transform(cbpp, unit = 1:nrow(cbpp))
cbpp.glmer <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd) + (1 | unit),
                    family = binomial, data = cbpp)

emm <- emmeans(cbpp.glmer, "period")
summary(emm, type = "response")
##  period   prob     SE  df asymp.LCL asymp.UCL
##  1      0.1824 0.0442 Inf    0.1109    0.2852
##  2      0.0614 0.0230 Inf    0.0290    0.1252
##  3      0.0558 0.0220 Inf    0.0254    0.1182
##  4      0.0334 0.0172 Inf    0.0120    0.0894
## 
## Confidence level used: 0.95 
## Intervals are back-transformed from the logit scale

The above summary reflects the back-transformed estimates, with no bias adjustment. However, the model estimates two independent sources of random variation that probably should be taken into account:

lme4::VarCorr(cbpp.glmer)
##  Groups Name        Std.Dev.
##  unit   (Intercept) 0.89107 
##  herd   (Intercept) 0.18396

Notably, the overdispersion SD is considerably greater than the herd SD. Suppose we want to estimate the marginal probabilities of CBPP incidence, averaged over herds and overdispersion variations. For this purpose, we need the combined effect of these variations; so we compute the overall SD via the Pythagorean theorem:

total.SD <- sqrt(0.89107^2 + 0.18396^2)
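
The same quantity can be extracted programmatically instead of typed by hand; a sketch (as.data.frame() on a VarCorr object returns the variance components in its vcov column):

vc <- as.data.frame(lme4::VarCorr(cbpp.glmer))
total.SD <- sqrt(sum(vc$vcov))   # same combined SD as above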

Accordingly, here are the bias-adjusted estimates of the marginal probabilities:

summary(emm, type = "response", bias.adjust = TRUE, sigma = total.SD)
##  period   prob     SE  df asymp.LCL asymp.UCL
##  1      0.2216 0.0462 Inf    0.1426     0.321
##  2      0.0823 0.0292 Inf    0.0400     0.159
##  3      0.0751 0.0282 Inf    0.0351     0.151
##  4      0.0458 0.0230 Inf    0.0168     0.117
## 
## Confidence level used: 0.95 
## Intervals are back-transformed from the logit scale 
## Bias adjustment applied based on sigma = 0.90986

These estimates are somewhat larger than the unadjusted estimates (actually, any estimates greater than 0.5 would have been adjusted downward). These adjusted estimates are more appropriate for describing the marginal incidence of CBPP for all herds. In fact, these estimates are fairly close to those obtained directly from the incidences in the data:

cases <- with(cbpp, tapply(incidence, period, sum))
trials <- with(cbpp, tapply(size, period, sum))
cases / trials
##          1          2          3          4 
## 0.21942446 0.08018868 0.07106599 0.04516129

Back to Contents

Index of all vignette topics