The motivation for this package is to provide functions that help with the development and tuning of machine learning models on biomedical data, where the sample size is frequently limited but the number of predictors may be significantly larger (P >> n). While most machine learning pipelines involve splitting data into training and testing cohorts, typically 2/3 and 1/3 respectively, medical datasets may be too small for this, so estimates of accuracy on the left-out test set suffer because the test set is small. Nested cross-validation (CV) provides a way around this by maximising use of the whole dataset for testing overall accuracy, while maintaining the split between training and testing.
In addition, typical biomedical datasets often have tens of thousands of possible predictors, so filtering of predictors is commonly needed. However, it has been demonstrated that filtering on the whole dataset creates a bias when determining the accuracy of models (Vabalas et al., 2019). Feature selection should be considered an integral part of a model, with feature selection performed only on training data. The selected features and accompanying model can then be tested on hold-out test data without bias. Thus, it is recommended that any filtering of predictors be performed within the CV loops, to prevent leakage of test data information.
This package enables nested cross-validation (CV) to be performed using the commonly used glmnet package, which fits elastic net regression models, and the caret package, which is a general framework for fitting a large number of machine learning models. In addition, nestedcv adds functionality to enable cross-validation of the elastic net alpha parameter when fitting glmnet models.
nestedcv partitions the dataset into outer and inner folds (default 10 x 10 folds). The inner fold CV (default 10-fold) is used to tune optimal hyperparameters for models. The model is then fitted to the whole of the inner fold data (i.e. the outer training folds) and tested on the left-out data from the outer fold. This is repeated across all outer folds (default 10 outer folds), and the unseen test predictions from the outer folds are compared against the true results for the outer test folds. The results are concatenated to give measures of accuracy (e.g. AUC and accuracy for classification, or RMSE for regression) across the whole dataset.
A final round of CV is performed on the whole dataset to determine the hyperparameters for fitting the final model to the whole data, which can then be used for prediction on external data.
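This procedure can be sketched as follows. The code below is purely illustrative (nestcv.glmnet performs these steps internally) and assumes a predictor matrix x and a binary factor response y:

## Illustrative sketch of nested CV; filtering and hyperparameter tuning
## use outer training data only, so outer test folds remain unseen
outer_folds <- caret::createFolds(y, k = 10)
outer_preds <- lapply(outer_folds, function(test) {
  filt <- nestedcv::ttest_filter(y[-test], x[-test, ], nfilter = 100)
  fit <- glmnet::cv.glmnet(x[-test, filt], y[-test], family = "binomial")  # inner CV
  predict(fit, newx = x[test, filt], s = "lambda.min", type = "response")
})
## predictions concatenated across outer folds measure accuracy over the
## whole dataset; a final fit on all the data gives the deployable model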
While some models such as glmnet allow for sparsity and have variable selection built in, many models fail to fit when given massive numbers of predictors, or perform poorly due to overfitting without variable selection. In addition, in medicine one of the goals of predictive modelling is commonly the development of diagnostic or biomarker tests, for which reducing the number of predictors is typically a practical necessity.
Several filter functions (t-test, Wilcoxon test, ANOVA, Pearson/Spearman correlation, random forest variable importance, and ReliefF from the CORElearn package) are provided for feature selection and can be embedded within the outer loop of the nested CV.
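Custom filters can also be written. As a minimal sketch, assuming the convention used by the built-in filters (a function taking y and x and returning the column indices of the predictors to keep), a simple variance-based filter to be passed via filterFUN might look like:

## Hypothetical custom filter: keep the nfilter highest-variance predictors
## (assumes filter functions return column indices of predictors to keep)
var_filter <- function(y, x, nfilter = 100, ...) {
  v <- apply(x, 2, var)
  order(v, decreasing = TRUE)[seq_len(nfilter)]
}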
install.packages("nestedcv")
library(nestedcv)
The following simulated example demonstrates the bias intrinsic to datasets where P >> n when filtering of predictors is applied to the whole dataset rather than only to the training folds.
## Example binary classification problem with P >> n
x <- matrix(rnorm(150 * 2e+04), 150, 2e+04) # predictors
y <- factor(rbinom(150, 1, 0.5)) # binary response
## Partition data into 2/3 training set, 1/3 test set
trainSet <- caret::createDataPartition(y, p = 0.66, list = FALSE)
## t-test filter using the whole dataset (incorrectly including the test set)
filt <- ttest_filter(y, x, nfilter = 100)
filx <- x[, filt]
## Train glmnet on training set only using filtered predictor matrix
library(glmnet)
## Loading required package: Matrix
## Loaded glmnet 4.1-7
fit <- cv.glmnet(filx[trainSet, ], y[trainSet], family = "binomial")
## Predict response on test set
predy <- predict(fit, newx = filx[-trainSet, ], s = "lambda.min", type = "class")
predy <- as.vector(predy)
predyp <- predict(fit, newx = filx[-trainSet, ], s = "lambda.min", type = "response")
predyp <- as.vector(predyp)
output <- data.frame(testy = y[-trainSet], predy = predy, predyp = predyp)
## Results on test set
## shows bias since univariate filtering was applied to whole dataset
predSummary(output)
##          Reference
## Predicted  0  1
##         0 20  5
##         1  2 23
##
##               AUC          Accuracy Balanced accuracy
##            0.9659            0.8600            0.8653
## Nested CV
fit2 <- nestcv.glmnet(y, x, family = "binomial", alphaSet = 7:10 / 10,
                      filterFUN = ttest_filter,
                      filter_options = list(nfilter = 100))
fit2
## Nested cross-validation with glmnet
## Filter: ttest_filter
##
## Final parameters:
##    lambda     alpha
## 0.0004073 0.7000000
##
## Final coefficients:
## (Intercept) V11340 V809 V3093 V11136 V1898
## 1.013367 1.105259 0.831687 0.831361 -0.824911 0.814591
## V14044 V19143 V330 V15259 V2525 V15804
## -0.753472 -0.748752 -0.737843 -0.735942 -0.721606 0.720582
## V4222 V15001 V9305 V2311 V10611 V18036
## 0.716133 -0.693554 -0.692168 0.662793 0.599696 0.598271
## V13605 V10852 V14990 V11679 V8441 V5157
## -0.578334 0.553756 0.530844 0.526169 -0.507045 -0.504812
## V9003 V12899 V13515 V16962 V13004 V6896
## 0.494032 -0.489592 -0.488853 0.482397 -0.476452 -0.464210
## V7378 V7225 V1237 V15288 V9279 V15391
## 0.459446 0.445115 0.434062 -0.430543 0.424482 -0.376888
## V19722 V13854 V8685 V11155 V9965 V11922
## 0.371375 0.370960 0.329974 0.325913 0.316078 -0.304247
## V2042 V5286 V968 V3165 V2306 V13036
## -0.293942 -0.283699 0.274052 0.268407 -0.265957 -0.264226
## V12900 V16957 V10452 V3190 V7482 V12977
## 0.257300 0.243624 -0.238305 -0.234853 0.231684 0.227525
## V4443 V19091 V6292 V7249 V9122 V3887
## 0.226391 -0.215471 0.211408 -0.205365 -0.203201 -0.203199
## V1882 V10897 V6416 V18377 V3286 V19556
## -0.198591 0.187572 0.183630 0.178799 -0.172935 -0.170026
## V11587 V16605 V17478 V7153 V18997 V3451
## 0.168206 0.152206 -0.141042 -0.138102 0.134828 -0.103274
## V4671 V16332 V975 V5763 V9203 V1175
## -0.083851 -0.083797 0.082360 0.075921 0.072737 -0.055618
## V13391 V17882 V2772 V10407 V13457 V8797
## 0.053358 -0.037993 -0.025226 -0.020989 -0.019728 -0.013841
## V8524
## 0.004817
##
## Result:
##          Reference
## Predicted  0  1
##         0 27 33
##         1 40 50
##
##               AUC          Accuracy Balanced accuracy
##            0.4560            0.5133            0.5027
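Since the final model is fitted to the whole dataset, it can be used to predict external data. As a minimal sketch, using newly simulated data to stand in for a hypothetical external cohort:

## Hypothetical external validation cohort (simulated here)
x_external <- matrix(rnorm(20 * 2e+04), 20, 2e+04)
predict(fit2, newdata = x_external, type = "response")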
testroc <- pROC::roc(output$testy, output$predyp, direction = "<", quiet = TRUE)
inroc <- innercv_roc(fit2)
plot(fit2$roc)
lines(inroc, col = 'blue')
lines(testroc, col = 'red')
legend('bottomright', legend = c("Nested CV", "Left-out inner CV folds",
                                 "Test partition, non-nested filtering"),
       col = c("black", "blue", "red"), lty = 1, lwd = 2, bty = "n")
In this example the dataset is pure noise. Filtering predictors on the whole dataset leaks information about the test set, leading to substantially over-optimistic performance on the test set as measured by ROC AUC. In contrast, nested CV with filtering applied only within the training folds correctly reports an AUC close to 0.5.
Figures A & B below show two commonly used but biased methods in which cross-validation is used to fit models, yet the result is a biased estimate of model performance. In scheme A, there is no hold-out test set at all, so there are two sources of bias/data leakage: first, filtering on the whole dataset, and second, the use of left-out CV folds for measuring performance. Left-out CV folds are known to give biased estimates of performance because the tuning parameters are 'learnt' by optimising the result on the left-out CV fold.
In scheme B, CV is used to tune parameters and a hold-out set is used to measure performance, but information leakage occurs when filtering is applied to the whole dataset. Unfortunately, this is commonly seen in studies that apply differential expression analysis to the whole dataset to select predictors, which are then passed to machine learning algorithms.