agghoo: Aggregated Hold-Out Cross Validation

The 'agghoo' procedure is an alternative to standard cross-validation. Instead of selecting the single model that performs best on average over V data splits, it determines a winning model on each split and then aggregates the V resulting predictors. For details, see "Aggregated hold-out" by Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle (2021) <arXiv:1909.04890>, published in Journal of Machine Learning Research 22(20):1–55.
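The idea can be sketched in a few lines of base R. The snippet below is a generic illustration of aggregated hold-out, not the package's actual API: for each of V random splits, every candidate model (here, polynomial degrees for a toy regression) is trained on the training part, the candidate with the smallest hold-out risk wins, and the final predictor averages the V winners' predictions.

```r
# Minimal sketch of aggregated hold-out (agghoo) for regression;
# candidate models are polynomial fits of varying degree (illustrative only).
set.seed(42)
n <- 200
x <- runif(n, -1, 1)
y <- sin(pi * x) + rnorm(n, sd = 0.2)
dat <- data.frame(x, y)

V <- 10          # number of hold-out splits
degrees <- 1:8   # candidate models
winners <- vector("list", V)

for (v in seq_len(V)) {
  train <- sample(n, floor(0.8 * n))   # random 80/20 split
  # Hold-out risk (MSE) of each candidate on this split
  risks <- sapply(degrees, function(d) {
    fit <- lm(y ~ poly(x, d), data = dat, subset = train)
    pred <- predict(fit, newdata = dat[-train, ])
    mean((dat$y[-train] - pred)^2)
  })
  best <- degrees[which.min(risks)]    # winner for this split
  winners[[v]] <- lm(y ~ poly(x, best), data = dat, subset = train)
}

# Agghoo predictor: average the V winning models' predictions
x_new <- data.frame(x = seq(-1, 1, length.out = 5))
pred_agghoo <- rowMeans(sapply(winners, predict, newdata = x_new))
```

For classification, the aggregation step would typically be a majority vote over the V winners rather than an average.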

Version: 0.1-0
Depends: R (≥ 3.5.0)
Imports: class, parallel, R6, rpart, FNN
Suggests: roxygen2, mlbench
Published: 2023-05-25
Author: Sylvain Arlot [ctb], Benjamin Auder [aut, cre, cph], Melina Gallopin [ctb], Matthieu Lerasle [ctb], Guillaume Maillard [ctb]
Maintainer: Benjamin Auder <benjamin.auder at>
License: MIT + file LICENSE
NeedsCompilation: no
Materials: README
CRAN checks: agghoo results


Reference manual: agghoo.pdf


Package source: agghoo_0.1-0.tar.gz
Windows binaries: r-devel:, r-release:, r-oldrel:
macOS binaries: r-release (arm64): agghoo_0.1-0.tgz, r-oldrel (arm64): agghoo_0.1-0.tgz, r-release (x86_64): agghoo_0.1-0.tgz, r-oldrel (x86_64): agghoo_0.1-0.tgz


Please use the canonical form to link to this page.