iml: Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <arXiv:1801.01489>, partial dependence plots described by Friedman (2001) <doi:10.1214/aos/1013203451>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <arXiv:1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, and tree surrogate models.
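
For orientation, a minimal usage sketch follows. It assumes the R6-style interface (Predictor, FeatureImp, Shapley) shown in the package's documented examples; function and argument names may differ slightly in version 0.2.1, so treat this as an illustrative sketch rather than a verbatim example for this release.

    # Sketch assuming the R6-style iml interface; names may differ in v0.2.1.
    library(iml)
    library(randomForest)

    # Fit any model; iml is model-agnostic (randomForest and MASS are in Suggests).
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap model and data in a Predictor object.
    X <- Boston[, setdiff(names(Boston), "medv")]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018).
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Shapley values for a single observation (Strumbelj et al. 2014).
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)

The same Predictor object can be reused for the other methods listed above (partial dependence, 'ice' plots, local models, tree surrogates).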

Version: 0.2.1
Imports: R6, checkmate, dplyr, tidyr, ggplot2, partykit, glmnet, Metrics, data.table
Suggests: randomForest, gower, testthat, rpart, MASS, caret, e1071, lime, mlr
Published: 2018-03-13
Author: Christoph Molnar [aut, cre]
Maintainer: Christoph Molnar <christoph.molnar at>
License: MIT + file LICENSE
NeedsCompilation: no
Materials: NEWS
CRAN checks: iml results


Reference manual: iml.pdf
Package source: iml_0.2.1.tar.gz
Windows binaries: r-devel:, r-release:, r-oldrel:
OS X El Capitan binaries: r-release: iml_0.2.1.tgz
OS X Mavericks binaries: r-oldrel: not available


Please use the canonical form https://CRAN.R-project.org/package=iml to link to this page.