contextual: Simulation and Analysis of Contextual Multi-Armed Bandit Policies

Facilitates the simulation and evaluation of context-free and contextual multi-armed bandit policies and algorithms, easing the implementation, comparison, and dissemination of both existing and new bandit algorithms and policies.
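A minimal simulation sketch of the package in use, assuming the `Bandit`/`Policy`/`Agent`/`Simulator` classes documented in the package README (class and argument names are taken from that documentation, not verified here):

```r
# A minimal sketch: compare an Epsilon-Greedy policy against a
# three-armed Bernoulli bandit, assuming the R6 classes documented
# in the contextual package README.
library(contextual)

# Bandit with known per-arm reward probabilities
bandit    <- ContextualBernoulliBandit$new(weights = c(0.9, 0.1, 0.1))
policy    <- EpsilonGreedyPolicy$new(epsilon = 0.1)
agent     <- Agent$new(policy, bandit)

# Run 100 independent simulations of 100 time steps each
simulator <- Simulator$new(agent, horizon = 100, simulations = 100)
history   <- simulator$run()

# Summarize and plot cumulative reward over time
summary(history)
plot(history, type = "cumulative")
```

The same `Agent`/`Simulator` pattern extends to contextual policies (e.g. LinUCB) and to offline evaluation on logged data, as illustrated in the vignettes below.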

Version: 0.9.8
Imports: R6 (≥ 2.3.0), data.table, R.devices, foreach, doParallel, itertools, iterators, Formula
Suggests: testthat, RCurl, splitstackshape, covr, knitr, here, rmarkdown, devtools, ggplot2, vdiffr
Published: 2019-02-10
Author: Robin van Emden [aut, cre], Maurits Kaptein [ctb]
Maintainer: Robin van Emden <robinvanemden at gmail.com>
BugReports: https://github.com/Nth-iteration-labs/contextual/issues
License: GPL-3
URL: https://github.com/Nth-iteration-labs/contextual
NeedsCompilation: no
Materials: README NEWS
CRAN checks: contextual results

Downloads:

Reference manual: contextual.pdf
Vignettes: Demo: Basic Synthetic cMAB Policies
Demo: Offline cMAB LinUCB evaluation
Demo: MAB Replication Eckles & Kaptein (Bootstrap Thompson Sampling)
Demo: Basic Epsilon Greedy
Getting started: running simulations
Demo: MAB Policies Comparison
Demo: MovieLens 10M Dataset
Demo: Offline cMAB: CarsKit DePaul Movie Dataset
Offline evaluation: Replication of Li et al. (2010)
Demo: Bandits, Propensity Weighting & Simpson's Paradox in R
Demo: Replication Sutton & Barto, Reinforcement Learning: An Introduction, Chapter 2
Demo: Replication of John Myles White, Bandit Algorithms for Website Optimization
Package source: contextual_0.9.8.tar.gz
Windows binaries: r-devel: contextual_0.9.8.zip, r-release: contextual_0.9.8.zip, r-oldrel: contextual_0.9.8.zip
OS X binaries: r-release: contextual_0.9.8.tgz, r-oldrel: contextual_0.9.8.tgz
Old sources: contextual archive

Linking:

Please use the canonical form https://CRAN.R-project.org/package=contextual to link to this page.