oolong: Create Validation Tests for Automated Content Analysis

Creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests, as described in Chang et al. (2009) <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>, as well as functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
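A minimal usage sketch of the workflow described above. The object names `abstracts_stm` (a fitted stm topic model) and `abstracts$text` (its source corpus) are hypothetical stand-ins; the intrusion tests open interactive Shiny gadgets for a human coder:

```r
library(oolong)

## Prepare: create an oolong test object from a fitted topic model
## and the corpus it was trained on.
oolong_test <- create_oolong(input_model = abstracts_stm,
                             input_corpus = abstracts$text)

## Administer: each call launches an interactive Shiny gadget in which
## a human coder identifies the intruder word / intruder topic.
oolong_test$do_word_intrusion_test()
oolong_test$do_topic_intrusion_test()

## Evaluate: lock the object after coding, then summarize precision.
oolong_test$lock()
summarize_oolong(oolong_test)
```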

Version: 0.3.4
Depends: R (≥ 3.5)
Imports: stm, purrr, tibble, shiny, miniUI, text2vec (≥ 0.6), digest, R6, quanteda, irr, ggplot2, cowplot, dplyr, stats, utils
Suggests: testthat (≥ 2.1.0), topicmodels, covr, stringr, knitr, rmarkdown
Published: 2020-03-21
Author: Chung-hong Chan [aut, cre]
Maintainer: Chung-hong Chan <chainsawtiney at gmail.com>
BugReports: https://github.com/chainsawriot/oolong/issues
License: LGPL-2.1 | LGPL-3 [expanded from: LGPL (≥ 2.1)]
URL: https://github.com/chainsawriot/oolong
NeedsCompilation: no
Materials: NEWS
CRAN checks: oolong results

Downloads:

Reference manual: oolong.pdf
Vignettes: overview
Package source: oolong_0.3.4.tar.gz
Windows binaries: r-devel: not available, r-devel-gcc8: oolong_0.3.4.zip, r-release: oolong_0.3.4.zip, r-oldrel: not available
OS X binaries: r-release: oolong_0.3.4.tgz, r-oldrel: not available

Linking:

Please use the canonical form https://CRAN.R-project.org/package=oolong to link to this page.