cleanNLP: A Tidy Data Model for Natural Language Processing

Provides a set of fast tools for converting a textual corpus into a set of normalized tables. Users may make use of the Python back end 'spaCy' or the Java back end 'CoreNLP'. A minimal back end with no external dependencies is also provided. Exposed annotation tasks include tokenization, part-of-speech tagging, named entity recognition, entity linking, sentiment analysis, dependency parsing, coreference resolution, and word embeddings. Summary statistics on token unigram, part-of-speech tag, and dependency type frequencies are also included to assist with analyses.
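A minimal sketch of the workflow, assuming the package's 1.x interface (the function names init_tokenizers(), run_annotators(), and get_token() and the as_strings argument are taken from that era's API and should be checked against the reference manual):

```r
library(cleanNLP)

# Use the dependency-free tokenizers back end: no Python or Java needed
init_tokenizers()

# Annotate a character vector directly rather than a set of file paths
txt <- "The fox jumped. It ran away."
anno <- run_annotators(txt, as_strings = TRUE)

# Pull the normalized token table: one row per token, with document,
# sentence, and token identifiers as columns
tokens <- get_token(anno)
head(tokens)
```

The returned table is a tidy data frame, so it can be piped straight into dplyr verbs for frequency counts and other summaries.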

Version: 1.5.2
Depends: dplyr, readr, R (≥ 3.0)
Imports: Matrix, stats, methods, utils
Suggests: reticulate, rJava, tokenizers, RCurl, knitr, rmarkdown, testthat
Published: 2017-04-12
Author: Taylor B. Arnold [aut, cre]
Maintainer: Taylor B. Arnold <taylor.arnold at>
License: GPL-3
NeedsCompilation: no
SystemRequirements: Python (>= 2.7.0); spaCy (>= 1.7); Java (>= 7.0); Stanford CoreNLP (>= 3.7.0)
Materials: README
CRAN checks: cleanNLP results


Reference manual: cleanNLP.pdf
Vignettes: Introduction to the cleanNLP package
A Data Model for the NLP Pipeline
Package source: cleanNLP_1.5.2.tar.gz
Windows binaries: r-devel:, r-release:, r-oldrel:
OS X El Capitan binaries: r-release: cleanNLP_1.5.2.tgz
OS X Mavericks binaries: r-oldrel: cleanNLP_1.5.2.tgz
Old sources: cleanNLP archive
