Timings of Big Data visualization with tabplot

We test the speed of the tabplot package with a dataset of over 10,000,000 records. For this purpose we replicate the diamonds dataset from the ggplot2 package 200 times. This dataset contains 53,940 records and 10 variables.

Create testdata

require(ggplot2)
data(diamonds)
## add some NA's
is.na(diamonds$price) <- diamonds$cut == "Ideal"
is.na(diamonds$cut) <- (runif(nrow(diamonds)) > 0.8)
n <- nrow(diamonds)
N <- 200L * n

## convert to ff format (not enough memory otherwise)
require(ffbase)
diamondsff <- as.ffdf(diamonds)
nrow(diamondsff) <- N

## fill with copies of the original data
for (i in chunk(from=1, to=N, by=n)){
  diamondsff[i,] <- diamonds
}
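The `chunk()` helper (from the bit package, re-exported by ffbase) splits the full index range into pieces, so each iteration of the loop above writes one diamonds-sized block. A small illustration of the ranges it produces (the exact print format of the returned range-index objects may differ by version):

require(bit)
## split 1..10 into ranges of length 5
chunk(from = 1, to = 10, by = 5)
## a list of two range indexes covering 1..5 and 6..10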

Prepare data

The preparation step is the most time-consuming: for each column, the rank order is determined.

system.time(
    p <- tablePrepare(diamondsff)
)
##    user  system elapsed 
##   19.84    2.78   23.14
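Conceptually, the per-column rank order that tablePrepare stores is what base R's order() computes; doing this once up front means the tableplot calls below only need to bin already-ordered data. A toy sketch on a small data frame (this is not tabplot's internal code, just the idea):

df <- data.frame(x = c(3, 1, 2), y = c("b", "c", "a"))
## the permutation that would sort each column
lapply(df, order)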

Create tableplots

To focus on the processing time of the tableplot function, the plot argument is set to FALSE.

system.time(
    tab <- tableplot(p, plot=FALSE)
)
##    user  system elapsed 
##    3.80    0.98    4.81

The following tableplots are created from samples with respectively 100, 1,000, and 10,000 objects per bin.

system.time(
    tab <- tableplot(p, sample=TRUE, sampleBinSize=1e2, plot=FALSE)
)
##    user  system elapsed 
##    0.04    0.03    0.08
system.time(
    tab <- tableplot(p, sample=TRUE, sampleBinSize=1e3, plot=FALSE)
)
##    user  system elapsed 
##    0.25    0.06    0.44
system.time(
    tab <- tableplot(p, sample=TRUE, sampleBinSize=1e4, plot=FALSE)
)
##    user  system elapsed 
##    2.11    0.23    2.41
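The timings above deliberately skip rendering. To actually draw one of the computed tableplots, the returned object can be plotted afterwards; a sketch, assuming the tabplot version in use provides a plot method for the returned object:

tab <- tableplot(p, sample = TRUE, sampleBinSize = 1e4, plot = FALSE)
plot(tab)   ## render the precomputed tableplot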