mirai


Minimalist async evaluation framework for R.

Lightweight parallel code execution and distributed computing.

Designed for simplicity, a ‘mirai’ evaluates an R expression asynchronously, on local or network resources, resolving automatically upon completion.

mirai() returns a ‘mirai’ object immediately. ‘mirai’ (未来 みらい) is Japanese for ‘future’.

Efficient scheduling over fast inter-process communications or secure TLS connections over TCP/IP, built on ‘nanonext’ and ‘NNG’ (Nanomsg Next Gen).

{mirai} has a tiny, pure R code base, relying solely on {nanonext}, a high-performance binding for the NNG C library with zero package dependencies.

Installation

Install the latest release from CRAN:

install.packages("mirai")

or the development version from rOpenSci R-universe:

install.packages("mirai", repos = "https://shikokuchuo.r-universe.dev")

Quick Start

Use mirai() to evaluate an expression asynchronously in a separate, clean R process.

A ‘mirai’ object is returned immediately.

library(mirai)

m <- mirai(
  {
    res <- rnorm(x) + y ^ 2
    res / rev(res)
  },
  x = 10,
  y = runif(1)
)

m
#> < mirai | $data >

Above, all specified name = value pairs are passed through to the ‘mirai’.

The ‘mirai’ yields an ‘unresolved’ logical NA whilst the async operation is ongoing.

m$data
#> 'unresolved' logi NA

To check whether a mirai has resolved:

unresolved(m)
#> [1] FALSE

Upon completion, the ‘mirai’ resolves automatically to the evaluated result.

m$data
#>  [1]   6.34034300  -0.04935289 -16.62688852 -21.83976726   0.32279128
#>  [6]   3.09797711  -0.04578803  -0.06014354 -20.26223825   0.15772017

Alternatively, explicitly call and wait for the result using call_mirai().

call_mirai(m)$data
#>  [1]   6.34034300  -0.04935289 -16.62688852 -21.83976726   0.32279128
#>  [6]   3.09797711  -0.04578803  -0.06014354 -20.26223825   0.15772017
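If the evaluated expression throws an error, the host session is not interrupted; the ‘mirai’ instead resolves to an error value, which may be tested for with is_mirai_error(). A minimal sketch:

```r
library(mirai)

m <- mirai(stop("example error"))
call_mirai(m)            # wait for resolution; does not stop the host session
is_mirai_error(m$data)   # TRUE: the result is a 'miraiError', not a thrown error
```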

Daemons

Daemons are persistent background processes created to receive ‘mirai’ requests.

They may be deployed for:

local parallel processing, or

remote network distributed computing.
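For local parallel processing, daemons() sets the number of persistent background processes. A minimal sketch (the number of daemons here is arbitrary):

```r
library(mirai)

daemons(4)                 # launch 4 daemons on the local machine
m <- mirai(Sys.getpid())   # evaluated on one of the daemons
call_mirai(m)$data         # a PID distinct from the host process
daemons(0)                 # reset: shut down all daemons
```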

Launchers allow daemons to be started both on the local machine and across the network, e.g. via SSH.

Secure TLS connections can be automatically configured on the fly for remote daemon connections.
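A sketch of a remote setup under assumptions: the IP address below is a placeholder for your own remote machine, and SSH access to it is assumed.

```r
library(mirai)

daemons(
  n = 2,
  url = host_url(tls = TRUE),               # listen on this machine over TLS
  remote = ssh_config("ssh://10.75.32.90")  # placeholder host; launch daemons via SSH
)
daemons(0)                                  # reset when done
```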

Refer to the {mirai} vignette for full package functionality. This may be accessed within R by:

vignette("mirai", package = "mirai")

Integrations

The following core integrations are documented, with usage examples in the linked vignettes:

{parallel} - provides an alternative communications backend for R, implementing a low-level feature request by R-Core at R Project Sprint 2023.

{promises} - ‘mirai’ may be used interchangeably with ‘promises’ by using the promise pipe %...>% or the as.promise() method.

{plumber} - serves as an asynchronous / distributed backend, scaling applications via the use of promises.

{shiny} - serves as an asynchronous / distributed backend, plugging directly into the reactive framework without the need for promises.

{torch} - the custom serialization interface allows tensors and complex objects such as models and optimizers to be used seamlessly across parallel processes.
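As an illustrative sketch of the {parallel} integration above, a ‘miraiCluster’ created with make_cluster() can be supplied wherever a cluster object is accepted (the worker count here is arbitrary):

```r
library(mirai)

cl <- make_cluster(2)                          # a 'miraiCluster' for the parallel API
parallel::parLapply(cl, 1:4, function(x) x^2)  # returns a list: 1, 4, 9, 16
stop_cluster(cl)
```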

Powering Crew and Targets High Performance Computing

{targets}, a Make-like pipeline tool for statistics and data science, has integrated and adopted {crew} as its default high-performance computing backend.

{crew} is a distributed worker-launcher extending {mirai} to different distributed computing platforms, from traditional clusters to cloud services.

{crew.cluster} enables mirai-based workflows on traditional high-performance computing clusters using LSF, PBS/TORQUE, SGE and SLURM.

{crew.aws.batch} extends {mirai} to cloud computing using AWS Batch.

Thanks

We would like to thank in particular:

William Landau, for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of {crew} and {targets}.

Henrik Bengtsson, for valuable and incisive insights leading to the interface accepting broader usage patterns.

Luke Tierney, R Core, for discussion on R’s implementation of L’Ecuyer-CMRG streams, used to ensure statistical independence in parallel processing.

Daniel Falbel, for discussion around an efficient solution to serialization and transmission of {torch} tensors.


mirai website: https://shikokuchuo.net/mirai/
mirai on CRAN: https://cran.r-project.org/package=mirai

Listed in CRAN Task View:
- High Performance Computing: https://cran.r-project.org/view=HighPerformanceComputing

nanonext website: https://shikokuchuo.net/nanonext/
nanonext on CRAN: https://cran.r-project.org/package=nanonext

NNG website: https://nng.nanomsg.org/


Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.