Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/tidyverse/multidplyr
A dplyr backend that partitions a data frame over multiple processes
- Host: GitHub
- URL: https://github.com/tidyverse/multidplyr
- Owner: tidyverse
- License: other
- Created: 2015-11-05T22:55:06.000Z (about 9 years ago)
- Default Branch: main
- Last Pushed: 2024-07-29T14:46:13.000Z (4 months ago)
- Last Synced: 2024-10-30T16:52:25.455Z (12 days ago)
- Topics: dplyr, multiprocess
- Language: R
- Homepage: https://multidplyr.tidyverse.org
- Size: 2.32 MB
- Stars: 642
- Watchers: 40
- Forks: 75
- Open Issues: 18
Metadata Files:
- Readme: README.Rmd
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - tidyverse/multidplyr - A dplyr backend that partitions a data frame over multiple processes (R)
README
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```

# multidplyr
[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)
[![R-CMD-check](https://github.com/tidyverse/multidplyr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/tidyverse/multidplyr/actions/workflows/R-CMD-check.yaml)
[![Codecov test coverage](https://codecov.io/gh/tidyverse/multidplyr/branch/main/graph/badge.svg)](https://app.codecov.io/gh/tidyverse/multidplyr?branch=main)
[![CRAN status](https://www.r-pkg.org/badges/version/multidplyr)](https://cran.r-project.org/package=multidplyr)

## Overview
multidplyr is a backend for dplyr that partitions a data frame across multiple cores. You tell multidplyr how to split the data up with `partition()` and then the data stays on each node until you explicitly retrieve it with `collect()`. This minimises the amount of time spent moving data around, and maximises parallel performance. This idea is inspired by [partools](https://github.com/matloff/partools) by Norm Matloff and [distributedR](https://github.com/vertica/DistributedR) by the Vertica Analytics team.
Due to the overhead associated with communicating between the nodes, you won't see much performance improvement with simple operations on fewer than ~10 million observations, and you may want to instead try [dtplyr](https://dtplyr.tidyverse.org/), which uses [data.table](https://R-datatable.com/). multidplyr's strength lies in parallelising calls to slower and more complex functions.
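For comparison, a grouped summary with dtplyr looks something like this (a minimal sketch, assuming dtplyr and data.table are installed; the data set and columns are just placeholders):

``` r
library(dplyr)
library(dtplyr)

# lazy_dt() builds a data.table translation of the pipeline;
# as_tibble() forces computation and returns an ordinary tibble
mtcars %>%
  lazy_dt() %>%
  group_by(cyl) %>%
  summarise(mpg = mean(mpg)) %>%
  as_tibble()
```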
(Note that unlike other packages in the tidyverse, multidplyr requires R 3.5 or greater. We hope to relax this requirement [in the future](https://github.com/traversc/qs/issues/11).)
## Installation
You can install the released version of multidplyr from [CRAN](https://CRAN.R-project.org) with:
``` r
install.packages("multidplyr")
```

And the development version from [GitHub](https://github.com/) with:
``` r
# install.packages("pak")
pak::pak("tidyverse/multidplyr")
```

## Usage
To use multidplyr, you first create a cluster of the desired number of workers. Each one of these workers is a separate R process, and the operating system will spread their execution across multiple cores:
```{r setup}
library(multidplyr)

cluster <- new_cluster(4)
cluster_library(cluster, "dplyr")
```

There are two primary ways to use multidplyr. The first, and most efficient, way is to read different files on each worker:
```{r, eval = FALSE}
# Create a filename vector containing different values on each worker
cluster_assign_each(cluster, filename = c("a.csv", "b.csv", "c.csv", "d.csv"))

# Use vroom to quickly load the csvs
cluster_send(cluster, my_data <- vroom::vroom(filename))

# Create a party_df using the my_data variable on each worker
my_data <- party_df(cluster, "my_data")
```

Alternatively, if you already have the data loaded in the main session, you can use `partition()` to automatically spread it across the workers. Before calling `partition()`, it's a good idea to call `group_by()` to ensure that all of the observations belonging to a group end up on the same worker.
```{r}
library(nycflights13)

flight_dest <- flights %>% group_by(dest) %>% partition(cluster)
flight_dest
```

Now you can work with it like a regular data frame, but the computations will be spread across multiple cores. Once you've finished computation, use `collect()` to bring the data back to the host session:
```{r}
flight_dest %>%
  summarise(delay = mean(dep_delay, na.rm = TRUE), n = n()) %>%
  collect()
```

Note that there is some overhead associated with copying data from the worker nodes back to the host node (and vice versa), so you're best off using multidplyr with more complex operations. See `vignette("multidplyr")` for more details.
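For example, a deliberately slow per-group summary is the kind of workload where the parallel workers pay for themselves. This is a minimal sketch, not from the vignette: `slow_mean()` and its 0.1 second sleep just stand in for an expensive computation.

``` r
slow_mean <- function(x) {
  Sys.sleep(0.1)                     # stand-in for an expensive calculation
  mean(x, na.rm = TRUE)
}
cluster_copy(cluster, "slow_mean")   # copy the helper to every worker

flight_dest %>%
  summarise(delay = slow_mean(dep_delay)) %>%
  collect()
```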