{"id":36212688,"url":"https://github.com/fastverse/collapse","last_synced_at":"2026-01-11T04:04:40.178Z","repository":{"id":37446831,"uuid":"172910283","full_name":"fastverse/collapse","owner":"fastverse","description":"Advanced and Fast Data Transformation in R","archived":false,"fork":false,"pushed_at":"2026-01-10T08:28:17.000Z","size":118483,"stargazers_count":696,"open_issues_count":16,"forks_count":37,"subscribers_count":8,"default_branch":"master","last_synced_at":"2026-01-11T01:45:13.884Z","etag":null,"topics":["cran","data-aggregation","data-analysis","data-manipulation","data-processing","data-science","data-transformation","econometrics","high-performance","panel-data","r","rstats","scientific-computing","statistics","time-series","weighted","weights"],"latest_commit_sha":null,"homepage":"https://fastverse.org/collapse","language":"C","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fastverse.png","metadata":{"files":{"readme":"README.md","changelog":"NEWS.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2019-02-27T12:21:05.000Z","updated_at":"2026-01-10T06:42:40.000Z","dependencies_parsed_at":"2023-09-22T14:19:56.494Z","dependency_job_id":"5016c4d2-e809-4ce2-bbf4-bb375cd0c0ed","html_url":"https://github.com/fastverse/collapse","commit_stats":{"total_commits":2372,"total_committers":11,"mean_commits":"215.63636363636363","dds":"0.025295109612141653","last_synced_commit":"e17a54679b741b9617c5a88feadb7e54e44db346"},"previous_names":["fastverse/collapse"],"tags_c
ount":60,"template":false,"template_full_name":null,"purl":"pkg:github/fastverse/collapse","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastverse%2Fcollapse","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastverse%2Fcollapse/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastverse%2Fcollapse/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastverse%2Fcollapse/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fastverse","download_url":"https://codeload.github.com/fastverse/collapse/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastverse%2Fcollapse/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28280483,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-11T03:48:11.750Z","status":"ssl_error","status_checked_at":"2026-01-11T03:48:02.765Z","response_time":60,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cran","data-aggregation","data-analysis","data-manipulation","data-processing","data-science","data-transformation","econometrics","high-performance","panel-data","r","rstats","scientific-computing","statistics","time-series","weighted","weights"],"created_at":"2026-01-11T04:04:35.225Z","updated_at":"2026-01-11T04:04:40.170Z","avatar_url":"https://github.com/fastverse.png","language":"C","readme":"# collapse \u003cimg src='man/figures/logo.png' width=\"150px\" align=\"right\" /\u003e\n\n\u003c!-- badges: start --\u003e\n[![R-CMD-check](https://github.com/fastverse/collapse/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/fastverse/collapse/actions/workflows/R-CMD-check.yaml)\n[![collapse status badge](https://fastverse.r-universe.dev/badges/collapse)](https://fastverse.r-universe.dev/collapse)\n[![CRAN status](https://www.r-pkg.org/badges/version/collapse)](https://cran.r-project.org/package=collapse) \n[![cran checks](https://badges.cranchecks.info/worst/collapse.svg)](https://cran.r-project.org/web/checks/check_results_collapse.html)\n![downloads per month](https://cranlogs.r-pkg.org/badges/collapse) \u003c!-- ?color=blue --\u003e\n![downloads](https://cranlogs.r-pkg.org/badges/grand-total/collapse) \u003c!-- ?color=blue --\u003e\n [![Conda Version](https://img.shields.io/conda/vn/conda-forge/r-collapse.svg)](https://anaconda.org/conda-forge/r-collapse)\n [![Conda 
Downloads](https://img.shields.io/conda/dn/conda-forge/r-collapse.svg)](https://anaconda.org/conda-forge/r-collapse)\n[![Codecov test coverage](https://codecov.io/gh/fastverse/collapse/branch/master/graph/badge.svg)](https://app.codecov.io/gh/fastverse/collapse?branch=master)\n[![minimal R version](https://img.shields.io/badge/R%3E%3D-3.5.0-6666ff.svg)](https://cran.r-project.org/)\n[![dependencies](https://tinyverse.netlify.app/badge/collapse)](https://CRAN.R-project.org/package=collapse)\n[![DOI](https://zenodo.org/badge/172910283.svg)](https://zenodo.org/badge/latestdoi/172910283)\n[![arXiv](https://img.shields.io/badge/arXiv-2403.05038-0969DA.svg)](https://arxiv.org/abs/2403.05038)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/fastverse/collapse)\n\u003c!-- badges: end --\u003e\n\n*collapse* is a large C/C++-based package for data transformation and statistical computing in R. It aims to:\n\n* Facilitate complex data transformation, exploration and computing tasks in R.\n* Help make R code fast, flexible, parsimonious and programmer friendly. \n\nIts novel [class-agnostic architecture](https://fastverse.org/collapse/articles/collapse_object_handling.html) supports all basic R objects and their popular extensions, including *units*, *integer64*, *xts*/*zoo*, *tibble*, *grouped_df*, *data.table*, *sf*, *pseries* and *pdata.frame*. \n\n\n\n**Key Features:**\n\n*  **Advanced statistical programming**: A full set of fast statistical functions \n        supporting grouped and weighted computations on vectors, matrices and \n        data frames. Fast and programmable grouping, ordering, matching, deduplication, \n        factor generation and interactions. 
\n        \n* **Fast data manipulation**: Fast and flexible functions for data \n        manipulation, data object conversions and memory efficient R programming.\n\n*  **Advanced aggregation**: Fast and easy multi-type, weighted and parallelized data aggregation.\n\n*  **Advanced transformations**: Fast row/column arithmetic, (grouped) sweeping out of statistics (by reference), \n        (grouped, weighted) scaling and (higher-dimensional) centering and averaging.\n\n*  **Advanced time-computations**: Fast and flexible indexed time series and panel data classes, lags/leads, \n       differences and (compound) growth rates on (irregular) time series and panels, panel-autocorrelation functions and panel data to array conversions.\n\n*  **List processing**: Recursive list search, filtering, splitting, apply and unlisting to data frame.\n\n* **Advanced data exploration**: Fast (grouped, weighted, multi-level) descriptive statistical tools.\n\n*collapse* is written in C and C++, with algorithms much faster than base R's, has extremely low evaluation overheads, scales well (benchmarks: [linux](https://duckdblabs.github.io/db-benchmark/) | [windows](https://github.com/AdrianAntico/Benchmarks?tab=readme-ov-file#benmark-results)), and excels on complex statistical tasks. \u003c!--, such as weighted statistics, mode/counting/deduplication, joins, pivots, panel data.  Optimized R code ensures minimal evaluation overheads.  , but imports C/C++ functions from *fixest*, *weights*, *RcppArmadillo*, and *RcppEigen* for certain statistical tasks.  
--\u003e\n\n## Installation\n\n``` r\n# Install the current version on CRAN\ninstall.packages(\"collapse\")\n\n# Install a stable development version (Windows/Mac binaries) from R-universe\ninstall.packages(\"collapse\", repos = \"https://fastverse.r-universe.dev\")\n\n# Install a stable development version from GitHub (requires compilation)\nremotes::install_github(\"fastverse/collapse\")\n\n# Install previous versions from the CRAN Archive (requires compilation)\ninstall.packages(\"https://cran.r-project.org/src/contrib/Archive/collapse/collapse_2.0.19.tar.gz\", \n                 repos = NULL, type = \"source\") \n# Older stable versions: 1.9.6, 1.8.9, 1.7.6, 1.6.5, 1.5.3, 1.4.2, 1.3.2, 1.2.1\n```\n\n## Documentation\n\n*collapse* installs with a built-in structured [documentation](\u003chttps://fastverse.org/collapse/reference/collapse-documentation.html\u003e), implemented via a set of separate help pages. Calling `help('collapse-documentation')` brings up the top-level documentation page, providing an overview of the entire package and links to all other documentation pages. 
\n\nIn addition there are several [vignettes](\u003chttps://fastverse.org/collapse/articles/index.html\u003e), among them one on [Documentation and Resources](https://fastverse.org/collapse/articles/collapse_documentation.html).\n\n### Cheatsheet\n\n\u003ca href=\"https://raw.githubusercontent.com/fastverse/collapse/master/misc/collapse%20cheat%20sheet/collapse_cheat_sheet.pdf\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/fastverse/collapse/master/misc/collapse%20cheat%20sheet/preview/page1.png\" width=\"330\"/\u003e\u003c/a\u003e  \u003c!-- height=\"227\" 294 --\u003e\n\u003ca href=\"https://raw.githubusercontent.com/fastverse/collapse/master/misc/collapse%20cheat%20sheet/collapse_cheat_sheet.pdf\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/fastverse/collapse/master/misc/collapse%20cheat%20sheet/preview/page2.png\" width=\"330\"/\u003e\u003c/a\u003e \n\n### Article on arXiv\n\nAn [**article**](https://arxiv.org/abs/2403.05038) on *collapse* is forthcoming at [Journal of Statistical Software](https://www.jstatsoft.org/). \n\n### Presentation at [useR 2022](https://user2022.r-project.org)\n\n[**Video Recording**](\u003chttps://www.youtube.com/watch?v=OwWT1-dSEts\u003e) | \n[**Slides**](\u003chttps://raw.githubusercontent.com/fastverse/collapse/master/misc/useR2022%20presentation/collapse_useR2022_final.pdf\u003e)\n\n## Example Usage\nThis provides a simple set of examples introducing some important features of *collapse*. It should be easy to follow for readers familiar with R. 
\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003e\u003ca style=\"cursor: pointer;\"\u003eClick here to expand \u003c/a\u003e\u003c/b\u003e \u003c/summary\u003e\n  \n``` r\nlibrary(collapse)\ndata(\"iris\")            # iris dataset in base R\nv \u003c- iris$Sepal.Length  # Vector\nd \u003c- num_vars(iris)     # Saving numeric variables (could also be a matrix, statistical functions are S3 generic)\ng \u003c- iris$Species       # Grouping variable (could also be a list of variables)\n\n## Advanced Statistical Programming -----------------------------------------------------------------------------\n\n# Simple (column-wise) statistics...\nfmedian(v)                       # Vector\nfsd(qM(d))                       # Matrix (qM is a faster as.matrix)\nfmode(d)                         # data.frame\nfmean(qM(d), drop = FALSE)       # Still a matrix\nfmax(d, drop = FALSE)            # Still a data.frame\n\n# Fast grouped and/or weighted statistics\nw \u003c- abs(rnorm(fnrow(iris)))\nfmedian(d, w = w)                 # Simple weighted statistics\nfnth(d, 0.75, g)                  # Grouped statistics (grouped third quartile)\nfmedian(d, g, w)                  # Groupwise-weighted statistics\nfsd(v, g, w)                      # Similarly for vectors\nfmode(qM(d), g, w, ties = \"max\")  # Or matrices (grouped and weighted maximum mode) ...\n\n# A fast set of data manipulation functions allows complex piped programming at high speeds\nlibrary(magrittr)                            # Pipe operators\niris %\u003e% fgroup_by(Species) %\u003e% fndistinct   # Grouped distinct value counts\niris %\u003e% fgroup_by(Species) %\u003e% fmedian(w)   # Weighted group medians \niris %\u003e% add_vars(w) %\u003e%                     # Adding weight vector to dataset\n  fsubset(Sepal.Length \u003c fmean(Sepal.Length), Species, Sepal.Width:w) %\u003e% # Fast selecting and subsetting\n  fgroup_by(Species) %\u003e%                     # Grouping (efficiently creates a grouped tibble)\n  
fvar(w) %\u003e%                                # Frequency-weighted group-variance, default (keep.w = TRUE)  \n  roworder(sum.w)                            # also saves group weights in a column called 'sum.w'\n\n# Can also use dplyr (but dplyr manipulation verbs are a lot slower)\nlibrary(dplyr)\niris %\u003e% add_vars(w) %\u003e% \n  filter(Sepal.Length \u003c fmean(Sepal.Length)) %\u003e% \n  select(Species, Sepal.Width:w) %\u003e% \n  group_by(Species) %\u003e% \n  fvar(w) %\u003e% arrange(sum.w)\n  \n## Fast Data Manipulation ---------------------------------------------------------------------------------------\n\nhead(GGDC10S)\n\n# Pivot Wider: Only SUM (total)\nSUM \u003c- GGDC10S |\u003e pivot(c(\"Country\", \"Year\"), \"SUM\", \"Variable\", how = \"wider\")\nhead(SUM)\n\n# Joining with data from wlddev\nwlddev |\u003e\n    join(SUM, on = c(\"iso3c\" = \"Country\", \"year\" = \"Year\"), how = \"inner\")\n\n# Recast pivoting + supplying new labels for generated columns\npivot(GGDC10S, values = 6:16, names = list(\"Variable\", \"Sectorcode\"),\n      labels = list(to = \"Sector\",\n                    new = c(Sectorcode = \"GGDC10S Sector Code\",\n                            Sector = \"Long Sector Description\",\n                            VA = \"Value Added\",\n                            EMP = \"Employment\")), \n      how = \"recast\", na.rm = TRUE)\n\n## Advanced Aggregation -----------------------------------------------------------------------------------------\n\ncollap(iris, Sepal.Length + Sepal.Width ~ Species, fmean)  # Simple aggregation using the mean..\ncollap(iris, ~ Species, list(fmean, fmedian, fmode))       # Multiple functions applied to each column\nadd_vars(iris) \u003c- w                                        # Adding weights, return in long format..\ncollap(iris, ~ Species, list(fmean, fmedian, fmode), w = ~ w, return = \"long\")\n\n# Generate some additional logical data\nsettransform(iris, AWMSL = Sepal.Length \u003e 
fmedian(Sepal.Length, w = w), \n                   AWMSW = Sepal.Width \u003e fmedian(Sepal.Width, w = w))\n\n# Multi-type data aggregation: catFUN applies to all categorical columns (here AWMSW)\ncollap(iris, ~ Species + AWMSL, list(fmean, fmedian, fmode), \n       catFUN = fmode, w = ~ w, return = \"long\")\n\n# Custom aggregation gives the greatest possible flexibility: directly mapping functions to columns\ncollap(iris, ~ Species + AWMSL, \n       custom = list(fmean = 2:3, fsd = 3:4, fmode = \"AWMSL\"), w = ~ w, \n       wFUN = list(fsum, fmin, fmax), # Here also aggregating the weight vector with 3 different functions\n       keep.col.order = FALSE)        # Column order not maintained -\u003e grouping and weight variables first\n\n# Can also use grouped tibble: weighted median for numeric, weighted mode for categorical columns\niris %\u003e% fgroup_by(Species, AWMSL) %\u003e% collapg(fmedian, fmode, w = w)\n\n## Advanced Transformations -------------------------------------------------------------------------------------\n\n# All Fast Statistical Functions have a TRA argument, supporting 10 different replacing and sweeping operations\nfmode(d, TRA = \"replace\")     # Replacing values with the mode\nfsd(v, TRA = \"/\")             # dividing by the overall standard deviation (scaling)\nfsum(d, TRA = \"%\")            # Computing percentages\nfsd(d, g, TRA = \"/\")          # Grouped scaling\nfmin(d, g, TRA = \"-\")         # Setting the minimum value in each species to 0\nffirst(d, g, TRA = \"%%\")      # Taking modulus of first value in each species\nfmedian(d, g, w, \"-\")         # Groupwise centering by the weighted median\nfnth(d, 0.95, g, w, \"%\")      # Expressing data in percentages of the weighted species-wise 95th percentile\nfmode(d, g, w, \"replace\",     # Replacing data by the species-wise weighted minimum-mode\n      ties = \"min\")\n\n# TRA() can also be called directly to replace or sweep with a matching set of computed statistics\nTRA(v, 
sd(v), \"/\")                       # Same as fsd(v, TRA = \"/\")\nTRA(d, fmedian(d, g, w), \"-\", g)         # Same as fmedian(d, g, w, \"-\")\nTRA(d, BY(d, g, quantile, 0.95), \"%\", g) # Same as fnth(d, 0.95, g, TRA = \"%\") (apart from quantile algorithm)\n\n# For common uses, there are some faster and more advanced functions\nfbetween(d, g)                           # Grouped averaging [same as fmean(d, g, TRA = \"replace\") but faster]\nfwithin(d, g)                            # Grouped centering [same as fmean(d, g, TRA = \"-\") but faster]\nfwithin(d, g, w)                         # Grouped and weighted centering [same as fmean(d, g, w, \"-\")]\nfwithin(d, g, w, theta = 0.76)           # Quasi-centering i.e. d - theta*fbetween(d, g, w)\nfwithin(d, g, w, mean = \"overall.mean\")  # Preserving the overall weighted mean of the data\n\nfscale(d)                                # Scaling and centering (default mean = 0, sd = 1)\nfscale(d, mean = 5, sd = 3)              # Custom scaling and centering\nfscale(d, mean = FALSE, sd = 3)          # Mean preserving scaling\nfscale(d, g, w)                          # Grouped and weighted scaling and centering\nfscale(d, g, w, mean = \"overall.mean\",   # Setting group means to overall weighted mean,\n       sd = \"within.sd\")                 # and group sd's to fsd(fwithin(d, g, w), w = w)\n\nget_vars(iris, 1:2)                      # Use get_vars for fast selecting data.frame columns, gv is shortcut\nfhdbetween(gv(iris, 1:2), gv(iris, 3:5)) # Linear prediction with factors and continuous covariates\nfhdwithin(gv(iris, 1:2), gv(iris, 3:5))  # Linear partialling out factors and continuous covariates\n\n# This again opens up new possibilities for data manipulation...\niris %\u003e%  \n  ftransform(ASWMSL = Sepal.Length \u003e fmedian(Sepal.Length, Species, w, \"replace\")) %\u003e%\n  fgroup_by(ASWMSL) %\u003e% collapg(w = w, keep.col.order = FALSE)\n\niris %\u003e% fgroup_by(Species) %\u003e% num_vars %\u003e% fwithin(w) 
 # Weighted demeaning\n\n\n## Time Series and Panel Series ---------------------------------------------------------------------------------\n\nflag(AirPassengers, -1:3)                      # A sequence of lags and leads\nEuStockMarkets %\u003e%                             # A sequence of first and second seasonal differences\n  fdiff(0:1 * frequency(.), 1:2)  \nfdiff(EuStockMarkets, rho = 0.95)              # Quasi-difference [x - rho*flag(x)]\nfdiff(EuStockMarkets, log = TRUE)              # Log-difference [log(x/flag(x))]\nEuStockMarkets %\u003e% fgrowth(c(1, frequency(.))) # Ordinary and seasonal growth rate\nEuStockMarkets %\u003e% fgrowth(logdiff = TRUE)     # Log-difference growth rate [log(x/flag(x))*100]\n\n# Creating panel data\npdata \u003c- EuStockMarkets %\u003e% list(`A` = ., `B` = .) %\u003e% \n         unlist2d(idcols = \"Id\", row.names = \"Time\")  \n\nL(pdata, -1:3, ~Id, ~Time)                   # Sequence of fully identified panel-lags (L is operator for flag) \npdata %\u003e% fgroup_by(Id) %\u003e% flag(-1:3, Time) # Same thing..\n\n# collapse also supports indexed series and data frames (and plm panel data classes)\npdata \u003c- findex_by(pdata, Id, Time)         \nL(pdata, -1:3)          # Same as above, ...\npsacf(pdata)            # Multivariate panel-ACF\npsmat(pdata) %\u003e% plot   # 3D-array of time series from panel data + plotting\n\nHDW(pdata)              # This projects out id and time fixed effects.. (HDW is operator for fhdwithin)\nW(pdata, effect = \"Id\") # Only Id effects.. 
(W is operator for fwithin)\n\n## List Processing ----------------------------------------------------------------------------------------------\n\n# Some nested list of heterogeneous data objects..\nl \u003c- list(a = qM(mtcars[1:8]),                                   # Matrix\n          b = list(c = mtcars[4:11],                             # data.frame\n                   d = list(e = mtcars[2:10], \n                            f = fsd(mtcars))))                   # Vector\n\nldepth(l)                       # List has 4 levels of nesting (considering that mtcars is a data.frame)\nis_unlistable(l)                # Can be unlisted\nhas_elem(l, \"f\")                # Contains an element by the name of \"f\"\nhas_elem(l, is.matrix)          # Contains a matrix\n\nget_elem(l, \"f\")                # Recursive extraction of elements..\nget_elem(l, c(\"c\",\"f\"))         \nget_elem(l, c(\"c\",\"f\"), keep.tree = TRUE)\nunlist2d(l, row.names = TRUE)   # Intelligent recursive row-binding to data.frame   \nrapply2d(l, fmean) %\u003e% unlist2d # Taking the mean of all elements and repeating\n\n# Application: extracting and tidying results from (potentially nested) lists of model objects\nlist(mod1 = lm(mpg ~ carb, mtcars), \n     mod2 = lm(mpg ~ carb + hp, mtcars)) %\u003e%\n  lapply(summary) %\u003e% \n  get_elem(\"coef\", regex = TRUE) %\u003e%   # Regular expression search and extraction\n  unlist2d(idcols = \"Model\", row.names = \"Predictor\")\n\n## Summary Statistics -------------------------------------------------------------------------------------------\n\nirisNA \u003c- na_insert(iris, prop = 0.15)  # Randomly set 15% missing\nfnobs(irisNA)                           # Observation count\npwnobs(irisNA)                          # Pairwise observation count\nfnobs(irisNA, g)                        # Grouped observation count\nfndistinct(irisNA)                      # Same with distinct values... 
(default na.rm = TRUE skips NA's)\nfndistinct(irisNA, g)  \n\ndescr(iris)                                   # Detailed statistical description of data\n\nvarying(iris, ~ Species)                      # Show which variables vary within Species\nvarying(pdata)                                # Which are time-varying ? \nqsu(iris, w = ~ w)                            # Fast (one-pass) summary (with weights)\nqsu(iris, ~ Species, w = ~ w, higher = TRUE)  # Grouped summary + higher moments\nqsu(pdata, higher = TRUE)                     # Panel-data summary (between and within entities)\npwcor(num_vars(irisNA), N = TRUE, P = TRUE)   # Pairwise correlations with p-value and observations\npwcor(W(pdata, keep.ids = FALSE), P = TRUE)   # Within-correlations\n\n```\n\n\u003c/details\u003e\n\u003cp\u003e \u003c/p\u003e\n\nEvaluated and more extensive sets of examples are provided on the [package page](\u003chttps://fastverse.org/collapse/reference/collapse-package.html\u003e) (also accessible from R by calling `example('collapse-package')`), and further in the [vignettes](\u003chttps://fastverse.org/collapse/articles/index.html\u003e) and  [documentation](\u003chttps://fastverse.org/collapse/reference/index.html\u003e).\n\n## Citation\n\nIf *collapse* was instrumental for your research project, please consider citing it using `citation(\"collapse\")`.\n\n\n\n","funding_links":[],"categories":["C"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffastverse%2Fcollapse","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffastverse%2Fcollapse","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffastverse%2Fcollapse/lists"}