{"id":18696626,"url":"https://github.com/sergiocorreia/ftools","last_synced_at":"2026-02-12T00:01:50.020Z","repository":{"id":46536181,"uuid":"63449974","full_name":"sergiocorreia/ftools","owner":"sergiocorreia","description":"Fast Stata commands for large datasets","archived":false,"fork":false,"pushed_at":"2023-08-21T03:06:23.000Z","size":840,"stargazers_count":136,"open_issues_count":13,"forks_count":41,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-08-28T23:57:26.279Z","etag":null,"topics":["collapse","data-manipulation","egen","factor","mata","merge","stata"],"latest_commit_sha":null,"homepage":null,"language":"Stata","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sergiocorreia.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2016-07-15T20:43:31.000Z","updated_at":"2025-08-22T02:04:29.000Z","dependencies_parsed_at":"2022-09-02T14:31:01.004Z","dependency_job_id":"b537e331-2736-41d1-9843-80a0269b6cf0","html_url":"https://github.com/sergiocorreia/ftools","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/sergiocorreia/ftools","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiocorreia%2Fftools","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiocorreia%2Fftools/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiocorreia%2Fftools/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiocorreia%2Fftools/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sergiocorreia
","download_url":"https://codeload.github.com/sergiocorreia/ftools/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sergiocorreia%2Fftools/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29350079,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-11T20:11:40.865Z","status":"ssl_error","status_checked_at":"2026-02-11T20:10:41.637Z","response_time":97,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["collapse","data-manipulation","egen","factor","mata","merge","stata"],"created_at":"2024-11-07T11:19:54.701Z","updated_at":"2026-02-12T00:01:50.000Z","avatar_url":"https://github.com/sergiocorreia.png","language":"Stata","readme":"# FTOOLS: A faster Stata for large datasets\n![GitHub release (latest by date)](https://img.shields.io/github/v/release/sergiocorreia/ftools?label=last%20version)\n![GitHub Release Date](https://img.shields.io/github/release-date/sergiocorreia/ftools)\n![GitHub commits since latest release (by date)](https://img.shields.io/github/commits-since/sergiocorreia/ftools/latest)\n![StataMin](https://img.shields.io/badge/stata-%3E%3D%2012.1-blue)\n[![DOI](https://zenodo.org/badge/63449974.svg)](https://zenodo.org/badge/latestdoi/63449974)\n- Jump to: [`usage`](#usage) [`benchmarks`](#benchmarks) 
[`install`](#installation)\n\n-----------\n\n## Introduction\n\nSome of the most common Stata commands (collapse, merge, sort, etc.) are not designed for large datasets. This package provides alternative implementations that solve this problem, speeding up these commands by 3x-10x:\n\n![collapse benchmark](docs/benchmark_small.png \"collapse benchmark\")\n\nOther user commands that are very useful for speeding up Stata with large datasets include:\n\n- [`gtools`](https://github.com/mcaceresb/stata-gtools), a package similar to `ftools` but written in C. In most cases it's much faster than both ftools and the standard Stata commands, as shown in the graph above. Try it out!\n- [`sumup`](https://github.com/matthieugomez/sumup.ado) provides fast summary statistics, and includes the `fasttabstat` command, a faster version of `tabstat`.\n- [`egenmisc`](https://github.com/matthieugomez/stata-egenmisc) introduces the egen functions `fastxtile`, `fastwpctile`, etc. that provide much faster alternatives to `xtile` and `pctile`. 
Also see the [`fastxtile`](https://github.com/michaelstepner/fastxtile) package, which provides similar functionality.\n- [`randomtag`](https://ideas.repec.org/c/boc/bocode/s457898.html) is a much faster alternative to `sample`.\n- [`reghdfe`](https://github.com/sergiocorreia/reghdfe/) provides a faster alternative to `xtreg` and `areg`, as well as multi-way clustering and IV regression.\n- [`parallel`](https://github.com/gvegayon/parallel) allows for easier parallel computing in Stata (useful when running simulations, reshaping, etc.)\n- [`boottest`](https://github.com/droodman/boottest), for efficiently running wild bootstraps.\n- The [`rangerun`](https://ideas.repec.org/c/boc/bocode/s458356.html), [`runby`](https://ideas.repec.org/c/boc/bocode/s458413.html) and [`rangestat`](https://ideas.repec.org/c/boc/bocode/s458161.html) commands are useful for running commands and collecting statistics on rolling windows of observations.\n\n`ftools` can also be used to speed up your own commands. For more information, see [this presentation](https://github.com/sergiocorreia/ftools/raw/master/docs/baltimore17_correia.pdf) from the 2017 Stata Conference (slides 14 and 15 show how to create faster alternatives to `unique` and `xmiss` with only a couple lines of code). Also, see `help ftools` for the detailed documentation.\n\n\n## Details\n\n**ftools** is two things:\n\n1. A list of Stata commands optimized for large datasets, replacing commands such as: collapse, contract, merge, egen, sort, levelsof, etc.\n2. A Mata class (*Factor*) that focuses on working with categorical variables. 
This class is what makes the above commands fast, and is also what powers [`reghdfe`](https://github.com/sergiocorreia/reghdfe).\n\nCurrently the following commands are implemented:\n\n- `fegen group` replacing `egen group`\n- `fcollapse` replacing `collapse`, `contract` and most of `egen` (through the `, merge` option)\n- `join` (and its wrapper `fmerge`) replacing `merge`\n- `fisid` replacing `isid`\n- `flevelsof` replacing `levelsof`\n- `fsort` replacing `sort` (although it is [rarely](https://github.com/sergiocorreia/ftools/blob/master/test/bench_sort.do) faster than sort)\n\n\n## Usage\n\n```stata\n* Stata usage:\nsysuse auto\n\nfsort turn\nfegen id = group(turn trunk)\nfcollapse (sum) price (mean) gear, by(turn foreign) freq\n\n* Advanced: creating the .mlib library:\nftools, compile\n\n* Mata usage:\nsysuse auto, clear\nmata: F = factor(\"turn\")\nmata: F.keys, F.counts\nmata: sorted_price = F.sort(st_data(., \"price\"))\n```\n\nOther features include:\n\n- Add your own functions to -fcollapse-\n- View the levels of each variable with `mata: F.keys`\n- Embed -factor()- into your own Mata program. For this, you can\n  use `F.sort()` and the built-in `panelsubmatrix()`.\n\n## Benchmarks\n\n(see the *test* folder for the details of the tests and benchmarks)\n\n### egen group\n\nGiven a dataset with 20 million obs. 
and 5 variables, we create the following variable and then create IDs based on it:\n\n```stata\ngen long x = ceil(uniform()*5000)\n```\n\nThen, we compare five different variants of egen group:\n\n| Method               | Min    | Avg    |\n|----------------------|--------|--------|\n| egen id = group(x)           | 49.17 | 51.26 |\n| fegen id = group(x)  | 1.44  | 1.53  |\n| fegen id = group(x), method(hash0)      | 1.41  | 1.60  |\n| fegen id = group(x), method(hash1)      | 8.87  | 9.35  |\n| fegen id = group(x), method(stata)     | 34.73 | 35.43  |\n\nOur variant takes roughly 3% of the time of egen group.\nIf we were to choose a more complex hash method, it would take 18% of the time.\nWe also report the most efficient method based on Stata (which uses `bysort`),\nwhich is still significantly slower than our Mata approach.\n\nNotes:\n\n- The gap is larger in systems with two or fewer cores, and smaller in systems with many cores (because our approach does not take much advantage of multicore)\n- The gap is larger in datasets with more observations or variables.\n- The gap is larger with fewer levels\n\n### collapse\n\nOn a dataset of similar size, we ran `collapse (sum) y1-y15, by(x3)` where `x3` takes 100 different values:\n\n| Method                     | Time  | % of Collapse |\n|----------------------------|-------|---------------|\n| collapse … , fast          | 81.87 | 100%          |\n| [sumup](https://github.com/matthieugomez/stata-sumup)                      | 56.18 | 69%           |\n| fcollapse … , fast         | 38.54 | 47%           |\n| fcollapse … , fast pool(5) | 28.32 | 35%           |\n| tab ...                    | 9.39  | 11%           |\n\nWe can see that `fcollapse` takes roughly a third of the time of `collapse`\n(although it uses more memory when moving data from Stata to Mata).\nAs a comparison, tabulating the data (one of the most efficient Stata operations) takes 11% of the time of `collapse`.\n\nAlternatively, the `pool(#)` option will use very little memory (similar to `collapse`) while also achieving very good speeds.\n\nNotes:\n\n- The gap is larger if you want to collapse fewer variables\n- The gap is larger if you want to collapse to fewer levels\n- The gap is larger for more complex stats (such as median).\n- `compress`ing the by() identifiers beforehand might lead to significant improvements in speed (by allowing the use of the internal hash0 function instead of hash1).\n- On a computer with less memory, it seems `pool(#)` might actually be faster.\n\n\n#### collapse: alternative benchmark\n\nWe can run a more complex query, collapsing means and medians instead of sums, also with 20mm obs.:\n\n\n| Method                     | Time  | % of Collapse |\n|----------------------------|-------|---------------|\n| collapse … , fast          | 81.06 | 100%          |\n| [sumup](https://github.com/matthieugomez/stata-sumup)                      | 67.05 | 83%           |\n| fcollapse … , fast         | 30.93 | 38%           |\n| fcollapse … , fast pool(5) | 33.85 | 42%           |\n| tab                        | 8.06  | 10%           |\n\n*(Note: `sumup` might be better for medium-sized datasets, although some benchmarking is needed)*\n\nWe can see that the results are similar.\n\n\n### join (and fmerge)\n\n\nSimilar to `merge` but avoids sorting the datasets. It is faster than `merge`\nfor datasets larger than ~ 100,000 obs., and for datasets above 1mm obs. 
it\ntakes a third of the time.\n\n[Benchmark:](https://github.com/sergiocorreia/ftools/blob/master/test/bench_merge.do)\n\n| Method      | Time  | % of merge |\n|-------------|-------|------------|\n| merge       | 28.89 | 100%       |\n| join/fmerge | 8.69  | 30%        |\n\n\n### fisid\n\nSimilar to `isid`, but it allows `if` and `in`, while not supporting `using` and `sort`.\n\nIn very large datasets, it takes roughly a third of the time of `isid`.\n\n\n### flevelsof\n\nProvides the same results as `levelsof`.\n\nIn large datasets, it takes up to 20% of the time of `levelsof`.\n\n\n### fsort\n\nAt this stage, you would need a very large dataset (50 million+ obs.) for `fsort` to be faster than `sort`.\n\n| Method          | Avg. 1 | Avg. 2 |\n|-----------------|--------|--------|\n| sort id         | 62.52  | 71.15  |\n| sort id, stable | 63.74  | 65.72  |\n| fsort id        | 55.4   | 67.62  |\n\nThe table above shows the benchmark\non a 50 million obs. dataset.\nThe unstable sorting is slightly slower (col. 1) or slightly faster (col. 2)\nthan the `fsort` approach. 
On the other hand, a stable sort is clearly\nslower than `fsort` (which always produces a stable sort).\n\n## Installation\n\n### Stable Version\n\nWithin Stata, type:\n\n```\ncap ado uninstall ftools\nssc install ftools\n```\n\n### Dev Version\n\nWith Stata 13+, type:\n\n```\ncap ado uninstall ftools\nnet install ftools, from(https://github.com/sergiocorreia/ftools/raw/master/src/)\n```\n\nFor older versions, first download and extract the [zip file](https://github.com/sergiocorreia/ftools/archive/master.zip), and then run\n\n```\ncap ado uninstall ftools\nnet install ftools, from(SOME_FOLDER)\n```\n\nWhere *SOME_FOLDER* is the folder that contains the *stata.toc* and related files.\n\n### Compiling the mata library\n\nIn case of a Mata error, try typing `ftools` to create the Mata library (lftools.mlib).\n\n\n### Installing local versions\n\nTo install from a git fork, type something like:\n\n```\ncap ado uninstall ftools\nnet install ftools, from(\"C:/git/ftools/src\")\nftools, compile\n```\n\n(Changing \"C:/git/\" to your own folder)\n\n### Dependencies\n\nThe `fcollapse` function requires the [`moremata`](https://ideas.repec.org/c/boc/bocode/s455001.html) package for some of the median and percentile stats:\n\n```\nssc install moremata\n```\n\nUsers of Stata 11 and 12 need to install the [`boottest`](https://ideas.repec.org/c/boc/bocode/s458121.html) package:\n\n```\nssc install boottest\n```\n\n## FAQ:\n\n### \"What features is this missing?\"\n\n- You can create levels based on one or more variables, and on numeric or string variables, but *not* on combinations of both. Thus, you can't do something like `fcollapse price, by(make foreign)` because make is string and foreign is numeric. This is due to a limitation in Mata and is probably a hard restriction. 
As a workaround, just run something like `fegen id = group(make)` to create a numeric ID.\n- Support for weights is incomplete (datasets that use weights are often relatively small, so this feature has less priority)\n- Some commands could also gain large speedups (merge, reshape, etc.)\n- Since Mata is ~4 times slower than C, rewriting this in a C plugin should lead to a large speedup.\n\n### \"How can this be faster than existing commands?\"\n\nExisting commands (e.g. sort) are often compiled and don't have to move data\nfrom Stata to Mata and vice versa.\nHowever, they use inefficient algorithms, so for sufficiently large datasets they are slower.\nIn particular, creating identifiers can be an ~O(N) operation if we use hashes instead of sorting the data (see the help file).\nSimilarly, once the identifiers are created, sorting other variables by these identifiers can be done as an O(N) operation instead of O(N log N).\n\n### \"But I already tried to use Mata's `asarray` and it was much slower\"\n\nMata's `asarray()` has a key problem: it is very slow with hash collisions (which you see a lot in this use case). Thus, I avoid using `asarray()` and instead use `hash1()` to create a hash table with open addressing (see a comparison between both approaches [here](http://www.algolist.net/Data_structures/Hash_table/Open_addressing#open_addressing_vs_chaining)).\n\n\n\n## Updates\n\n- `2.49.0 06may2022`: fixed a bug in `fcollapse` with quantiles (p**, median, and iqr stats). `ftools` computes these statistics using `moremata` and had failed to update its function arguments as required by recent changes in moremata.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsergiocorreia%2Fftools","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsergiocorreia%2Fftools","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsergiocorreia%2Fftools/lists"}