Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sethfalco/nodejs-microbenchmarks
Just some benchmarks I've written during development, both professionally and in open-source contributions.
- Host: GitHub
- URL: https://github.com/sethfalco/nodejs-microbenchmarks
- Owner: SethFalco
- License: apache-2.0
- Created: 2024-07-01T16:34:23.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-09-04T16:21:34.000Z (4 months ago)
- Last Synced: 2024-10-13T02:16:29.527Z (2 months ago)
- Language: JavaScript
- Size: 15.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Node.js Microbenchmarks
Just some benchmarks I've written during development, both professionally and in open-source contributions. The repository lets me scaffold benchmarks quickly and refer back to results later.
While the repository is public, the purpose is to share why I made certain decisions. This is not a collaborative effort to publish and maintain benchmarks together. If you're able to pitch a better solution to a problem covered in the repository, feel free to share it! However, pull requests adding benchmarks for new problems won't be accepted.
## Running Benchmarks
Install npm dependencies with:
```sh
npm i
```
Then run the relevant benchmark with Node.js:
```sh
BENCHMARK=is-string-whitespace
node src/benchmarks/$BENCHMARK.js
```
## Methodology
All test cases are constructed the same way and use the same options.
For input, instead of testing a single set of arguments, we test an array of arguments. This way we measure what is generally most performant, rather than what is fastest for one specific scenario.
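To illustrate the idea, here is a minimal sketch of that structure. It is not the repository's actual harness; the candidate functions, the `inputs` array, and the iteration count are invented for the example, and it relies only on the `performance` global available in Node.js 16+.

```js
// Hypothetical sketch of benchmarking over an array of inputs, not the
// actual harness used in this repository. Each candidate implementation is
// timed over the same set of inputs, so the comparison reflects general
// performance rather than a single scenario.

// Example problem: is a string made up entirely of whitespace?
const candidates = {
  regex: (s) => /^\s*$/.test(s),
  trim: (s) => s.trim().length === 0,
};

// A mix of inputs rather than a single argument.
const inputs = ['', '   ', '\t\n', 'hello', '  hello  ', ' '.repeat(1000)];

const ITERATIONS = 100_000;

for (const [name, fn] of Object.entries(candidates)) {
  // performance is a Node.js global (v16+), so no imports are needed here.
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    for (const input of inputs) {
      fn(input);
    }
  }
  console.log(`${name}: ${(performance.now() - start).toFixed(2)} ms`);
}
```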
The data array may need to be tweaked depending on the data you expect to encounter in the real world. For example, two solutions could both be correct yet perform differently based on the input received. Sometimes the solution that's generally slower is the better choice, because the input it handles faster is what you'll encounter 99% of the time in production.
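As a hypothetical illustration (the traffic split and helper arrays here are invented, not taken from this repository), the input array could be weighted to mirror the distribution you actually expect:

```js
// Hypothetical example of weighting the input array toward the inputs you
// expect in production, so the benchmark favours the solution that is
// fastest for the common case.
const commonCase = ['', ' ', '\n'];                  // ~99% of real traffic
const rareCase = [' '.repeat(10_000), 'some text'];  // ~1% of real traffic

// Repeat the common case so the mix roughly matches production frequencies.
const inputs = [
  ...Array(99).fill(commonCase).flat(),
  ...rareCase,
];
```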
For most benchmarks, we enforce that every function produces identical output for the same input. However, there are a few exceptions due to quirks like floating-point precision. Cases like these have warnings documented at the top of the file, and you'll need to strike a balance between performance and precision on a case-by-case basis.
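As an example of what such a check might look like (the `verifyCandidates` helper and the epsilon suggestion are hypothetical, not lifted from this repository), each candidate can be compared against a reference implementation before any timing happens:

```js
// Hypothetical pre-check, not necessarily how this repository enforces it:
// before timing anything, verify that every candidate returns the same
// result as a reference implementation for every input.
function verifyCandidates(candidates, inputs) {
  const [reference, ...others] = Object.keys(candidates);
  for (const input of inputs) {
    const expected = candidates[reference](input);
    for (const name of others) {
      const actual = candidates[name](input);
      // Strict equality works for most cases; for floating-point results you
      // might instead allow a tolerance, e.g. Math.abs(actual - expected) < 1e-9.
      if (actual !== expected) {
        throw new Error(
          `${name} disagrees with ${reference} for input ${JSON.stringify(input)}`
        );
      }
    }
  }
}
```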