Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/xpl/what-code-is-faster
A browser-based tool for speedy and correct JS performance comparisons!
- Host: GitHub
- URL: https://github.com/xpl/what-code-is-faster
- Owner: xpl
- Created: 2020-12-07T05:37:32.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2024-09-13T02:14:19.000Z (5 months ago)
- Last Synced: 2024-09-13T14:56:37.638Z (5 months ago)
- Topics: benchmark, benchmark-scripts, benchmarking, comparison-benchmarks, comparison-tool, javascript, performance, performance-benchmarking, tool
- Language: TypeScript
- Homepage: https://xpl.github.io/what-code-is-faster/
- Size: 2.39 MB
- Stars: 25
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
## README
**A browser-based tool for speedy and correct JS performance comparisons!**
- Minimalistic UI
- Code editor with IntelliSense
- All state is saved to the URL - copy it and share it in no time!
- Automatically determines the number of iterations needed for a proper measurement — no hard-coding!
- Prevents dead code elimination and compile-time evaluation optimizations from ruining your test!
- Verifies correctness (functions must compute the same value, be deterministic, depend on their inputs)
- Warms up functions before measuring (to give the JIT time to compile & optimize them)

## How Does It Work?
Benchmarked functions are written as _reducers_: each takes the previous value and returns a new one. The runtime executes your functions in a tight loop against a random initial value, saving the final value to a global variable (thus producing a _side effect_), so that no clever compiler can optimize the computation away!
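The tight-loop-with-side-effect idea can be sketched as follows. This is a simplified illustration, not the tool's actual runtime; the names `measure` and `__sink` are made up for this sketch:

```javascript
// A minimal sketch of the reducer-in-a-tight-loop idea.
globalThis.__sink = undefined // global side effect defeats dead-code elimination

function measure(reducer, initialValue, iterations) {
  let value = initialValue
  const start = performance.now()
  for (let i = 0; i < iterations; i++) {
    value = reducer(value) // each call consumes the previous result
  }
  const elapsed = performance.now() - start
  globalThis.__sink = value // publish the final value so the loop can't be optimized out
  return elapsed
}

// Example reducer: deterministic, depends only on its input.
const increment = (prev) => prev + 1
const ms = measure(increment, (Math.random() * 1000) | 0, 1_000_000)
```

Because the final value is written to a global after the loop, the compiler cannot prove the computation is unused and must actually execute every iteration.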
So you must also provide a random initial value (not [something like that](https://xkcd.com/221/)) and ensure that your reducers follow some simple rules. **Those rules are programmatically enforced** — so you won't shoot yourself in the foot. Check the examples to get a sense of how to write a benchmark.
The rules:
1. The result of a function must depend on its input, and only on its input. You cannot return the same value again and again, or return random values; there must be some genuine, non-throwing computation on the passed input.
2. Given the same input, all functions must produce the same output. The comparison should be fair: we want to compare different implementations of exactly the same thing!
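For illustration, here are some hypothetical reducers that do and don't satisfy these rules (these are not taken from the tool itself):

```javascript
// OK: both are deterministic, depend only on their input, and compute the
// same value, so they can be compared fairly.
const viaHypot = (prev) => Math.hypot(prev, 3)
const viaSqrt = (prev) => Math.sqrt(prev * prev + 9)

// Violates rule 1: ignores its input, so a compiler may hoist it out of the loop.
const constant = (_prev) => 42

// Violates rule 2: nondeterministic, so outputs can't be verified as equal.
const noisy = (prev) => prev + Math.random()
```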
## Examples
- Array push vs. assign to last index
- BigInt (64-bit) vs. number (increment)
- Math.hypot or Math.sqrt?
- Do local function declarations affect performance?
- Do closures affect performance (vs. free functions)?
- For..of loop over Set vs. Array
- For..of loop over Object.values vs. Map.forEach (large integer keys)
- Map vs. Object (lookup)
- Null or undefined? (equality check)
- Arguments passing: spread vs. call() vs. apply()
- JSON.parse() vs. eval()
- Array vs TypedArray Dot Product
- instanceof or constructor check?
- Add your own? Pull Requests are welcome!
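As a sketch, the first example above ("Array push vs. assign to last index") might be written as a pair of reducers like this (hypothetical code, not copied from the tool):

```javascript
// Two implementations of "append to an array", written as reducers.
// Both take the previous length and return the new one, so their outputs
// stay identical across implementations and pass the equality check.
const a = []
const b = []

function pushImpl(prev) {
  a.push(prev)
  return a.length
}

function assignLastImpl(prev) {
  b[b.length] = prev
  return b.length
}
```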
## Extended Configuration
If your test functions operate on differently typed inputs, you may need to provide distinct initial values and a custom comparison function; otherwise the benchmark won't pass the soundness check. Here is an example:
```js
benchmark('bigint vs number (addition)', {
    initialValues() {
        const seed = 1000000 + (Math.random() * 1000) | 0
        return {
            bigint: BigInt(seed),
            number: seed,
        }
    },
    equal(a, b) {
        return BigInt(a) === BigInt(b)
    }
}, {
    bigint(prev) {
        return prev + 1n
    },
    number(prev) {
        return prev + 1
    }
})
```
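The custom `equal` is needed because strict equality between a BigInt and a number is always `false` in JavaScript; coercing both sides to BigInt is what lets the soundness check pass. A quick standalone illustration (variable names are ours, not the tool's):

```javascript
const seed = 1000
const bigintResult = BigInt(seed) + 1n // what the bigint reducer returns
const numberResult = seed + 1          // what the number reducer returns

// Strict equality across BigInt and number never holds...
const naive = (bigintResult === numberResult) // false

// ...but coercing both sides to BigInt makes the comparison meaningful.
const equal = (a, b) => BigInt(a) === BigInt(b)
```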