{"id":13459046,"url":"https://github.com/evanwashere/mitata","last_synced_at":"2025-05-13T18:09:37.379Z","repository":{"id":38302851,"uuid":"482577442","full_name":"evanwashere/mitata","owner":"evanwashere","description":"benchmark tooling that loves you ❤️","archived":false,"fork":false,"pushed_at":"2025-02-17T04:26:03.000Z","size":1571,"stargazers_count":1904,"open_issues_count":5,"forks_count":26,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-04-25T17:49:57.913Z","etag":null,"topics":["benchmark","bun","cpp","deno","graaljs","javascript","jsc","library","microbenchmark","node","nodejs","performance","single-header","spidermonkey","v8"],"latest_commit_sha":null,"homepage":"","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/evanwashere.png","metadata":{"files":{"readme":".github/readme.gif","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-04-17T16:36:54.000Z","updated_at":"2025-04-25T08:48:32.000Z","dependencies_parsed_at":"2024-01-13T17:57:48.080Z","dependency_job_id":"ae32601d-1505-48d5-87da-033da8bb65e3","html_url":"https://github.com/evanwashere/mitata","commit_stats":{"total_commits":21,"total_committers":3,"mean_commits":7.0,"dds":0.09523809523809523,"last_synced_commit":"3730a784c9d83289b5627ddd961e3248088612aa"},"previous_names":[],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/evanwashere%2Fmitata","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/evanwashere%2Fmitata/tags","releases_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/repositories/evanwashere%2Fmitata/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/evanwashere%2Fmitata/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/evanwashere","download_url":"https://codeload.github.com/evanwashere/mitata/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254000851,"owners_count":21997441,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark","bun","cpp","deno","graaljs","javascript","jsc","library","microbenchmark","node","nodejs","performance","single-header","spidermonkey","v8"],"created_at":"2024-07-31T09:01:01.579Z","updated_at":"2025-05-13T18:09:36.328Z","avatar_url":"https://github.com/evanwashere.png","language":"JavaScript","readme":"\u003ch1 align=center\u003emitata\u003c/h1\u003e\n\u003cdiv align=center\u003ebenchmark tooling that loves you ❤️\u003c/div\u003e\n\u003cbr /\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg width=68% src=\"https://raw.githubusercontent.com/evanwashere/mitata/master/.github/readme.gif\"\u003e\u003c/img\u003e\n\u003c/div\u003e\n\n\u003cbr /\u003e\n\n### Install\n\n`bun add mitata`\n\n`npm install mitata`\n\ntry mitata in browser with ai assistant at [https://bolt.new/~/mitata](https://bolt.new/~/mitata)\n\n## Recommendations\n\n- use dedicated hardware for running benchmarks\n- read [writing good benchmarks](#writing-good-benchmarks) \u0026 [LLVM benchmarking tips](https://llvm.org/docs/Benchmarking.html)\n- run with manual garbage collection enabled 
(e.g. `node --expose-gc ...`)\n- install optional [hardware counters](#hardware-counters) extension to see cpu stats like IPC (instructions per cycle)\n- make sure your runtime has high-resolution timers and other relevant options/permissions enabled\n\n## Quick Start\n\n\u003ctable\u003e\n\u003ctr\u003e\n\u003cth\u003ejavascript\u003c/th\u003e\n\u003cth\u003ec++ single header\u003c/th\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\n\n```js\nimport { run, bench, boxplot, summary } from 'mitata';\n\nfunction fibonacci(n) {\n  if (n \u003c= 1) return n;\n  return fibonacci(n - 1) + fibonacci(n - 2);\n}\n\nbench('fibonacci(40)', () =\u003e fibonacci(40));\n\nboxplot(() =\u003e {\n  summary(() =\u003e {\n    bench('Array.from($size)', function* (state) {\n      const size = state.get('size');\n      yield () =\u003e Array.from({ length: size });\n    }).range('size', 1, 1024);\n  });\n});\n\nawait run();\n```\n  \n\u003c/td\u003e\n\u003ctd\u003e\n\n```cpp\n#include \"src/mitata.hpp\"\n\nint fibonacci(int n) {\n  if (n \u003c= 1) return n;\n  return fibonacci(n - 1) + fibonacci(n - 2);\n}\n\nint main() {\n  mitata::runner runner;\n  runner.bench(\"noop\", []() { });\n\n  runner.summary([\u0026]() {\n    runner.bench(\"empty fn\", []() { });\n    runner.bench(\"fibonacci\", []() { fibonacci(20); });\n  });\n\n  auto stats = runner.run();\n}\n```\n\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\n\n\n\n## configure your experience\n\n```js\nimport { run } from 'mitata';\n\nawait run({ format: 'json' }) // output json\nawait run({ filter: /new Array.*/ }) // only run benchmarks that match regex filter\nawait run({ throw: true }); // will immediately throw instead of handling error quietly\nawait run({ format: { mitata: { name: 'fixed' } } }); // benchmarks name column is fixed length\n\n// c++\nauto stats = runner.run({ .colors = true, .format = \"json\", .filter = std::regex(\".*\") });\n```\n\n## automatic garbage collection\n\nOn runtimes that expose gc 
(e.g. bun, `node --expose-gc ...`), mitata will automatically run garbage collection before each benchmark.\n\nThis behavior can be further customized via the `gc` function on each benchmark:\n\n```js\nbench('lots of allocations', () =\u003e {\n  Array.from({ length: 1024 }, () =\u003e Array.from({ length: 1024 }, () =\u003e new Array(1024)));\n})\n// false | 'once' (default) | 'inner'\n// once runs gc after warmup\n// inner runs gc after warmup and before each (batch-)iteration\n.gc('inner');\n```\n\n## universal compatibility\n\nOut of the box, mitata can detect the engine/runtime it's running on and fall back to using [alternative](https://github.com/evanwashere/mitata/blob/master/src/lib.mjs#L45) non-standard I/O functions. If your engine or runtime is missing support, open an issue or PR requesting support.\n\n### how to use mitata with engine CLIs like d8, jsc, graaljs, spidermonkey\n\n```bash\n$ xs bench.mjs\n$ quickjs bench.mjs\n$ d8 --expose-gc bench.mjs\n$ spidermonkey -m bench.mjs\n$ graaljs --js.timer-resolution=1 bench.mjs\n$ /System/Library/Frameworks/JavaScriptCore.framework/Versions/Current/Helpers/jsc bench.mjs\n```\n\n```js\n// bench.mjs\n\nimport { print } from './src/lib.mjs';\nimport { run, bench } from './src/main.mjs'; // git clone\n// import { run, bench } from './node_modules/mitata/src/main.mjs'; // npm install\n\nprint('hello world'); // works on every engine\n```\n\n## adding arguments and parameters to your benchmarks has never been so easy\n\nWith other benchmarking libraries, it is often hard to write benchmarks that sweep over a range or run the same function with different arguments without producing spaghetti code. With mitata, converting your benchmark to use arguments is just a function call away.\n\n```js\nimport { bench } from 'mitata';\n\nbench(function* look_mom_no_spaghetti(state) {\n  const len = state.get('len');\n  const len2 = state.get('len2');\n  yield () =\u003e new Array(len * len2);\n})\n\n.args('len', [1, 2, 
3])\n.range('len', 1, 1024) // 1, 8, 64, 512...\n.dense_range('len', 1, 100) // 1, 2, 3 ... 99, 100\n.args({ len: [1, 2, 3], len2: ['4', '5', '6'] }) // every possible combination\n```\n\n### computed parameters\n\nFor cases where you need a unique copy of a value for each iteration, mitata supports creating computed parameters that do not count towards benchmark results *(note: there is no guarantee of recompute time, order, or call count)*:\n\n```js\nbench('deleting $keys from object', function* (state) {\n  const keys = state.get('keys');\n\n  const obj = {};\n  for (let i = 0; i \u003c keys; i++) obj[i] = i;\n\n  yield {\n    [0]() {\n      return { ...obj };\n    },\n\n    bench(p0) {\n      for (let i = 0; i \u003c keys; i++) delete p0[i];\n    },\n  };\n}).args('keys', [1, 10, 100]);\n```\n\n### concurrency\n\nThe `concurrency` option enables transparent concurrent execution of asynchronous benchmarks, providing insights into:\n- scalability of async functions\n- potential bottlenecks in parallel code\n- performance under different levels of concurrency\n\n*(note: concurrent benchmarks may have higher variance due to scheduling, contention, event loop and async overhead)*\n\n```js\nbench('sleepAsync(1000) x $concurrency', function* () {\n  // concurrency inherited from arguments\n  yield async () =\u003e await sleepAsync(1000);\n}).args('concurrency', [1, 5, 10]);\n\nbench('sleepAsync(1000) x 5', function* () {\n  yield {\n    // concurrency is set manually\n    concurrency: 5,\n\n    async bench() {\n      await sleepAsync(1000);\n    },\n  };\n});\n```\n\n## hardware counters\n\n`bun add @mitata/counters`\n\n`npm install @mitata/counters`\n\nsupported on: `macos (apple silicon) | linux (amd64, aarch64)`\n\nmacos:\n- [Apple Silicon CPU optimization guide/handbook](https://developer.apple.com/documentation/apple-silicon/cpu-optimization-guide)\n- Xcode must be installed for complete cpu counters support\n- Instruments.app (CPU Counters) has to be closed during benchmarking\n- Corrupted install of Xcode/Command Line Tools can result in kernel panic (requires Xcode/Command Line Tools reinstall)\n\nBy installing the `@mitata/counters` package, you can enable the collection and display of hardware counters for benchmarks.\n\n```rust\n------------------------------------------- -------------------------------\nnew Array(1024)              332.67 ns/iter 337.90 ns   █                  \n                    (295.63 ns … 507.93 ns) 455.66 ns ▂██▇▄▂▂▂▁▂▁▃▃▃▂▂▁▁▁▁▁\n                  2.41 ipc ( 48.66% stalls)  37.89% L1 data cache\n          1.11k cycles   2.69k instructions  33.09% retired LD/ST ( 888.96)\n\nnew URL(google.com)          246.40 ns/iter 245.10 ns       █▃             \n                    (206.01 ns … 841.23 ns) 302.39 ns ▁▁▁▁▂███▇▃▂▂▂▂▂▂▂▁▁▁▁\n                  4.12 ipc (  1.05% stalls)  98.88% L1 data cache\n         856.49 cycles   3.53k instructions  28.65% retired LD/ST (  1.01k)\n```\n\n\n## helpful warnings\n\nFor those who love doing micro-benchmarks, mitata can automatically detect and inform you about optimization passes like dead code elimination without requiring any special engine flags.\n\n```rust\n-------------------------------------- -------------------------------\n1 + 1                   318.63 ps/iter 325.37 ps        ▇  █           !\n                (267.92 ps … 14.28 ns) 382.81 ps ▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁\nempty function          319.36 ps/iter 325.37 ps          █ ▅          !\n                (248.62 ps … 46.61 ns) 382.81 ps ▁▁▁▁▁▁▃▁▁█▁█▇▁▁▁▁▁▁▁▁\n\n! 
= benchmark was likely optimized out (dead code elimination)\n```\n\n## powerful visualizations right in your terminal\n\nWith mitata’s ascii rendering capabilities, now you can easily visualize samples in barplots, boxplots, lineplots, histograms, and get clear summaries without any additional tools or dependencies.\n\n```js\nimport { summary, barplot, boxplot, lineplot } from 'mitata';\n\n// wrap bench() calls in visualization scope\nbarplot(() =\u003e {\n  bench(...)\n});\n\n                        ┌                                            ┐\n                  1 + 1 ┤■ 318.11 ps \n             Date.now() ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 27.69 ns \n                        └                                            ┘\n\n// scopes can be async\nawait boxplot(async () =\u003e {\n  // ...\n});\n\n                        ┌                                            ┐\n                                        ╷┌─┬─┐                       ╷\n            Bubble Sort                 ├┤ │ ├───────────────────────┤\n                                        ╵└─┴─┘                       ╵\n                        ┬   ╷\n             Quick Sort │───┤\n                        ┴   ╵\n                        ┬\n            Native Sort │\n                        ┴\n                        └                                            ┘\n                        90.88 µs            2.43 ms            4.77 ms\n\n// can combine multiple visualizations\nlineplot(() =\u003e {\n  summary(() =\u003e {\n    // ...\n  });\n\n  // bench() calls here wont be part of summary\n});\n\nsummary\n  new Array($len)\n   5.42…8.33x faster than Array.from($len)\n\n                        ┌                                            ┐\n      Array.from($size)                                            ⢠⠊\n       new Array($size)                                          ⢀⠔⠁ \n                                                                ⡠⠃   \n                                                       
       ⢀⠎     \n                                                             ⡔⠁      \n                                                           ⡠⠊        \n                                                         ⢀⠜          \n                                                        ⡠⠃           \n                                                       ⡔⠁            \n                                                     ⢀⠎              \n                                                    ⡠⠃               \n                                                  ⢀⠜                 \n                                                 ⢠⠊             ⣀⣀⠤⠤⠒\n                                                ⡰⠁       ⣀⡠⠤⠔⠒⠊⠉     \n                                           ⣀⣀⣀⠤⠜   ⣀⡠⠤⠒⠊⠉            \n                         ⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣤⣔⣒⣒⣊⣉⠭⠤⠤⠤⠤⠤⠒⠊⠉               \n                        └                                            ┘\n```\n\n## give your own code power of mitata\n\nIn case you don’t need all the fluff that comes with mitata or just need raw results, mitata exports its fundamental building blocks to allow you to easily build your own tooling and wrappers without losing any core benefits of using mitata.\n\n```cpp\n#include \"src/mitata.hpp\"\n\nint main() {\n  auto stats = mitata::lib::fn([]() { /***/ })\n}\n```\n\n```js\nimport { B, measure } from 'mitata';\n\n// lowest level for power users\nconst stats = await measure(function* (state) {\n  const size = state.get('x');\n  yield () =\u003e new Array(size);\n}, {\n  args: { x: 1 },\n  batch_samples: 5 * 1024,\n  min_cpu_time: 1000 * 1e6,\n});\n\n// explore how magic happens\nconsole.log(stats.debug) // -\u003e jit optimized source code of benchmark\n\n// higher level api that includes mitata's argument and range features\nconst b = new B('new Array($x)', state =\u003e {\n  const size = state.get('x');\n  for (const _ of state) new Array(size);\n}).args('x', [1, 5, 10]);\n\nconst trial = await 
b.run();\n```\n\n## accuracy down to picoseconds\n\nBy leveraging the power of javascript JIT compilation, mitata is able to generate zero-overhead measurement loops that provide picosecond precision in timing measurements. These loops are so precise that they can even be reused to provide additional features like CPU clock frequency estimation and dead code elimination detection, all while staying inside the javascript vm sandbox.\n\nWith [computed parameters](#computed-parameters) and [garbage collection tuning](#automatic-garbage-collection), you can tap into mitata's code generation capabilities to further refine the accuracy of your benchmarks. Using computed parameters ensures that parameter computation is moved outside the benchmark, thereby preventing the javascript JIT from performing loop invariant code motion optimization.\n\n```rust\n// node --expose-gc --allow-natives-syntax tools/compare.mjs\nclk: ~2.71 GHz\ncpu: Apple M2 Pro\nruntime: node 23.3.0 (arm64-darwin)\n\nbenchmark                   avg (min … max) p75   p99    (min … top 1%)\n------------------------------------------- -------------------------------\na / b                          4.59 ns/iter   4.44 ns █                    \n                       (4.33 ns … 25.86 ns)   6.91 ns ██▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁\n                  6.70 ipc (  2.17% stalls)    NaN% L1 data cache\n          16.80 cycles  112.52 instructions   0.00% retired LD/ST (   0.00)\n\na / b (computed)               4.23 ns/iter   4.10 ns ▇█                   \n                       (3.88 ns … 30.03 ns)   7.26 ns ██▅▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁\n                  6.40 ipc (  2.10% stalls)    NaN% L1 data cache\n          15.70 cycles  100.53 instructions   0.00% retired LD/ST (   0.00)\n4.59 ns/iter - https://npmjs.com/mitata\n\n// vs other libraries\n\na / b x 90,954,882 ops/sec ±2.13% (92 runs sampled)\n10.99 ns/iter - 
https://npmjs.com/benchmark\n\n┌─────────┬───────────┬──────────────────────┬─────────────────────┬────────────────────────────┬───────────────────────────┬──────────┐\n│ (index) │ Task name │ Latency average (ns) │ Latency median (ns) │ Throughput average (ops/s) │ Throughput median (ops/s) │ Samples  │\n├─────────┼───────────┼──────────────────────┼─────────────────────┼────────────────────────────┼───────────────────────────┼──────────┤\n│ 0       │ 'a / b'   │ '27.71 ± 0.09%'      │ '41.00'             │ '28239766 ± 0.01%'         │ '24390243'                │ 36092096 │\n└─────────┴───────────┴──────────────────────┴─────────────────────┴────────────────────────────┴───────────────────────────┴──────────┘\n27.71 ns/iter - vitest bench / https://npmjs.com/tinybench\n\na / b x 86,937,932 ops/sec (11 runs sampled) v8-never-optimize=true min..max=(11.32ns...11.62ns)\n11.51 ns/iter - https://npmjs.com/bench-node\n\n╔══════════════╤═════════╤════════════════════╤═══════════╗\n║ Slower tests │ Samples │             Result │ Tolerance ║\n╟──────────────┼─────────┼────────────────────┼───────────╢\n║ Fastest test │ Samples │             Result │ Tolerance ║\n╟──────────────┼─────────┼────────────────────┼───────────╢\n║ a / b        │   10000 │ 14449822.99 op/sec │  ± 4.04 % ║\n╚══════════════╧═════════╧════════════════════╧═══════════╝\n69.20 ns/iter - https://npmjs.com/cronometro\n```\n\n\u003cdetails\u003e\n\u003csummary\u003esame test with v8 jit compiler disabled:\u003c/summary\u003e\n\n```rust\n// node --expose-gc --allow-natives-syntax --jitless tools/compare.mjs\nclk: ~0.06 GHz\ncpu: Apple M2 Pro\nruntime: node 23.3.0 (arm64-darwin)\n\nbenchmark                   avg (min … max) p75   p99    (min … top 1%)\n------------------------------------------- -------------------------------\na / b                         74.52 ns/iter  75.53 ns █                    \n                     (71.96 ns … 104.94 ns)  92.01 ns █▅▇▅▅▃▃▂▁▁▁▁▁▁▁▁▁▁▁▁▁\n                  5.78 ipc 
(  0.51% stalls)    NaN% L1 data cache\n         261.51 cycles   1.51k instructions   0.00% retired LD/ST (   0.00)\n\na / b (computed)              56.05 ns/iter  57.20 ns █                    \n                      (53.62 ns … 84.69 ns)  73.21 ns █▅▆▅▅▃▃▂▂▁▁▁▁▁▁▁▁▁▁▁▁\n                  5.65 ipc (  0.59% stalls)    NaN% L1 data cache\n         197.74 cycles   1.12k instructions   0.00% retired LD/ST (   0.00)\n74.52 ns/iter - https://npmjs.com/mitata\n\n// vs other libraries\n\na / b x 11,232,032 ops/sec ±0.50% (99 runs sampled)\n89.03 ns/iter - https://npmjs.com/benchmark\n\n┌─────────┬───────────┬──────────────────────┬─────────────────────┬────────────────────────────┬───────────────────────────┬─────────┐\n│ (index) │ Task name │ Latency average (ns) │ Latency median (ns) │ Throughput average (ops/s) │ Throughput median (ops/s) │ Samples │\n├─────────┼───────────┼──────────────────────┼─────────────────────┼────────────────────────────┼───────────────────────────┼─────────┤\n│ 0       │ 'a / b'   │ '215.53 ± 0.08%'     │ '208.00'            │ '4786095 ± 0.01%'          │ '4807692'                 │ 4639738 │\n└─────────┴───────────┴──────────────────────┴─────────────────────┴────────────────────────────┴───────────────────────────┴─────────┘\n215.53 ns/iter - vitest bench / https://npmjs.com/tinybench\n\na / b x 10,311,999 ops/sec (11 runs sampled) v8-never-optimize=true min..max=(95.66ns...97.51ns)\n96.86 ns/iter - https://npmjs.com/bench-node\n\n╔══════════════╤═════════╤═══════════════════╤═══════════╗\n║ Slower tests │ Samples │            Result │ Tolerance ║\n╟──────────────┼─────────┼───────────────────┼───────────╢\n║ Fastest test │ Samples │            Result │ Tolerance ║\n╟──────────────┼─────────┼───────────────────┼───────────╢\n║ a / b        │    2000 │ 4664908.00 op/sec │  ± 0.94 % ║\n╚══════════════╧═════════╧═══════════════════╧═══════════╝\n214.37 ns/iter - https://npmjs.com/cronometro\n```\n\u003c/details\u003e\n\n## writing good 
benchmarks\n\nCreating accurate and meaningful benchmarks requires careful attention to how modern JavaScript engines optimize code. This section covers essential concepts and best practices to ensure your benchmarks measure actual performance characteristics rather than optimization artifacts.\n\n### examples\n- [readme gif](/examples/gif.js)\n- [cpu cache line size](/examples/cacheline.js)\n- [holey vs packed arrays](/examples/holey_array.js)\n\n### dead code elimination\n\nThe JIT can detect and eliminate code that has no observable effects. To ensure your benchmark code executes as intended, you must create observable side effects.\n\n```js\nimport { do_not_optimize } from 'mitata';\n\nbench(function* () {\n  // ❌ Bad: jit can see that function has zero side-effects\n  yield () =\u003e new Array(0);\n  // will get optimized to:\n  /*\n    yield () =\u003e {};\n  */\n\n  // ✅ Good: do_not_optimize(value) emits code that causes side-effects\n  yield () =\u003e do_not_optimize(new Array(0));\n});\n```\n\n### garbage collection pressure\n\nFor benchmarks involving significant memory allocations, controlling garbage collection frequency can improve the consistency of results.\n\n```js\n// ❌ Bad: unpredictable gc pauses\nbench(() =\u003e {\n  const bigArray = new Array(1000000);\n});\n\n// ✅ Good: gc before each (batch-)iteration\nbench(() =\u003e {\n  const bigArray = new Array(1000000);\n}).gc('inner'); // run gc before each iteration\n```\n\n### loop invariant code motion optimization\n\nJavaScript engines can optimize away repeated computations by hoisting them out of loops or caching results. Use computed parameters to prevent loop invariant code motion optimization.\n\n```js\nbench(function* (ctx) {\n  const str = 'abc';\n\n  // ❌ Bad: JIT sees that both str and 'c' search value are constants/comptime-known\n  yield () =\u003e str.includes('c');\n  // will get optimized to:\n  /*\n    yield () =\u003e true;\n  */\n\n  // ❌ Bad: JIT sees that computation doesn't depend on anything inside loop\n  const substr = ctx.get('substr');\n  yield () =\u003e str.includes(substr);\n  // will get optimized to:\n  /*\n    const $0 = str.includes(substr);\n    yield () =\u003e $0;\n  */\n\n  // ✅ Good: using computed parameters prevents jit from performing any loop optimizations\n  yield {\n    [0]() {\n      return str;\n    },\n\n    [1]() {\n      return substr;\n    },\n\n    bench(str, substr) {\n      return do_not_optimize(str.includes(substr));\n    },\n  };\n}).args('substr', ['c']);\n```\n\n## License\n\nMIT © [evanwashere](https://github.com/evanwashere)","funding_links":[],"categories":["JavaScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fevanwashere%2Fmitata","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fevanwashere%2Fmitata","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fevanwashere%2Fmitata/lists"}