{"id":13733903,"url":"https://github.com/p-ranav/criterion","last_synced_at":"2025-08-21T13:32:49.588Z","repository":{"id":43283541,"uuid":"307485081","full_name":"p-ranav/criterion","owner":"p-ranav","description":"Microbenchmarking for Modern C++","archived":false,"fork":false,"pushed_at":"2020-11-03T01:59:44.000Z","size":74389,"stargazers_count":211,"open_issues_count":1,"forks_count":11,"subscribers_count":11,"default_branch":"master","last_synced_at":"2024-11-23T08:22:22.365Z","etag":null,"topics":["benchmarking","console","console-application","cpp17","cpp17-library","criterion","csv","export","header-only","json","library","measurements","microbenchmark","microbenchmarks","mit","modern-cpp","single-header","single-header-lib","single-header-library","table"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/p-ranav.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-10-26T19:38:10.000Z","updated_at":"2024-07-21T20:05:14.000Z","dependencies_parsed_at":"2022-09-03T11:20:13.418Z","dependency_job_id":null,"html_url":"https://github.com/p-ranav/criterion","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/p-ranav%2Fcriterion","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/p-ranav%2Fcriterion/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/p-ranav%2Fcriterion/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/p-ranav%2Fcriterion/manifests","ow
ner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/p-ranav","download_url":"https://codeload.github.com/p-ranav/criterion/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230516183,"owners_count":18238352,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmarking","console","console-application","cpp17","cpp17-library","criterion","csv","export","header-only","json","library","measurements","microbenchmark","microbenchmarks","mit","modern-cpp","single-header","single-header-lib","single-header-library","table"],"created_at":"2024-08-03T03:00:50.764Z","updated_at":"2024-12-20T00:08:36.910Z","avatar_url":"https://github.com/p-ranav.png","language":"C++","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg height=\"90\" src=\"img/logo.png\"/\u003e  \n\u003c/p\u003e \n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"img/demo.gif\"/\u003e  \n\u003c/p\u003e \n\n## Highlights\n\n`Criterion` is a micro-benchmarking library for modern C++.\n\n* Convenient static registration macros for setting up benchmarks\n* Parameterized benchmarks (e.g., vary input size)\n* Statistical analysis across multiple runs\n* Requires compiler support for `C++17` or newer standard\n* Header-only library - single header file version available at `single_include/`\n* MIT License\n\n## Table of Contents\n\n*    [Getting Started](#getting-started)\n     *    [Simple Benchmark](#simple-benchmark)\n     *    [Passing Arguments](#passing-arguments)\n     *    [Passing Arguments (Part 2)](#passing-arguments-part-2)\n     *    
[CRITERION_BENCHMARK_MAIN and Command-line Options](#criterion_benchmark_main-and-command-line-options)\n     *    [Exporting Results (csv, json, etc.)](#exporting-results-csv-json-etc)\n*    [Building Library and Samples](#building-library-and-samples)\n*    [Generating Single Header](#generating-single-header)\n*    [Contributing](#contributing)\n*    [License](#license)\n\n## Getting Started\n\nLet's say we have this merge sort implementation that needs to be benchmarked.\n\n```cpp\ntemplate\u003ctypename RandomAccessIterator, typename Compare\u003e\nvoid merge_sort(RandomAccessIterator first, RandomAccessIterator last,\n                Compare compare, std::size_t size) {\n  if (size \u003c 2) return;\n  auto middle = first + size / 2;\n  merge_sort(first, middle, compare, size / 2);\n  merge_sort(middle, last, compare, size - size/2);\n  std::inplace_merge(first, middle, last, compare);\n}\n```\n\n### Simple Benchmark\n\nInclude `\u003ccriterion/criterion.hpp\u003e` and you're good to go.\n\n* Use the `BENCHMARK` macro to declare a benchmark\n* Use `SETUP_BENCHMARK` and `TEARDOWN_BENCHMARK` to perform setup and teardown tasks\n  - These tasks are not part of the measurement\n\n```cpp\n#include \u003ccriterion/criterion.hpp\u003e\n\nBENCHMARK(MergeSort)\n{\n  SETUP_BENCHMARK(\n    const auto size = 100;\n    std::vector\u003cint\u003e vec(size, 0); // vector of size 100\n  )\n \n  // Code to be benchmarked\n  merge_sort(vec.begin(), vec.end(), std::less\u003cint\u003e(), size);\n  \n  TEARDOWN_BENCHMARK(\n    vec.clear();\n  )\n}\n\nCRITERION_BENCHMARK_MAIN()\n```\n\n\u003cp align=\"center\"\u003e\n  \u003cimg height=\"300\" src=\"img/merge_sort_single.gif\"/\u003e  \n\u003c/p\u003e \n\nWhat if we want to run this benchmark on a variety of sizes?\n\n### Passing Arguments\n\n* The `BENCHMARK` macro can take typed parameters\n* Use `GET_ARGUMENT(n)` to get the nth argument passed to the benchmark\n* For benchmarks that require arguments, use 
`INVOKE_BENCHMARK_FOR_EACH` and provide arguments\n\n```cpp\n#include \u003ccriterion/criterion.hpp\u003e\n\nBENCHMARK(MergeSort, std::size_t) // \u003c- one parameter to be passed to the benchmark\n{\n  SETUP_BENCHMARK(\n    const auto size = GET_ARGUMENT(0); // \u003c- get the argument passed to the benchmark\n    std::vector\u003cint\u003e vec(size, 0);\n  )\n \n  // Code to be benchmarked\n  merge_sort(vec.begin(), vec.end(), std::less\u003cint\u003e(), size);\n  \n  TEARDOWN_BENCHMARK(\n    vec.clear();\n  )\n}\n\n// Run the above benchmark for a number of inputs:\n\nINVOKE_BENCHMARK_FOR_EACH(MergeSort,\n  (\"/10\", 10),\n  (\"/100\", 100),\n  (\"/1K\", 1000),\n  (\"/10K\", 10000),\n  (\"/100K\", 100000)\n)\n\nCRITERION_BENCHMARK_MAIN()\n```\n\n\u003cp align=\"center\"\u003e\n  \u003cimg height=\"600\" src=\"img/merge_sort_with_params.gif\"/\u003e  \n\u003c/p\u003e \n\n### Passing Arguments (Part 2)\n\nLet's say we have the following struct and we need to create a `std::shared_ptr` to it.\n\n```cpp\nstruct Song {\n  std::string artist;\n  std::string title;\n  Song(const std::string\u0026 artist_, const std::string\u0026 title_) :\n    artist{ artist_ }, title{ title_ } {}\n};\n```\n\nHere are two implementations for constructing the `std::shared_ptr`:\n\n```cpp\n// Functions to be tested\nauto Create_With_New() { \n  return std::shared_ptr\u003cSong\u003e(new Song(\"Black Sabbath\", \"Paranoid\")); \n}\n\nauto Create_With_MakeShared() { \n  return std::make_shared\u003cSong\u003e(\"Black Sabbath\", \"Paranoid\"); \n}\n```\n\nWe can set up a single benchmark that takes a `std::function\u003c\u003e` and measures its performance, as shown below.\n\n```cpp\nBENCHMARK(ConstructSharedPtr, std::function\u003cstd::shared_ptr\u003cSong\u003e()\u003e) \n{\n  SETUP_BENCHMARK(\n    auto test_function = GET_ARGUMENT(0);\n  )\n\n  // Code to be benchmarked\n  auto song_ptr = test_function();\n}\n\nINVOKE_BENCHMARK_FOR_EACH(ConstructSharedPtr, \n  (\"/new\", Create_With_New),\n  
(\"/make_shared\", Create_With_MakeShared)\n)\n\nCRITERION_BENCHMARK_MAIN()\n```\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"img/make_shared.gif\"/\u003e  \n\u003c/p\u003e \n\n### CRITERION_BENCHMARK_MAIN and Command-line Options\n\n`CRITERION_BENCHMARK_MAIN()` provides a main function that:\n\n1. Handles command-line arguments,\n2. Runs the registered benchmarks, and\n3. Exports results to a file if requested by the user.\n\nHere's the help/man page generated by the main function:\n\n```console\nfoo@bar:~$ ./benchmarks -h\n\nNAME\n     ./benchmarks -- Run Criterion benchmarks\n\nSYNOPSIS\n     ./benchmarks\n           [-w,--warmup \u003cnumber\u003e]\n           [-l,--list] [--list_filtered \u003cregex\u003e] [-r,--run_filtered \u003cregex\u003e]\n           [-e,--export_results {csv,json,md,asciidoc} \u003cfilename\u003e]\n           [-q,--quiet] [-h,--help]\nDESCRIPTION\n     This microbenchmarking utility repeatedly executes a list of benchmarks,\n     statistically analyzing and reporting on the temporal behavior of the executed code.\n\n     The options are as follows:\n\n     -w,--warmup number\n          Number of warmup runs (at least 1) to execute before the benchmark (default=3)\n\n     -l,--list\n          Print the list of available benchmarks\n\n     --list_filtered regex\n          Print a filtered list of available benchmarks (based on user-provided regex)\n\n     -r,--run_filtered regex\n          Run a filtered list of available benchmarks (based on user-provided regex)\n\n     -e,--export_results format filename\n          Export benchmark results to file. 
The following are the supported formats.\n\n          csv       Comma separated values (CSV) delimited text file\n          json      JavaScript Object Notation (JSON) text file\n          md        Markdown (md) text file\n          asciidoc  AsciiDoc (asciidoc) text file\n\n     -q,--quiet\n          Run benchmarks quietly, suppressing activity indicators\n\n     -h,--help\n          Print this help message\n\n```\n\n### Exporting Results (csv, json, etc.)\n\nBenchmarks can be exported to one of a number of formats: `.csv`, `.json`, `.md`, and `.asciidoc`.\n\nUse `--export_results` (or `-e`) to export results to one of the supported formats.\n\n```console\nfoo@bar:~$ ./vector_sort -e json results.json -q # run quietly and export to JSON\n\nfoo@bar:~$ cat results.json\n{\n  \"benchmarks\": [\n    {\n      \"name\": \"VectorSort/100\",\n      \"warmup_runs\": 2,\n      \"iterations\": 2857140,\n      \"mean_execution_time\": 168.70,\n      \"fastest_execution_time\": 73.00,\n      \"slowest_execution_time\": 88809.00,\n      \"lowest_rsd_execution_time\": 84.05,\n      \"lowest_rsd_percentage\": 3.29,\n      \"lowest_rsd_index\": 57278,\n      \"average_iteration_performance\": 5927600.84,\n      \"fastest_iteration_performance\": 13698630.14,\n      \"slowest_iteration_performance\": 11260.12\n    },\n    {\n      \"name\": \"VectorSort/1000\",\n      \"warmup_runs\": 2,\n      \"iterations\": 2254280,\n      \"mean_execution_time\": 1007.70,\n      \"fastest_execution_time\": 640.00,\n      \"slowest_execution_time\": 102530.00,\n      \"lowest_rsd_execution_time\": 647.45,\n      \"lowest_rsd_percentage\": 0.83,\n      \"lowest_rsd_index\": 14098,\n      \"average_iteration_performance\": 992355.48,\n      \"fastest_iteration_performance\": 1562500.00,\n      \"slowest_iteration_performance\": 9753.24\n    },\n    {\n      \"name\": \"VectorSort/10000\",\n      \"warmup_runs\": 2,\n      \"iterations\": 259320,\n      \"mean_execution_time\": 8833.26,\n      
\"fastest_execution_time\": 6276.00,\n      \"slowest_execution_time\": 114548.00,\n      \"lowest_rsd_execution_time\": 8374.15,\n      \"lowest_rsd_percentage\": 0.11,\n      \"lowest_rsd_index\": 7905,\n      \"average_iteration_performance\": 113208.45,\n      \"fastest_iteration_performance\": 159337.16,\n      \"slowest_iteration_performance\": 8729.96\n    }\n  ]\n}\n```\n\n## Building Library and Samples\n\n```bash\ncmake -Hall -Bbuild\ncmake --build build\n\n# run `merge_sort` sample\n./build/samples/merge_sort/merge_sort\n```\n\n## Generating Single Header\n\n```bash\npython3 utils/amalgamate/amalgamate.py -c single_include.json -s .\n```\n\n## Contributing\nContributions are welcome; have a look at the [CONTRIBUTING.md](CONTRIBUTING.md) document for more information.\n\n## License\nThe project is available under the [MIT](https://opensource.org/licenses/MIT) license.\n","funding_links":[],"categories":["Benchmarking"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fp-ranav%2Fcriterion","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fp-ranav%2Fcriterion","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fp-ranav%2Fcriterion/lists"}