{"id":20585698,"url":"https://github.com/chronoxor/cppbenchmark","last_synced_at":"2026-02-27T22:16:57.427Z","repository":{"id":34442501,"uuid":"38376789","full_name":"chronoxor/CppBenchmark","owner":"chronoxor","description":"Performance benchmark framework for C++ with nanoseconds measure precision","archived":false,"fork":false,"pushed_at":"2026-02-21T22:32:01.000Z","size":45843,"stargazers_count":325,"open_issues_count":0,"forks_count":51,"subscribers_count":14,"default_branch":"master","last_synced_at":"2026-02-22T02:23:28.244Z","etag":null,"topics":["benchmark-framework","benchmarks","microbenchmarks","performance"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chronoxor.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2015-07-01T14:49:05.000Z","updated_at":"2026-02-21T22:32:04.000Z","dependencies_parsed_at":"2024-05-01T23:11:37.759Z","dependency_job_id":"3f100d90-8d88-4bc2-8b8a-3ab1510992f0","html_url":"https://github.com/chronoxor/CppBenchmark","commit_stats":null,"previous_names":[],"tags_count":6,"template":false,"template_full_name":null,"purl":"pkg:github/chronoxor/CppBenchmark","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chronoxor%2FCppBenchmark","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chronoxor%2FCppBenchmark/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chronoxor%2F
CppBenchmark/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chronoxor%2FCppBenchmark/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chronoxor","download_url":"https://codeload.github.com/chronoxor/CppBenchmark/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chronoxor%2FCppBenchmark/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29917288,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-27T19:37:42.220Z","status":"ssl_error","status_checked_at":"2026-02-27T19:37:41.463Z","response_time":57,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark-framework","benchmarks","microbenchmarks","performance"],"created_at":"2024-11-16T07:09:00.334Z","updated_at":"2026-02-27T22:16:57.410Z","avatar_url":"https://github.com/chronoxor.png","language":"C++","readme":"# CppBenchmark\n\n[![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)\n[![Release](https://img.shields.io/github/release/chronoxor/CppBenchmark.svg?sort=semver)](https://github.com/chronoxor/CppBenchmark/releases)\n\u003cbr/\u003e\n[![Linux (clang)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-linux-clang.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-linux-clang.yml)\n[![Linux 
(gcc)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-linux-gcc.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-linux-gcc.yml)\n[![MacOS](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-macos.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-macos.yml)\n\u003cbr/\u003e\n[![Windows (Cygwin)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-cygwin.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-cygwin.yml)\n[![Windows (MSYS2)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-msys2.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-msys2.yml)\n[![Windows (MinGW)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-mingw.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-mingw.yml)\n[![Windows (Visual Studio)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-vs.yml/badge.svg)](https://github.com/chronoxor/CppBenchmark/actions/workflows/build-windows-vs.yml)\n\nThe C++ Benchmark Library allows you to create performance benchmarks of code to investigate\naverage/minimal/maximal execution time, item processing speed, and I/O throughput.\nThe CppBenchmark library has lots of [features](#features) and supports benchmarks for\n[different kinds of scenarios](#benchmark-examples) such as micro-benchmarks, benchmarks\nwith fixtures and parameters, thread benchmarks, and the producers/consumers pattern.\n\n[CppBenchmark API reference](https://chronoxor.github.io/CppBenchmark/index.html)\n\n# Contents\n  * [Features](#features)\n  * [Requirements](#requirements)\n  * [How to build?](#how-to-build)\n  * [How to create a benchmark?](#how-to-create-a-benchmark)\n  * [Benchmark examples](#benchmark-examples)\n    * [Example 1: Benchmark of a function 
call](#example-1-benchmark-of-a-function-call)\n    * [Example 2: Benchmark with cancelation](#example-2-benchmark-with-cancelation)\n    * [Example 3: Benchmark with static fixture](#example-3-benchmark-with-static-fixture)\n    * [Example 4: Benchmark with dynamic fixture](#example-4-benchmark-with-dynamic-fixture)\n    * [Example 5: Benchmark with parameters](#example-5-benchmark-with-parameters)\n    * [Example 6: Benchmark class](#example-6-benchmark-class)\n    * [Example 7: Benchmark I/O operations](#example-7-benchmark-io-operations)\n    * [Example 8: Benchmark latency with auto update](#example-8-benchmark-latency-with-auto-update)\n    * [Example 9: Benchmark latency with manual update](#example-9-benchmark-latency-with-manual-update)\n    * [Example 10: Benchmark threads](#example-10-benchmark-threads)\n    * [Example 11: Benchmark threads with fixture](#example-11-benchmark-threads-with-fixture)\n    * [Example 12: Benchmark single producer, single consumer pattern](#example-12-benchmark-single-producer-single-consumer-pattern)\n    * [Example 13: Benchmark multiple producers, multiple consumers pattern](#example-13-benchmark-multiple-producers-multiple-consumers-pattern)\n    * [Example 14: Dynamic benchmarks](#example-14-dynamic-benchmarks)\n  * [Command line options](#command-line-options)\n\n# Features\n* Cross platform (Linux, MacOS, Windows)\n* [Micro-benchmarks](#example-1-benchmark-of-a-function-call)\n* Benchmarks with [static fixtures](#example-3-benchmark-with-static-fixture) and [dynamic fixtures](#example-4-benchmark-with-dynamic-fixture)\n* Benchmarks with [parameters](#example-5-benchmark-with-parameters) (single, pair, triple parameters, ranges, ranges with selectors)\n* [Benchmark infinite run with cancelation](#example-2-benchmark-with-cancelation)\n* [Benchmark items processing speed](#example-6-benchmark-class)\n* [Benchmark I/O throughput](#example-7-benchmark-io-operations)\n* [Benchmark 
latency](#example-8-benchmark-latency-with-auto-update) with [High Dynamic Range (HDR) Histograms](https://hdrhistogram.github.io/HdrHistogram/)\n* [Benchmark threads](#example-10-benchmark-threads)\n* [Benchmark producers/consumers pattern](#example-12-benchmark-single-producer-single-consumer-pattern)\n* Different reporting formats: console, csv, json\n* Colored console progress and report\n\n![Console colored report](https://github.com/chronoxor/CppBenchmark/raw/master/images/console.png)\n\n# Requirements\n* Linux\n* MacOS\n* Windows\n* [cmake](https://www.cmake.org)\n* [gcc](https://gcc.gnu.org)\n* [git](https://git-scm.com)\n* [gil](https://github.com/chronoxor/gil.git)\n* [python3](https://www.python.org)\n\nOptional:\n* [clang](https://clang.llvm.org)\n* [CLion](https://www.jetbrains.com/clion)\n* [Cygwin](https://cygwin.com)\n* [MSYS2](https://www.msys2.org)\n* [MinGW](https://mingw-w64.org/doku.php)\n* [Visual Studio](https://www.visualstudio.com)\n\n# How to build?\n\n### Install [gil (git links) tool](https://github.com/chronoxor/gil)\n```shell\npip3 install gil\n```\n\n### Setup repository\n```shell\ngit clone https://github.com/chronoxor/CppBenchmark.git\ncd CppBenchmark\ngil update\n```\n\n### Linux\n```shell\ncd build\n./unix.sh\n```\n\n### MacOS\n```shell\ncd build\n./unix.sh\n```\n\n### Windows (Cygwin)\n```shell\ncd build\nunix.bat\n```\n\n### Windows (MSYS2)\n```shell\ncd build\nunix.bat\n```\n\n### Windows (MinGW)\n```shell\ncd build\nmingw.bat\n```\n\n### Windows (Visual Studio)\n```shell\ncd build\nvs.bat\n```\n\n# How to create a benchmark?\n1. [Build CppBenchmark library](#how-to-build)\n2. Create a new *.cpp file\n3. Insert #include \"benchmark/cppbenchmark.h\"\n4. Add benchmark code (examples for different scenarios you can find below)\n5. Insert BENCHMARK_MAIN() at the end\n6. Compile the *.cpp file and link it with CppBenchmark library\n7. 
Run it (see also the possible [command line options](#command-line-options))\n\n# Benchmark examples\n\n## Example 1: Benchmark of a function call\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cmath.h\u003e\n\n// Benchmark sin() call for 5 seconds (by default).\n// Make 5 attempts (by default) and choose the one with the best time result.\nBENCHMARK(\"sin\")\n{\n    sin(123.456);\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: sin()\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: sin()\nAverage time: 6 ns/op\nMinimal time: 6 ns/op\nMaximal time: 6 ns/op\nTotal time: 858.903 ms\nTotal operations: 130842248\nOperations throughput: 152336350 ops/s\n===============================================================================\n```\n\n## Example 2: Benchmark with cancelation\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n// Benchmark rand() call until it returns 0.\n// The benchmark will print the operations count required to get the 'rand() == 0' case.\n// Make 10 attempts and choose the one with the best time result.\nBENCHMARK(\"rand-till-zero\", Settings().Infinite().Attempts(10))\n{\n    if (rand() == 0)\n        context.Cancel();\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: rand()-till-zero\nAttempts: 10\n-------------------------------------------------------------------------------\nPhase: rand()-till-zero\nAverage time: 15 ns/op\nMinimal time: 15 ns/op\nMaximal time: 92 ns/op\nTotal time: 159.936 mcs\nTotal operations: 10493\nOperations throughput: 65607492 ops/s\n===============================================================================\n```\n\n## Example 3: Benchmark with static fixture\nA static fixture will be constructed once per benchmark, will 
be the same for\neach attempt / operation and will be destroyed at the end of the benchmark.\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003clist\u003e\n#include \u003cvector\u003e\n\ntemplate \u003ctypename T\u003e\nclass ContainerFixture\n{\nprotected:\n    T container;\n\n    ContainerFixture()\n    {\n        for (int i = 0; i \u003c 1000000; ++i)\n            container.push_back(rand());\n    }\n};\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::list\u003cint\u003e\u003e, \"std::list\u003cint\u003e.forward\")\n{\n    for (auto it = container.begin(); it != container.end(); ++it)\n        ++(*it);\n}\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::list\u003cint\u003e\u003e, \"std::list\u003cint\u003e.backward\")\n{\n    for (auto it = container.rbegin(); it != container.rend(); ++it)\n        ++(*it);\n}\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::vector\u003cint\u003e\u003e, \"std::vector\u003cint\u003e.forward\")\n{\n    for (auto it = container.begin(); it != container.end(); ++it)\n        ++(*it);\n}\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::vector\u003cint\u003e\u003e, \"std::vector\u003cint\u003e.backward\")\n{\n    for (auto it = container.rbegin(); it != container.rend(); ++it)\n        ++(*it);\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::list\u003cint\u003e-forward\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::list\u003cint\u003e-forward\nAverage time: 6.332 ms/op\nMinimal time: 6.332 ms/op\nMaximal time: 6.998 ms/op\nTotal time: 4.958 s\nTotal operations: 783\nOperations throughput: 157 ops/s\n===============================================================================\nBenchmark: std::list\u003cint\u003e-backward\nAttempts: 5\nDuration: 5 
seconds\n-------------------------------------------------------------------------------\nPhase: std::list\u003cint\u003e-backward\nAverage time: 7.883 ms/op\nMinimal time: 7.883 ms/op\nMaximal time: 8.196 ms/op\nTotal time: 4.911 s\nTotal operations: 623\nOperations throughput: 126 ops/s\n===============================================================================\nBenchmark: std::vector\u003cint\u003e-forward\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::vector\u003cint\u003e-forward\nAverage time: 298.114 mcs/op\nMinimal time: 298.114 mcs/op\nMaximal time: 308.209 mcs/op\nTotal time: 4.852 s\nTotal operations: 16276\nOperations throughput: 3354 ops/s\n===============================================================================\nBenchmark: std::vector\u003cint\u003e-backward\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::vector\u003cint\u003e-backward\nAverage time: 316.412 mcs/op\nMinimal time: 316.412 mcs/op\nMaximal time: 350.224 mcs/op\nTotal time: 4.869 s\nTotal operations: 15390\nOperations throughput: 3160 ops/s\n===============================================================================\n```\n\n## Example 4: Benchmark with dynamic fixture\nDynamic fixture can be used to prepare benchmark before each attempt with\nInitialize() / Cleanup() methods. 
You can access the current benchmark\ncontext in dynamic fixture methods.\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cdeque\u003e\n#include \u003clist\u003e\n#include \u003cvector\u003e\n\ntemplate \u003ctypename T\u003e\nclass ContainerFixture : public virtual CppBenchmark::Fixture\n{\nprotected:\n    T container;\n\n    void Initialize(CppBenchmark::Context\u0026 context) override { container = T(); }\n    void Cleanup(CppBenchmark::Context\u0026 context) override { container.clear(); }\n};\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::list\u003cint\u003e\u003e, \"std::list\u003cint\u003e.push_back\")\n{\n    container.push_back(0);\n}\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::vector\u003cint\u003e\u003e, \"std::vector\u003cint\u003e.push_back\")\n{\n    container.push_back(0);\n}\n\nBENCHMARK_FIXTURE(ContainerFixture\u003cstd::deque\u003cint\u003e\u003e, \"std::deque\u003cint\u003e.push_back\")\n{\n    container.push_back(0);\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::list\u003cint\u003e.push_back()\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::list\u003cint\u003e.push_back()\nAverage time: 35 ns/op\nMinimal time: 35 ns/op\nMaximal time: 39 ns/op\nTotal time: 2.720 s\nTotal operations: 76213307\nOperations throughput: 28009633 ops/s\n===============================================================================\nBenchmark: std::vector\u003cint\u003e.push_back()\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::vector\u003cint\u003e.push_back()\nAverage time: 5 ns/op\nMinimal time: 5 ns/op\nMaximal time: 5 ns/op\nTotal time: 722.837 ms\nTotal operations: 126890166\nOperations throughput: 175544557 
ops/s\n===============================================================================\nBenchmark: std::deque\u003cint\u003e.push_back()\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::deque\u003cint\u003e.push_back()\nAverage time: 12 ns/op\nMinimal time: 12 ns/op\nMaximal time: 12 ns/op\nTotal time: 1.319 s\nTotal operations: 105369784\nOperations throughput: 79858488 ops/s\n===============================================================================\n```\n\n## Example 5: Benchmark with parameters\nAdditional parameters can be provided to a benchmark through its settings using a fluent\nsyntax. Parameters can be single, pair or triple, provided as a value, as a\nrange, or with a range and a selector function. The benchmark will be launched for\neach parameter combination.\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003calgorithm\u003e\n#include \u003cvector\u003e\n\nclass SortFixture : public virtual CppBenchmark::Fixture\n{\nprotected:\n    std::vector\u003cint\u003e items;\n\n    void Initialize(CppBenchmark::Context\u0026 context) override\n    {\n        items.resize(context.x());\n        std::generate(items.begin(), items.end(), rand);\n    }\n\n    void Cleanup(CppBenchmark::Context\u0026 context) override\n    {\n        items.clear();\n    }\n};\n\nBENCHMARK_FIXTURE(SortFixture, \"std::sort\", Settings().Param(1000000).Param(10000000))\n{\n    std::sort(items.begin(), items.end());\n    context.metrics().AddItems(items.size());\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::sort\nAttempts: 5\nOperations: 1\n-------------------------------------------------------------------------------\nPhase: std::sort(1000000)\nTotal time: 66.976 ms\nTotal items: 1000000\nItems throughput: 14930626 
ops/s\n-------------------------------------------------------------------------------\nPhase: std::sort(10000000)\nTotal time: 644.141 ms\nTotal items: 10000000\nItems throughput: 15524528 ops/s\n===============================================================================\n```\n\n## Example 6: Benchmark class\nYou can also create a benchmark by inheriting from CppBenchmark::Benchmark class\nand implementing Run() method. You can use AddItems() method of a benchmark context\nmetrics to register processed items.\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003calgorithm\u003e\n#include \u003cvector\u003e\n\nclass StdSort : public CppBenchmark::Benchmark\n{\npublic:\n    using Benchmark::Benchmark;\n\nprotected:\n    std::vector\u003cint\u003e items;\n\n    void Initialize(CppBenchmark::Context\u0026 context) override\n    {\n        items.resize(context.x());\n        std::generate(items.begin(), items.end(), rand);\n    }\n\n    void Cleanup(CppBenchmark::Context\u0026 context) override\n    {\n        items.clear();\n    }\n\n    void Run(CppBenchmark::Context\u0026 context) override\n    {\n        std::sort(items.begin(), items.end());\n        context.metrics().AddItems(items.size());\n    }\n};\n\nBENCHMARK_CLASS(StdSort, \"std::sort\", Settings().Param(10000000))\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::sort\nAttempts: 5\nOperations: 1\n-------------------------------------------------------------------------------\nPhase: std::sort(10000000)\nTotal time: 648.461 ms\nTotal items: 10000000\nItems throughput: 15421124 ops/s\n===============================================================================\n```\n\n## Example 7: Benchmark I/O operations\nYou can use AddBytes() method of a benchmark context metrics to register processed data.\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include 
\u003carray\u003e\n\nconst int chunk_size_from = 32;\nconst int chunk_size_to = 4096;\n\n// Create settings for the benchmark which will launch for each chunk size\n// scaled from 32 bytes to 4096 bytes (32, 64, 128, 256, 512, 1024, 2048, 4096).\nconst auto settings = CppBenchmark::Settings()\n    .ParamRange(\n        chunk_size_from, chunk_size_to, [](int from, int to, int\u0026 result)\n        {\n            int r = result;\n            result *= 2;\n            return r;\n        }\n    );\n\nclass FileFixture\n{\npublic:\n    FileFixture()\n    {\n        // Open file for binary write\n        file = fopen(\"fwrite.out\", \"wb\");\n    }\n\n    ~FileFixture()\n    {\n        // Close file\n        fclose(file);\n\n        // Delete file\n        remove(\"fwrite.out\");\n    }\n\nprotected:\n    FILE* file;\n    std::array\u003cchar, chunk_size_to\u003e buffer;\n};\n\nBENCHMARK_FIXTURE(FileFixture, \"fwrite\", settings)\n{\n    fwrite(buffer.data(), sizeof(char), context.x(), file);\n    context.metrics().AddBytes(context.x());\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: fwrite()\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: fwrite()(32)\nAverage time: 55 ns/op\nMinimal time: 55 ns/op\nMaximal time: 108 ns/op\nTotal time: 2.821 s\nTotal operations: 50703513\nTotal bytes: 1.523 GiB\nOperations throughput: 17968501 ops/s\nBytes throughput: 548.363 MiB/s\n-------------------------------------------------------------------------------\nPhase: fwrite()(64)\nAverage time: 93 ns/op\nMinimal time: 93 ns/op\nMaximal time: 162 ns/op\nTotal time: 3.820 s\nTotal operations: 40744084\nTotal bytes: 2.438 GiB\nOperations throughput: 10665202 ops/s\nBytes throughput: 650.975 
MiB/s\n-------------------------------------------------------------------------------\n...\n-------------------------------------------------------------------------------\nPhase: fwrite()(2048)\nAverage time: 8.805 mcs/op\nMinimal time: 8.805 mcs/op\nMaximal time: 11.895 mcs/op\nTotal time: 3.968 s\nTotal operations: 450686\nTotal bytes: 880.252 MiB\nOperations throughput: 113569 ops/s\nBytes throughput: 221.835 MiB/s\n-------------------------------------------------------------------------------\nPhase: fwrite()(4096)\nAverage time: 19.485 mcs/op\nMinimal time: 19.485 mcs/op\nMaximal time: 20.887 mcs/op\nTotal time: 4.906 s\nTotal operations: 251821\nTotal bytes: 983.692 MiB\nOperations throughput: 51319 ops/s\nBytes throughput: 200.478 MiB/s\n===============================================================================\n```\n\n## Example 8: Benchmark latency with auto update\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cchrono\u003e\n#include \u003cthread\u003e\n\nconst auto settings = CppBenchmark::Settings().Latency(1, 1000000000, 5);\n\nBENCHMARK(\"sleep\", settings)\n{\n    std::this_thread::sleep_for(std::chrono::milliseconds(10));\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: sleep\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: sleep\nLatency (Min): 10.014 ms/op\nLatency (Max): 11.377 ms/op\nLatency (Mean): 1.04928e+07\nLatency (StDv): 364511\nTotal time: 4.985 s\nTotal operations: 571\nOperations throughput: 114 ops/s\n===============================================================================\n```\n\nIf the benchmark is launched with **--histograms=100** parameter then a file\nwith [High Dynamic Range (HDR) Histogram](https://hdrhistogram.github.io/HdrHistogram/)\nwill be created - 
[sleep.hdr](https://github.com/chronoxor/CppBenchmark/raw/master/images/sleep.hdr)\n\nFinally you can use [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html)\nin order to generate and analyze latency histogram:\n\n![Sleep HDR Histogram](https://github.com/chronoxor/CppBenchmark/raw/master/images/sleep.png)\n\n## Example 9: Benchmark latency with manual update\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cchrono\u003e\n#include \u003climits\u003e\n\nconst auto settings = CppBenchmark::Settings().Operations(10000000).Latency(1, 1000000000, 5, false);\n\nBENCHMARK(\"high_resolution_clock\", settings)\n{\n    static uint64_t minresolution = std::numeric_limits\u003cuint64_t\u003e::max();\n    static uint64_t maxresolution = std::numeric_limits\u003cuint64_t\u003e::min();\n    static auto latency_timestamp = std::chrono::high_resolution_clock::now();\n    static auto resolution_timestamp = std::chrono::high_resolution_clock::now();\n    static uint64_t count = 0;\n\n    // Get the current timestamp\n    auto current = std::chrono::high_resolution_clock::now();\n\n    // Update operations counter\n    ++count;\n\n    // Register latency metrics\n    uint64_t latency = std::chrono::duration_cast\u003cstd::chrono::nanoseconds\u003e(current - latency_timestamp).count();\n    if (latency \u003e 0)\n    {\n        context.metrics().AddLatency(latency / count);\n        latency_timestamp = current;\n        count = 0;\n    }\n\n    // Register resolution metrics\n    uint64_t resolution = std::chrono::duration_cast\u003cstd::chrono::nanoseconds\u003e(current - resolution_timestamp).count();\n    if (resolution \u003e 0)\n    {\n        if (resolution \u003c minresolution)\n        {\n            minresolution = resolution;\n            context.metrics().SetCustom(\"resolution-min\", minresolution);\n        }\n        if (resolution \u003e maxresolution)\n        {\n            maxresolution = resolution;\n            
context.metrics().SetCustom(\"resolution-max\", maxresolution);\n        }\n        resolution_timestamp = current;\n    }\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: high_resolution_clock\nAttempts: 5\nOperations: 10000000\n-------------------------------------------------------------------------------\nPhase: high_resolution_clock\nLatency (Min): 38 ns/op\nLatency (Max): 1.037 ms/op\nLatency (Mean): 53.0462\nLatency (StDv): 1136.37\nTotal time: 468.924 ms\nTotal operations: 10000000\nOperations throughput: 21325385 ops/s\nCustom values:\n\tresolution-max: 7262968\n\tresolution-min: 311\n===============================================================================\n```\n\nIf the benchmark is launched with the **--histograms=100** parameter then a file\nwith a [High Dynamic Range (HDR) Histogram](https://hdrhistogram.github.io/HdrHistogram/)\nwill be created - [clock.hdr](https://github.com/chronoxor/CppBenchmark/raw/master/images/clock.hdr)\n\nFinally you can use the [HdrHistogram Plotter](https://hdrhistogram.github.io/HdrHistogram/plotFiles.html)\nin order to generate and analyze the latency histogram:\n\n![High resolution clock HDR Histogram](https://github.com/chronoxor/CppBenchmark/raw/master/images/clock.png)\n\n## Example 10: Benchmark threads\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003catomic\u003e\n\n// Create settings for the benchmark which will launch for each\n// set of threads scaled from 1 thread to 8 threads (1, 2, 4, 8).\nconst auto settings = CppBenchmark::Settings()\n    .ThreadsRange(\n        1, 8, [](int from, int to, int\u0026 result)\n        {\n            int r = result;\n            result *= 2;\n            return r;\n        }\n    );\n\nBENCHMARK_THREADS(\"std::atomic++\", settings)\n{\n    static std::atomic\u003cint\u003e counter = 0;\n    counter++;\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the 
following:\n```\n===============================================================================\nBenchmark: std::atomic++\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:1)\nAverage time: 19 ns/op\nMinimal time: 19 ns/op\nMaximal time: 20 ns/op\nTotal time: 2.124 s\nTotal operations: 111355461\nOperations throughput: 52425884 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:1).thread\nAverage time: 5 ns/op\nMinimal time: 5 ns/op\nMaximal time: 5 ns/op\nTotal time: 586.191 ms\nTotal operations: 111355461\nOperations throughput: 189964343 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:2)\nAverage time: 20 ns/op\nMinimal time: 20 ns/op\nMaximal time: 24 ns/op\nTotal time: 3.907 s\nTotal operations: 188624150\nOperations throughput: 48270817 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:2).thread\nAverage time: 23 ns/op\nMinimal time: 23 ns/op\nMaximal time: 30 ns/op\nTotal time: 2.179 s\nTotal operations: 94312075\nOperations throughput: 43270119 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:4)\nAverage time: 18 ns/op\nMinimal time: 18 ns/op\nMaximal time: 19 ns/op\nTotal time: 6.875 s\nTotal operations: 365529364\nOperations throughput: 53160207 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:4).thread\nAverage time: 56 ns/op\nMinimal time: 56 ns/op\nMaximal time: 60 ns/op\nTotal time: 5.142 s\nTotal operations: 91382341\nOperations throughput: 17771705 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:8)\nAverage time: 23 ns/op\nMinimal time: 23 ns/op\nMaximal 
time: 25 ns/op\nTotal time: 7.667 s\nTotal operations: 330867224\nOperations throughput: 43153297 ops/s\n-------------------------------------------------------------------------------\nPhase: std::atomic++(threads:8).thread\nAverage time: 105 ns/op\nMinimal time: 105 ns/op\nMaximal time: 167 ns/op\nTotal time: 4.367 s\nTotal operations: 41358403\nOperations throughput: 9468527 ops/s\n===============================================================================\n```\n\n## Example 11: Benchmark threads with fixture\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003carray\u003e\n#include \u003catomic\u003e\n\n// Create settings for the benchmark which will launch for each\n// set of threads scaled from 1 thread to 8 threads (1, 2, 4, 8).\nconst auto settings = CppBenchmark::Settings()\n    .ThreadsRange(\n        1, 8, [](int from, int to, int\u0026 result)\n        {\n            int r = result;\n            result *= 2;\n            return r;\n        }\n    );\n\nclass Fixture1\n{\nprotected:\n    std::atomic\u003cint\u003e counter;\n};\n\nclass Fixture2 : public virtual CppBenchmark::FixtureThreads\n{\nprotected:\n    std::array\u003cint, 8\u003e counter;\n\n    void InitializeThread(CppBenchmark::ContextThreads\u0026 context) override\n    {\n        counter[CppBenchmark::System::CurrentThreadId() % counter.size()] = 0;\n    }\n\n    void CleanupThread(CppBenchmark::ContextThreads\u0026 context) override\n    {\n        // Thread cleanup code can be placed here...\n    }\n};\n\nBENCHMARK_THREADS_FIXTURE(Fixture1, \"Global counter\", settings)\n{\n    counter++;\n}\n\nBENCHMARK_THREADS_FIXTURE(Fixture2, \"Thread local counter\", settings)\n{\n    counter[CppBenchmark::System::CurrentThreadId() % counter.size()]++;\n}\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: Global counter\nAttempts: 5\nDuration: 5 
seconds\n-------------------------------------------------------------------------------\nPhase: Global counter(threads:1).thread\nAverage time: 5 ns/op\nMinimal time: 5 ns/op\nMaximal time: 5 ns/op\nTotal time: 629.639 ms\nTotal operations: 119518816\nOperations throughput: 189821077 ops/s\n-------------------------------------------------------------------------------\nPhase: Global counter(threads:2).thread\nAverage time: 18 ns/op\nMinimal time: 18 ns/op\nMaximal time: 24 ns/op\nTotal time: 1.860 s\nTotal operations: 101568823\nOperations throughput: 54581734 ops/s\n-------------------------------------------------------------------------------\nPhase: Global counter(threads:4).thread\nAverage time: 57 ns/op\nMinimal time: 57 ns/op\nMaximal time: 66 ns/op\nTotal time: 4.552 s\nTotal operations: 79503346\nOperations throughput: 17464897 ops/s\n-------------------------------------------------------------------------------\nPhase: Global counter(threads:8).thread\nAverage time: 103 ns/op\nMinimal time: 103 ns/op\nMaximal time: 143 ns/op\nTotal time: 4.601 s\nTotal operations: 44597477\nOperations throughput: 9690967 ops/s\n===============================================================================\nBenchmark: Thread local counter\nAttempts: 5\nDuration: 5 seconds\n-------------------------------------------------------------------------------\nPhase: Thread local counter(threads:1).thread\nAverage time: 4 ns/op\nMinimal time: 4 ns/op\nMaximal time: 4 ns/op\nTotal time: 739.689 ms\nTotal operations: 166432112\nOperations throughput: 225002770 ops/s\n-------------------------------------------------------------------------------\nPhase: Thread local counter(threads:2).thread\nAverage time: 9 ns/op\nMinimal time: 9 ns/op\nMaximal time: 10 ns/op\nTotal time: 1.061 s\nTotal operations: 113102777\nOperations throughput: 106564314 ops/s\n-------------------------------------------------------------------------------\nPhase: Thread local 
counter(threads:4).thread\nAverage time: 20 ns/op\nMinimal time: 20 ns/op\nMaximal time: 21 ns/op\nTotal time: 1.944 s\nTotal operations: 94786108\nOperations throughput: 48757481 ops/s\n-------------------------------------------------------------------------------\nPhase: Thread local counter(threads:8).thread\nAverage time: 25 ns/op\nMinimal time: 25 ns/op\nMaximal time: 39 ns/op\nTotal time: 1.784 s\nTotal operations: 71185751\nOperations throughput: 39887088 ops/s\n===============================================================================\n```\n\n## Example 12: Benchmark single producer, single consumer pattern\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cmutex\u003e\n#include \u003cqueue\u003e\n\nconst int items_to_produce = 10000000;\n\n// Create settings for the benchmark which will create 1 producer and 1 consumer\n// and launch the producer in an infinite loop.\nconst auto settings = CppBenchmark::Settings().Infinite().PC(1, 1);\n\nclass MutexQueueBenchmark : public CppBenchmark::BenchmarkPC\n{\npublic:\n    using BenchmarkPC::BenchmarkPC;\n\nprotected:\n    void Initialize(CppBenchmark::Context\u0026 context) override\n    {\n        _queue = std::queue\u003cint\u003e();\n        _count = 0;\n    }\n\n    void Cleanup(CppBenchmark::Context\u0026 context) override\n    {\n        // Benchmark cleanup code can be placed here...\n    }\n\n    void InitializeProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Producer initialize code can be placed here...\n    }\n\n    void CleanupProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Producer cleanup code can be placed here...\n    }\n\n    void InitializeConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Consumer initialize code can be placed here...\n    }\n\n    void CleanupConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Consumer cleanup code can be placed here...\n    }\n\n    void 
RunProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        std::lock_guard\u003cstd::mutex\u003e lock(_mutex);\n\n        // Check if we need to stop production...\n        if (_count \u003e= items_to_produce) {\n            _queue.push(0);\n            context.StopProduce();\n            return;\n        }\n\n        // Produce item\n        _queue.push(++_count);\n    }\n\n    void RunConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        std::lock_guard\u003cstd::mutex\u003e lock(_mutex);\n\n        if (_queue.size() \u003e 0) {\n            // Consume item\n            int value = _queue.front();\n            _queue.pop();\n            // Check if we need to stop consumption...\n            if (value == 0) {\n                context.StopConsume();\n                return;\n            }\n        }\n    }\n\nprivate:\n    std::mutex _mutex;\n    std::queue\u003cint\u003e _queue;\n    int _count;\n};\n\nBENCHMARK_CLASS(MutexQueueBenchmark, \"std::mutex+std::queue\u003cint\u003e\", settings)\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::mutex+std::queue\u003cint\u003e\nAttempts: 5\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1)\nTotal time: 652.176 ms\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1).producer\nAverage time: 50 ns/op\nMinimal time: 50 ns/op\nMaximal time: 53 ns/op\nTotal time: 509.201 ms\nTotal operations: 10000001\nOperations throughput: 19638574 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1).consumer\nAverage time: 64 ns/op\nMinimal time: 64 ns/op\nMaximal time: 67 ns/op\nTotal time: 650.805 
ms\nTotal operations: 10124742\nOperations throughput: 15557246 ops/s\n===============================================================================\n```\n\n## Example 13: Benchmark multiple producers, multiple consumers pattern\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cmutex\u003e\n#include \u003cqueue\u003e\n\nconst int items_to_produce = 10000000;\n\n// Create settings for the benchmark which will create 1/2/4/8 producers and 1/2/4/8 consumers\n// and launch all producers in an infinite loop.\nconst auto settings = CppBenchmark::Settings()\n    .Infinite()\n    .PCRange(\n        1, 8, [](int producers_from, int producers_to, int\u0026 producers_result)\n        {\n            int r = producers_result;\n            producers_result *= 2;\n            return r;\n        },\n        1, 8, [](int consumers_from, int consumers_to, int\u0026 consumers_result)\n        {\n            int r = consumers_result;\n            consumers_result *= 2;\n            return r;\n        }\n    );\n\nclass MutexQueueBenchmark : public CppBenchmark::BenchmarkPC\n{\npublic:\n    using BenchmarkPC::BenchmarkPC;\n\nprotected:\n    void Initialize(CppBenchmark::Context\u0026 context) override\n    {\n        _queue = std::queue\u003cint\u003e();\n        _count = 0;\n    }\n\n    void Cleanup(CppBenchmark::Context\u0026 context) override\n    {\n        // Benchmark cleanup code can be placed here...\n    }\n\n    void InitializeProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Producer initialize code can be placed here...\n    }\n\n    void CleanupProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Producer cleanup code can be placed here...\n    }\n\n    void InitializeConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Consumer initialize code can be placed here...\n    }\n\n    void CleanupConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        // Consumer cleanup 
code can be placed here...\n    }\n\n    void RunProducer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        std::lock_guard\u003cstd::mutex\u003e lock(_mutex);\n\n        // Check if we need to stop production...\n        if (_count \u003e= items_to_produce) {\n            _queue.push(0);\n            context.StopProduce();\n            return;\n        }\n\n        // Produce item\n        _queue.push(++_count);\n    }\n\n    void RunConsumer(CppBenchmark::ContextPC\u0026 context) override\n    {\n        std::lock_guard\u003cstd::mutex\u003e lock(_mutex);\n\n        if (_queue.size() \u003e 0) {\n            // Consume item\n            int value = _queue.front();\n            _queue.pop();\n            // Check if we need to stop consumption...\n            if (value == 0) {\n                context.StopConsume();\n                return;\n            }\n        }\n    }\n\nprivate:\n    std::mutex _mutex;\n    std::queue\u003cint\u003e _queue;\n    int _count;\n};\n\nBENCHMARK_CLASS(MutexQueueBenchmark, \"std::mutex+std::queue\u003cint\u003e\", settings)\n\nBENCHMARK_MAIN()\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: std::mutex+std::queue\u003cint\u003e\nAttempts: 5\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1)\nTotal time: 681.430 ms\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1).producer\nAverage time: 42 ns/op\nMinimal time: 42 ns/op\nMaximal time: 120 ns/op\nTotal time: 427.075 ms\nTotal operations: 10000001\nOperations throughput: 23415052 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:1).consumer\nAverage time: 67 ns/op\nMinimal time: 67 
ns/op\nMaximal time: 120 ns/op\nTotal time: 679.235 ms\nTotal operations: 10000001\nOperations throughput: 14722437 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:2)\nTotal time: 623.887 ms\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:2).producer\nAverage time: 58 ns/op\nMinimal time: 58 ns/op\nMaximal time: 103 ns/op\nTotal time: 582.786 ms\nTotal operations: 10000001\nOperations throughput: 17158941 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:1,consumers:2).consumer\nAverage time: 125 ns/op\nMinimal time: 125 ns/op\nMaximal time: 208 ns/op\nTotal time: 622.654 ms\nTotal operations: 4963799\nOperations throughput: 7971989 ops/s\n-------------------------------------------------------------------------------\n...\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:8,consumers:4)\nTotal time: 820.237 ms\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:8,consumers:4).producer\nAverage time: 835 ns/op\nMinimal time: 835 ns/op\nMaximal time: 1.032 mcs/op\nTotal time: 606.745 ms\nTotal operations: 725823\nOperations throughput: 1196256 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:8,consumers:4).consumer\nAverage time: 213 ns/op\nMinimal time: 213 ns/op\nMaximal time: 264 ns/op\nTotal time: 755.649 ms\nTotal operations: 3543116\nOperations throughput: 4688834 ops/s\n-------------------------------------------------------------------------------\nPhase: 
std::mutex+std::queue\u003cint\u003e(producers:8,consumers:8)\nTotal time: 824.811 ms\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:8,consumers:8).producer\nAverage time: 485 ns/op\nMinimal time: 485 ns/op\nMaximal time: 565 ns/op\nTotal time: 743.897 ms\nTotal operations: 1533043\nOperations throughput: 2060824 ops/s\n-------------------------------------------------------------------------------\nPhase: std::mutex+std::queue\u003cint\u003e(producers:8,consumers:8).consumer\nAverage time: 489 ns/op\nMinimal time: 489 ns/op\nMaximal time: 648 ns/op\nTotal time: 676.364 ms\nTotal operations: 1382941\nOperations throughput: 2044668 ops/s\n===============================================================================\n```\n\n## Example 14: Dynamic benchmarks\nDynamic benchmarks are useful when you have a working program and want to benchmark its\ncritical parts and code fragments. In this case, just include the cppbenchmark.h header and use the\nBENCHCODE_SCOPE(), BENCHCODE_START(), BENCHCODE_STOP() and BENCHCODE_REPORT() macros. All of these\nmacros provide easy access to methods of the static [Executor](http://chronoxor.github.io/CppBenchmark/class_cpp_benchmark_1_1_executor.html) class,\nwhich you may also use directly as a singleton. 
All functionality provided for dynamic benchmarks is\nthread-safe, synchronized with a mutex (so each call costs some nanoseconds).\n\n```c++\n#include \"benchmark/cppbenchmark.h\"\n\n#include \u003cchrono\u003e\n#include \u003cthread\u003e\n#include \u003cvector\u003e\n\nconst int THREADS = 8;\n\nvoid init()\n{\n    auto benchmark = BENCHCODE_SCOPE(\"Initialization\");\n\n    std::this_thread::sleep_for(std::chrono::seconds(2));\n}\n\nvoid calculate()\n{\n    auto benchmark = BENCHCODE_SCOPE(\"Calculate\");\n\n    for (int i = 0; i \u003c 5; ++i) {\n        auto phase1 = benchmark-\u003eStartPhase(\"Calculate.1\");\n\n        std::this_thread::sleep_for(std::chrono::milliseconds(100));\n\n        phase1-\u003eStopPhase();\n    }\n\n    auto phase2 = benchmark-\u003eStartPhase(\"Calculate.2\");\n    {\n        auto phase21 = benchmark-\u003eStartPhase(\"Calculate.2.1\");\n\n        std::this_thread::sleep_for(std::chrono::milliseconds(200));\n\n        phase21-\u003eStopPhase();\n\n        auto phase22 = benchmark-\u003eStartPhase(\"Calculate.2.2\");\n\n        std::this_thread::sleep_for(std::chrono::milliseconds(300));\n\n        phase22-\u003eStopPhase();\n    }\n    phase2-\u003eStopPhase();\n\n    for (int i = 0; i \u003c 3; ++i) {\n        auto phase3 = benchmark-\u003eStartPhase(\"Calculate.3\");\n\n        std::this_thread::sleep_for(std::chrono::milliseconds(400));\n\n        phase3-\u003eStopPhase();\n    }\n}\n\nvoid cleanup()\n{\n    BENCHCODE_START(\"Cleanup\");\n\n    std::this_thread::sleep_for(std::chrono::seconds(1));\n\n    BENCHCODE_STOP(\"Cleanup\");\n}\n\nint main(int argc, char** argv)\n{\n    // Initialization\n    init();\n\n    // Start parallel calculations\n    std::vector\u003cstd::thread\u003e threads;\n    for (int i = 0; i \u003c THREADS; ++i)\n        threads.push_back(std::thread(calculate));\n\n    // Wait for all threads\n    for (auto\u0026 thread : threads)\n        thread.join();\n\n    // Cleanup\n    cleanup();\n\n    // Report 
benchmark results\n    BENCHCODE_REPORT();\n\n    return 0;\n}\n```\n\nReport fragment is the following:\n```\n===============================================================================\nBenchmark: Initialization\nAttempts: 1\nOperations: 1\n-------------------------------------------------------------------------------\nPhase: Initialization\nTotal time: 2.002 s\n===============================================================================\nBenchmark: Calculate\nAttempts: 1\nOperations: 1\n-------------------------------------------------------------------------------\nPhase: Calculate\nTotal time: 2.200 s\n-------------------------------------------------------------------------------\nPhase: Calculate.1\nAverage time: 100.113 ms/op\nMinimal time: 93.337 ms/op\nMaximal time: 107.303 ms/op\nTotal time: 500.565 ms\nTotal operations: 5\nOperations throughput: 9 ops/s\n-------------------------------------------------------------------------------\nPhase: Calculate.2\nTotal time: 499.420 ms\n-------------------------------------------------------------------------------\nPhase: Calculate.2.1\nTotal time: 199.514 ms\n-------------------------------------------------------------------------------\nPhase: Calculate.2.2\nTotal time: 299.755 ms\n-------------------------------------------------------------------------------\nPhase: Calculate.3\nAverage time: 399.920 ms/op\nMinimal time: 399.726 ms/op\nMaximal time: 400.365 ms/op\nTotal time: 1.199 s\nTotal operations: 3\nOperations throughput: 2 ops/s\n===============================================================================\nBenchmark: Cleanup\nAttempts: 1\nOperations: 1\n-------------------------------------------------------------------------------\nPhase: Cleanup\nTotal time: 1.007 s\n===============================================================================\n```\n\n# Command line options\nWhen you create and build a benchmark you can run it with the following command line options:\n* **--version**  
- Show program's version number and exit\n* **-h, --help** - Show this help message and exit\n* **-f FILTER, --filter=FILTER** - Filter benchmarks by the given regexp pattern\n* **-l, --list** - List all available benchmarks\n* **-o OUTPUT, --output=OUTPUT** - Output format (console, csv, json). Default: console\n* **-q, --quiet** - Launch in quiet mode. No progress will be shown!\n* **-r HISTOGRAMS, --histograms=HISTOGRAMS** - Create High Dynamic Range (HDR) Histogram files with a given resolution. Default: 0\n