{"id":13424594,"url":"https://github.com/iboB/picobench","last_synced_at":"2025-03-15T18:35:28.679Z","repository":{"id":39640914,"uuid":"115119545","full_name":"iboB/picobench","owner":"iboB","description":"A micro microbenchmarking library for C++11 in a single header file","archived":false,"fork":false,"pushed_at":"2024-03-06T19:15:16.000Z","size":179,"stargazers_count":212,"open_issues_count":0,"forks_count":21,"subscribers_count":14,"default_branch":"master","last_synced_at":"2025-03-10T09:50:38.380Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/iboB.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-12-22T13:52:13.000Z","updated_at":"2025-02-28T15:25:38.000Z","dependencies_parsed_at":"2023-11-24T13:29:09.903Z","dependency_job_id":"8d7df1ae-1b84-43c7-a519-5fc0fb5d2f17","html_url":"https://github.com/iboB/picobench","commit_stats":{"total_commits":92,"total_committers":4,"mean_commits":23.0,"dds":0.04347826086956519,"last_synced_commit":"7b5de5ab7dad0a9d6627e63c8757c9c4f0c6b1b3"},"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iboB%2Fpicobench","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iboB%2Fpicobench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iboB%2Fpicobench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iboB%2Fpicobench/manif
ests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/iboB","download_url":"https://codeload.github.com/iboB/picobench/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243775891,"owners_count":20346281,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T00:00:56.778Z","updated_at":"2025-03-15T18:35:23.665Z","avatar_url":"https://github.com/iboB.png","language":"C++","readme":"\n# picobench\n[![Language](https://img.shields.io/badge/language-C++-blue.svg)](https://isocpp.org/) [![Standard](https://img.shields.io/badge/C%2B%2B-11-blue.svg)](https://en.wikipedia.org/wiki/C%2B%2B#Standardization) [![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)\n\n[![Test](https://github.com/iboB/picobench/actions/workflows/test.yml/badge.svg)](https://github.com/iboB/picobench/actions/workflows/test.yml)\n\npicobench is a tiny (micro) microbenchmarking library in a single header file.\n\nIt's designed to be easy to use and integrate and fast to compile while covering the most common features of a microbenchmarking library.\n\n## Example usage\n\nHere's the complete code of a microbenchmark which compares adding elements to a `std::vector` with and without using `reserve`:\n\n```c++\n#define PICOBENCH_IMPLEMENT_WITH_MAIN\n#include \"picobench/picobench.hpp\"\n\n#include \u003cvector\u003e\n#include \u003ccstdlib\u003e // for rand\n\n// Benchmarking function written by the user:\nstatic void rand_vector(picobench::state\u0026 s)\n{\n    
```c++
#define PICOBENCH_IMPLEMENT_WITH_MAIN
#include "picobench/picobench.hpp"

#include <vector>
#include <cstdlib> // for rand

// Benchmarking function written by the user:
static void rand_vector(picobench::state& s)
{
    std::vector<int> v;
    for (auto _ : s)
    {
        v.push_back(rand());
    }
}
PICOBENCH(rand_vector); // Register the above function with picobench

// Another benchmarking function:
static void rand_vector_reserve(picobench::state& s)
{
    std::vector<int> v;
    v.reserve(s.iterations());
    for (auto _ : s)
    {
        v.push_back(rand());
    }
}
PICOBENCH(rand_vector_reserve);
```

The output of this benchmark might look like this:

```
 Name (* = baseline)      |   Dim   |  Total ms |  ns/op  |Baseline| Ops/second
--------------------------|--------:|----------:|--------:|-------:|----------:
 rand_vector *            |       8 |     0.001 |     167 |      - |  5974607.9
 rand_vector_reserve      |       8 |     0.000 |      55 |  0.329 | 18181818.1
 rand_vector *            |      64 |     0.004 |      69 |      - | 14343343.8
 rand_vector_reserve      |      64 |     0.002 |      27 |  0.400 | 35854341.7
 rand_vector *            |     512 |     0.017 |      33 |      - | 30192239.7
 rand_vector_reserve      |     512 |     0.012 |      23 |  0.710 | 42496679.9
 rand_vector *            |    4096 |     0.181 |      44 |      - | 22607850.9
 rand_vector_reserve      |    4096 |     0.095 |      23 |  0.527 | 42891848.9
 rand_vector *            |    8196 |     0.266 |      32 |      - | 30868196.3
 rand_vector_reserve      |    8196 |     0.207 |      25 |  0.778 | 39668749.5
```

...which tells us that using `reserve` gives a noticeable performance gain, but that the effect becomes less pronounced as the number of inserted elements grows.
## Documentation

To use picobench, you need to include `picobench.hpp` either by copying it into your project or by adding this repo as a submodule of yours.

In one compilation unit (.cpp file) of the module in which you use picobench (typically the benchmark executable), you need to define `PICOBENCH_IMPLEMENT_WITH_MAIN` (or `PICOBENCH_IMPLEMENT` if you want to write your own `main` function).

### Creating benchmarks

A benchmark is a function which you've written with the signature `void (picobench::state& s)`. You need to register the function with the macro `PICOBENCH(func_name)`, whose only argument is the function's name, as shown in the example above.

The library will run the benchmark function several times with different numbers of iterations, to simulate different problem spaces, then collect the results in a report.

Typically a benchmark has a loop. To run the loop, use the `picobench::state` argument in a range-based for loop in your function. The time spent looping is measured for the benchmark. You can have initialization/deinitialization code outside of the loop and it won't be measured.

You can have multiple benchmarks in multiple files. All of them will be run when the executable starts.

Use `state::iterations` as shown in the example to make initialization based on how many iterations the loop will make.

If you don't want the automatic time measurement, you can use `state::start_timer` and `state::stop_timer` to measure it manually, or use the RAII class `picobench::scope` for semi-automatic measurement.

Here's an example of a couple of benchmarks which do not use the range-based for loop for time measurement:

```c++
void my_func(); // Function you want to benchmark
static void benchmark_my_func(picobench::state& s)
{
    s.start_timer(); // Manual start
    for (int i=0; i<s.iterations(); ++i)
        my_func();
    s.stop_timer(); // Manual stop
}
PICOBENCH(benchmark_my_func);

void my_func2();
static void benchmark_my_func2(picobench::state& s)
{
    custom_init(); // Some user-defined initialization
    picobench::scope scope(s); // Constructor starts measurement. Destructor stops it
    for (int i=0; i<s.iterations(); ++i)
        my_func2();
}
PICOBENCH(benchmark_my_func2);
```
### Custom main function

If you write your own `main` function, you need to add the following to it in order to run the benchmarks:

```c++
    picobench::runner runner;
    // Optionally parse command line
    runner.parse_cmd_line(argc, argv);
    return runner.run();
```

For even finer control of the run, instead of `run` you can call the functions explicitly:

```c++
    picobench::runner runner;
    // Optionally parse command line
    runner.parse_cmd_line(argc, argv);
    if (runner.should_run()) // Cmd line may have disabled benchmarks
    {
        runner.run_benchmarks();
        auto report = runner.generate_report();
        // Then to output the data in the report use
        report.to_text(std::cout); // Default
        // or
        report.to_text_concise(std::cout); // No iterations breakdown
        // or
        report.to_csv(std::cout); // Outputs in csv format. Most detailed
    }
```

Instead of `std::cout` you may want to use another `std::ostream` instance of your choice.

As mentioned above, `report.to_text_concise(ostream)` outputs a report without the iterations breakdown. With the first example of benchmarking adding elements to a `std::vector`, the output would be this:

```
 Name (* = baseline)      |  ns/op  | Baseline |  Ops/second
--------------------------|--------:|---------:|-----------:
 rand_vector *            |      36 |        - |  27427782.7
 rand_vector_reserve      |      24 |    0.667 |  40754573.7
```

Note that in this case the information that the effect of using `reserve` gets less prominent with more elements is lost.
### Suites

You can optionally create suites of benchmarks. If you don't, all benchmarks in the module are assumed to be in the default suite.

To create a suite, write `PICOBENCH_SUITE("suite name in quotes");` and then every benchmark below this line will be a part of this suite. You can have benchmarks in many files in the same suite. Just use the same string for its name.

### Baseline

All benchmarks in a suite are assumed to be related, and one of them is dubbed the "baseline". In the report at the end, all others will be compared to it.

By default the first benchmark added to a suite is the baseline, but you can change this by adding `.baseline()` to the registration like so: `PICOBENCH(my_benchmark).baseline()`.
### Samples

Sometimes the code being benchmarked is very sensitive to external factors such as syscalls (which include memory allocation and deallocation). Those external factors can take greatly different amounts of time between runs. In such cases several samples of a benchmark might be needed to more precisely measure the time it takes to complete. By default the library takes two samples of each benchmark, but you can change this by adding `.samples(n)` to the registration like so: `PICOBENCH(my_benchmark).samples(10)`.

Note that the time written to the report is the one of the *fastest* sample.

### Benchmark results

You can set a result for a benchmark using `state::set_result`. Here is an example of this:

```c++
void my_benchmark(picobench::state& s)
{
    int sum = 0;
    for (int i : s)
    {
        sum += myfunc(i);
    }
    s.set_result(sum);
}
```

By default results are not used. You can think of them as data sinks. Optionally, however, you can use them in two ways.

* Compare across samples: By calling `runner::set_compare_results_across_samples` you will make the library compare results between the different samples of a benchmark and trigger an error if they differ.
* Compare across benchmarks: By calling `runner::set_compare_results_across_benchmarks` you can make a more complex comparison which will compare the results from all benchmarks in a suite. You can use this if you compare different ways of calculating the same result.

By default results are compared by simple equality, but you can provide your own comparison function as an argument to `runner::generate_report`. Here is an example:

```c++
void my_benchmark(picobench::state& s)
{
    my_vector2 result;
    for (auto _ : s)
        result += my_vector_op();
    s.set_result(
        // new to preserve value past this function
        reinterpret_cast<result_t>(new my_vector2(result)));
}

bool compare_vectors(result_t a, result_t b)
{
    auto v1 = reinterpret_cast<my_vector2*>(a);
    auto v2 = reinterpret_cast<my_vector2*>(b);
    return v1->x == v2->x && v1->y == v2->y;
}

...

auto report = runner.generate_report(compare_vectors);
```
### Other options

Other characteristics of a benchmark are:

* **Iterations**: (or "problem spaces") a vector of integers describing the set of iterations to be made for a benchmark. Set with `.iterations({i1, i2, i3...})`. The default is {8, 64, 512, 4096, 8196}.
* **Label**: a string which is used for this benchmark in the report instead of the function name. Set with `.label("my label")`
* **User data**: a user-defined number (`uintptr_t`) assigned to a benchmark which can be accessed by `state::user_data`

You can combine the options by concatenating them like this: `PICOBENCH(my_func).label("My Function").samples(2).iterations({1000, 10000, 50000});`

If you write your own main function, you can set the default iterations and samples for all benchmarks with `runner::set_default_state_iterations` and `runner::set_default_samples` *before* calling `runner::run_benchmarks`.

If you parse the command line or use the library-provided `main` function, you can also set the iterations and samples with command line args:
* `--iters=1000,5000,10000` will set the iterations for benchmarks which don't explicitly override them
* `--samples=5` will set the samples for benchmarks which don't explicitly override them

### Other command line arguments

If you're using the library-provided `main` function, it will also handle the following command line arguments:
* `--out-fmt=<txt|con|csv>` - sets the output report format to either full text, concise text, or csv.
* `--output=<filename>` - writes the output report to a given file
* `--compare-results` - will compare results from benchmarks and trigger an error if they don't match.

### Misc

* The runner randomizes the order of the benchmarks. To have the same order on every run and every platform, pass an integer seed to `runner::run_benchmarks`.
Here's another example of a custom main function incorporating the above:

```c++
#define PICOBENCH_IMPLEMENT
#include "picobench/picobench.hpp"
#include <fstream>
...
int main()
{
    // User-defined code which makes global initializations
    custom_global_init();

    picobench::runner runner;
    // Disregard command-line for simplicity

    // Two sets of iterations
    runner.set_default_state_iterations({10000, 50000});

    // One sample per benchmark because the huge numbers are expected to compensate
    // for external factors
    runner.set_default_samples(1);

    // Run the benchmarks with some seed which guarantees the same order every time
    auto report = runner.run_benchmarks(123);

    // Output to some file (a named ofstream, since the report writes to an ostream&)
    std::ofstream out("my.csv");
    report.to_csv(out);

    return 0;
}
```

## Contributing

Contributions in the form of issues and pull requests are welcome.

## License

This software is distributed under the MIT Software License.

See accompanying file LICENSE.txt or copy [here](https://opensource.org/licenses/MIT).

Copyright &copy; 2017-2024 [Borislav Stanimirov](http://github.com/iboB)