{"id":13527517,"url":"https://github.com/benchmark-action/github-action-benchmark","last_synced_at":"2025-04-01T09:31:37.547Z","repository":{"id":39587094,"uuid":"219665796","full_name":"benchmark-action/github-action-benchmark","owner":"benchmark-action","description":"GitHub Action for continuous benchmarking to keep performance","archived":false,"fork":false,"pushed_at":"2024-09-15T18:26:44.000Z","size":5307,"stargazers_count":1011,"open_issues_count":101,"forks_count":152,"subscribers_count":8,"default_branch":"master","last_synced_at":"2024-10-19T07:27:23.436Z","etag":null,"topics":["benchmark","ci","github-action"],"latest_commit_sha":null,"homepage":"https://benchmark-action.github.io/github-action-benchmark/dev/bench/","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/benchmark-action.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-11-05T05:37:42.000Z","updated_at":"2024-10-16T21:56:43.000Z","dependencies_parsed_at":"2024-01-06T07:57:28.101Z","dependency_job_id":"15b4c22b-5882-4fb9-92ea-78be190c0e04","html_url":"https://github.com/benchmark-action/github-action-benchmark","commit_stats":{"total_commits":384,"total_committers":37,"mean_commits":"10.378378378378379","dds":"0.27604166666666663","last_synced_commit":"c3a5e4ec40b6c935e215555c3440abd1b6891164"},"previous_names":["rhysd/github-action-benchmark"],"tags_count":50,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benchmark-action%2Fgithub-action
-benchmark","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benchmark-action%2Fgithub-action-benchmark/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benchmark-action%2Fgithub-action-benchmark/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/benchmark-action%2Fgithub-action-benchmark/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/benchmark-action","download_url":"https://codeload.github.com/benchmark-action/github-action-benchmark/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246616154,"owners_count":20806065,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark","ci","github-action"],"created_at":"2024-08-01T06:01:50.114Z","updated_at":"2025-04-01T09:31:36.835Z","avatar_url":"https://github.com/benchmark-action.png","language":"TypeScript","readme":"GitHub Action for Continuous Benchmarking\n=========================================\n[![Action Marketplace][release-badge]][marketplace]\n[![Build Status][build-badge]][ci]\n[![codecov][codecov-badge]][codecov]\n\n[This repository][proj] provides a [GitHub Action][github-action] for continuous benchmarking.\nIf your project has some benchmark suites, this action collects data from the benchmark outputs\nand monitors the results in your GitHub Actions workflows.\n\n- This action can store collected benchmark results in a [GitHub pages][gh-pages] branch and provide\n  a chart view. 
Benchmark results are visualized on the GitHub pages of your project.\n- This action can detect possible performance regressions by comparing benchmark results. When\n  benchmark results get worse than the previous results by more than the specified threshold, it can raise an alert\n  via a commit comment or a workflow failure.\n\nThis action currently supports the following tools:\n\n- [`cargo bench`][cargo-bench] for Rust projects\n- `go test -bench` for Go projects\n- [benchmark.js][benchmarkjs] for JavaScript/TypeScript projects\n- [pytest-benchmark][] for Python projects with [pytest][]\n- [Google Benchmark Framework][google-benchmark] for C++ projects\n- [Catch2][catch2] for C++ projects\n- [BenchmarkTools.jl][] for Julia packages\n- [Benchmark.Net][benchmarkdotnet] for .Net projects\n- [benchmarkluau](https://github.com/Roblox/luau/tree/master/bench) for Luau projects\n- [JMH][jmh] for Java projects\n- Custom benchmarks where either 'biggerIsBetter' or 'smallerIsBetter' applies\n\nMultiple languages in the same repository are supported for polyglot projects.\n\n[Japanese Blog post](https://rhysd.hatenablog.com/entry/2019/11/11/131505)\n\n\n\n## Examples\n\nExample projects for each language are in the [examples/](./examples) directory. Live example workflow\ndefinitions are in the [.github/workflows/](./.github/workflows) directory. 
Live workflows are:\n\n| Language     | Workflow                                                                                | Example Project                                |\n|--------------|-----------------------------------------------------------------------------------------|------------------------------------------------|\n| Rust         | [![Rust Example Workflow][rust-badge]][rust-workflow-example]                           | [examples/rust](./examples/rust)               |\n| Go           | [![Go Example Workflow][go-badge]][go-workflow-example]                                 | [examples/go](./examples/go)                   |\n| JavaScript   | [![JavaScript Example Workflow][benchmarkjs-badge]][benchmarkjs-workflow-example]       | [examples/benchmarkjs](./examples/benchmarkjs) |\n| Python       | [![pytest-benchmark Example Workflow][pytest-benchmark-badge]][pytest-workflow-example] | [examples/pytest](./examples/pytest)           |\n| C++          | [![C++ Example Workflow][cpp-badge]][cpp-workflow-example]                              | [examples/cpp](./examples/cpp)                 |\n| C++ (Catch2) | [![C++ Catch2 Example Workflow][catch2-badge]][catch2-workflow-example]                 | [examples/catch2](./examples/catch2)           |\n| Julia | [![Julia Example][julia-badge]][julia-workflow-example]                 | [examples/julia](./examples/julia)           |\n| .Net         | [![C# Benchmark.Net Example Workflow][benchmarkdotnet-badge]][benchmarkdotnet-workflow-example] | [examples/benchmarkdotnet](./examples/benchmarkdotnet) |\n| Java         | [![Java Example Workflow][java-badge]][java-workflow-example] | [examples/java](./examples/java) |\n| Luau         | Coming soon | Coming soon |\n\nAll benchmark charts from the above workflows are gathered on GitHub pages:\n\nhttps://benchmark-action.github.io/github-action-benchmark/dev/bench/\n\nAdditionally, even though there is no explicit example for them, you can use\n`customBiggerIsBetter` 
and `customSmallerIsBetter` to feed this\naction your own benchmark data and create your own graphs. The tool name\ndefines which direction \"is better\" for your benchmarks.\n\nEvery entry in the JSON file you provide only needs `name`, `unit`,\nand `value`. You can also provide optional `range` (results' variance) and\n`extra` (any additional information that might be useful to your benchmark's\ncontext) properties. Like this:\n\n```json\n[\n    {\n        \"name\": \"My Custom Smaller Is Better Benchmark - CPU Load\",\n        \"unit\": \"Percent\",\n        \"value\": 50\n    },\n    {\n        \"name\": \"My Custom Smaller Is Better Benchmark - Memory Used\",\n        \"unit\": \"Megabytes\",\n        \"value\": 100,\n        \"range\": \"3\",\n        \"extra\": \"Value for Tooltip: 25\\nOptional Num #2: 100\\nAnything Else!\"\n    }\n]\n```\n\n## Screenshots\n\n### Charts on GitHub Pages\n\n![page screenshot](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/main.png)\n\nHovering over a data point shows a tooltip. It includes:\n\n- Commit hash\n- Commit message\n- Date and committer\n- Benchmark value\n\nClicking a data point in a chart opens the corresponding commit page on GitHub.\n\n![tooltip](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/tooltip.png)\n\nAt the bottom of the page, a download button is available for downloading benchmark results as a JSON file.\n\n![download button](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/download.png)\n\n\n### Alert comment on commit page\n\nThis action can raise [an alert comment][alert-comment-example] on the commit when its benchmark\nresults are worse than the previous ones by more than a specified threshold.\n\n![alert comment](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/alert-comment.png)\n\n\n\n## Why?\n\nPerformance is important. 
Writing benchmarks is a popular and reliable way to track software\nperformance. Benchmarks help us maintain performance and confirm the effects of optimizations.\nTo maintain performance, it's important to monitor benchmark results alongside changes to\nthe software, and to notice performance regressions quickly it's useful to monitor benchmark results\ncontinuously.\n\nHowever, there is no good free tool to watch performance easily and continuously across languages\n(as far as I could find). So I built a new tool on top of GitHub Actions.\n\n\n\n## How to use\n\nThis action takes a file that contains benchmark output and publishes the results to a GitHub Pages\nbranch and/or an alert commit comment.\n\n\n### Minimal setup\n\nLet's start with a minimal workflow setup. For explanation, let's say we have a Go project, but the basic\nsetup is the same for other languages. For language-specific setup, please read the later sections.\n\n```yaml\nname: Minimal setup\non:\n  push:\n    branches:\n      - master\n\njobs:\n  benchmark:\n    name: Performance regression check\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - uses: actions/setup-go@v4\n        with:\n          go-version: \"stable\"\n      # Run benchmark with `go test -bench` and store the output in a file\n      - name: Run benchmark\n        run: go test -bench 'BenchmarkFib' | tee output.txt\n      # Download previous benchmark result from cache (if it exists)\n      - name: Download previous benchmark data\n        uses: actions/cache@v4\n        with:\n          path: ./cache\n          key: ${{ runner.os }}-benchmark\n      # Run `github-action-benchmark` action\n      - name: Store benchmark result\n        uses: benchmark-action/github-action-benchmark@v1\n        with:\n          # What benchmark tool the output.txt came from\n          tool: 'go'\n          # Where the output from the benchmark tool is stored\n          output-file-path: 
output.txt\n          # Where the previous data file is stored\n          external-data-json-path: ./cache/benchmark-data.json\n          # Workflow will fail when an alert happens\n          fail-on-alert: true\n      # Upload the updated cache file for the next job by actions/cache\n```\n\nThe step which runs `github-action-benchmark` does the following:\n\n1. Extract the benchmark result from the output in `output.txt`\n2. Update the downloaded cache file with the extracted result\n3. Compare the result with the previous result. If it is worse than the previous one beyond the 200% threshold,\n   the workflow fails and you are notified of the failure\n\nBy default, this action marks the result as a performance regression when it is worse than the previous one\nbeyond the 200% threshold. For example, if the previous benchmark result was 100 ns/iter and this time\nit is 230 ns/iter, the current result is 230% of the previous one and an alert will happen. The threshold can\nbe changed with the `alert-threshold` input.\n\nA live workflow example is [here](.github/workflows/minimal.yml). 
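The regression check described above boils down to a ratio comparison. The following is a rough sketch of that logic, not the action's actual source; the function and variable names here are hypothetical:

```typescript
// Hypothetical sketch of the alert check: an alert happens when the ratio
// current/previous exceeds the alert-threshold ratio (200%, i.e. 2.0, by default).
function exceedsThreshold(previous: number, current: number, threshold = 2.0): boolean {
  const ratio = current / previous; // e.g. 230 ns/iter vs 100 ns/iter -> 2.3 (230%)
  return ratio > threshold;        // 2.3 > 2.0, so this would raise an alert
}

console.log(exceedsThreshold(100, 230)); // alert: 230% exceeds the default 200%
console.log(exceedsThreshold(100, 150)); // no alert: 150% is within the threshold
```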
And the results of the workflow can\nbe seen [here][minimal-workflow-example].\n\n\n### Commit comment\n\nIn addition to the above setup, a GitHub API token needs to be given to enable the `comment-on-alert` feature.\n\n```yaml\n- name: Store benchmark result\n  uses: benchmark-action/github-action-benchmark@v1\n  with:\n    tool: 'go'\n    output-file-path: output.txt\n    external-data-json-path: ./cache/benchmark-data.json\n    fail-on-alert: true\n    # GitHub API token to make a commit comment\n    github-token: ${{ secrets.GITHUB_TOKEN }}\n    # Enable alert commit comment\n    comment-on-alert: true\n    # Mention @rhysd in the commit comment\n    alert-comment-cc-users: '@rhysd'\n```\n\n`secrets.GITHUB_TOKEN` is [a GitHub API token automatically generated for each workflow run][help-github-token].\nIt is necessary to send a commit comment when the benchmark result of the commit is detected as a possible\nperformance regression.\n\nNow, in addition to making the workflow fail, the step leaves a commit comment when it detects a performance\nregression [like this][alert-comment-example]. Though the `alert-comment-cc-users` input is not mandatory for\nthis, I recommend setting it to make sure you notice the comment via GitHub notifications. Please note\nthat this value must be quoted like `'@rhysd'` because [`@` is an indicator in YAML syntax](https://yaml.org/spec/1.2/spec.html#id2772075).\n\nA live workflow example is [here](.github/workflows/commit-comment.yml). And the results of the workflow\ncan be seen [here][commit-comment-workflow-example].\n\n### Job Summary\n\nSimilar to the [Commit comment](#commit-comment) feature, GitHub Actions [Job Summaries](https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/) are\nalso supported. 
In order to use Job Summaries, turn on the `summary-always`\noption.\n\n```yaml\n- name: Store benchmark result\n  uses: benchmark-action/github-action-benchmark@v1\n  with:\n    tool: 'cargo'\n    output-file-path: output.txt\n    external-data-json-path: ./cache/benchmark-data.json\n    fail-on-alert: true\n    # GitHub API token to make a commit comment\n    github-token: ${{ secrets.GITHUB_TOKEN }}\n    # Enable alert commit comment\n    comment-on-alert: true\n    # Enable Job Summary for PRs\n    summary-always: true\n    # Mention @rhysd in the commit comment\n    alert-comment-cc-users: '@rhysd'\n```\n\n### Charts on GitHub Pages\n\nIt is useful to see how benchmark results change over time in time-series charts. This action\nprovides a chart dashboard on GitHub pages.\n\nThis requires some preparation before the workflow setup.\n\nYou need to create a branch for GitHub Pages if you haven't created it yet.\n\n```sh\n# Create a local branch\n$ git checkout --orphan gh-pages\n# Push it to create a remote branch\n$ git push origin gh-pages:gh-pages\n```\n\nNow you're ready for workflow setup.\n\n```yaml\n# Do not run this workflow on pull requests since this workflow has permission to modify contents.\non:\n  push:\n    branches:\n      - master\n\npermissions:\n  # deployments permission to deploy GitHub pages website\n  deployments: write\n  # contents permission to update benchmark contents in gh-pages branch\n  contents: write\n\njobs:\n  benchmark:\n    name: Performance regression check\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - uses: actions/setup-go@v4\n        with:\n          go-version: \"stable\"\n      # Run benchmark with `go test -bench` and store the output in a file\n      - name: Run benchmark\n        run: go test -bench 'BenchmarkFib' | tee output.txt\n      # gh-pages branch is updated and pushed automatically with extracted benchmark data\n      - name: Store benchmark result\n        
uses: benchmark-action/github-action-benchmark@v1\n        with:\n          name: My Project Go Benchmark\n          tool: 'go'\n          output-file-path: output.txt\n          # Access token to deploy GitHub Pages branch\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n          # Push and deploy GitHub pages branch automatically\n          auto-push: true\n```\n\nThe step which runs `github-action-benchmark` does the following:\n\n1. Extract the benchmark result from the output in `output.txt`\n2. Switch branch to `gh-pages`\n3. Read existing benchmark results from `dev/bench/data.js`\n4. Update `dev/bench/data.js` with the extracted benchmark result\n5. Generate a commit to store the update in the `gh-pages` branch\n6. Push the `gh-pages` branch to the remote\n7. Compare the results with previous results and raise an alert if a possible performance regression is detected\n\nAfter the first workflow run, you will get the first result on `https://you.github.io/repo/dev/bench`\n[like this][examples-page].\n\nBy default, this action assumes that `gh-pages` is your GitHub Pages branch and that `/dev/bench` is\nthe path for the benchmark dashboard page. If they don't fit your use case, please tweak them with the\n`gh-pages-branch`, `gh-repository` and `benchmark-data-dir-path` inputs.\n\nThis action merges all benchmark results into one GitHub pages branch. 
If your workflows have multiple\nsteps to check benchmarks from multiple tools, please give a `name` input to each step so that each\nbenchmark's results are identifiable.\n\nPlease see the ['Examples' section](#examples) above for live workflow examples for each language.\n\nIf you don't want to pass a GitHub API token to this action, that's still OK.\n\n```yaml\n- name: Store benchmark result\n  uses: benchmark-action/github-action-benchmark@v1\n  with:\n    name: My Project Go Benchmark\n    tool: 'go'\n    output-file-path: output.txt\n    # Set auto-push to false since GitHub API token is not given\n    auto-push: false\n# Push gh-pages branch by yourself\n- name: Push benchmark result\n  run: git push 'https://you:${{ secrets.GITHUB_TOKEN }}@github.com/you/repo-name.git' gh-pages:gh-pages\n```\n\nPlease add a step to push the branch to the remote.\n\n\n### Tool specific setup\n\nPlease read the `README.md` file in each example directory. Usually, take the stdout from a benchmark tool\nand store it in a file. 
Then specify the file path in the `output-file-path` input.\n\n- [`cargo bench` for Rust projects](./examples/rust/README.md)\n- [`go test` for Go projects](./examples/go/README.md)\n- [Benchmark.js for JavaScript/TypeScript projects](./examples/benchmarkjs/README.md)\n- [pytest-benchmark for Python projects with pytest](./examples/pytest/README.md)\n- [Google Benchmark Framework for C++ projects](./examples/cpp/README.md)\n- [catch2 for C++ projects](./examples/catch2/README.md)\n- [BenchmarkTools.jl for Julia projects](./examples/julia/README.md)\n- [Benchmark.Net for .Net projects](./examples/benchmarkdotnet/README.md)\n- [benchmarkluau for Luau projects](#) - Examples for this are still a work in progress.\n\nThese examples are run in the workflows of this repository as described in the 'Examples' section above.\n\n\n### Action inputs\n\nInput definitions are written in [action.yml](./action.yml).\n\n#### `name` (Required)\n\n- Type: String\n- Default: `\"Benchmark\"`\n\nName of the benchmark. This value must be identical across all benchmarks in your repository.\n\n#### `tool` (Required)\n\n- Type: String\n- Default: N/A\n\nTool used for running the benchmarks. The value must be one of `\"cargo\"`, `\"go\"`, `\"benchmarkjs\"`, `\"pytest\"`,\n`\"googlecpp\"`, `\"catch2\"`, `\"julia\"`, `\"jmh\"`, `\"benchmarkdotnet\"`, `\"benchmarkluau\"`, `\"customBiggerIsBetter\"`, `\"customSmallerIsBetter\"`.\n\n#### `output-file-path` (Required)\n\n- Type: String\n- Default: N/A\n\nPath to a file which contains the output from the benchmark tool. 
The path can be relative to the repository root.\n\n#### `gh-pages-branch` (Required)\n\n- Type: String\n- Default: `\"gh-pages\"`\n\nName of your GitHub pages branch.\n\nNote: If you're using the `docs/` directory of the `master` branch for GitHub pages, please set `gh-pages-branch`\nto `master` and `benchmark-data-dir-path` to the directory under `docs` like `docs/dev/bench`.\n\n#### `gh-repository`\n\n- Type: String\n\nURL of an optional different repository in which to store benchmark results (e.g. `github.com/benchmark-action/github-action-benchmark-results`).\n\nNOTE: If you want to auto-push to a different repository, you need to use a separate Personal Access Token that has write access to the specified repository.\nIf you are not using the `auto-push` option, you can avoid passing the `github-token` if your data repository is public.\n\n#### `benchmark-data-dir-path` (Required)\n\n- Type: String\n- Default: `\"dev/bench\"`\n\nPath to a directory that contains benchmark files on the GitHub pages branch. For example, when this value\nis set to `\"path/to/bench\"`, `https://you.github.io/repo-name/path/to/bench` will be available as the benchmark\ndashboard page. If it does not contain `index.html`, this action automatically generates it on the first run.\nThe path can be relative to the repository root.\n\n#### `github-token` (Optional)\n\n- Type: String\n- Default: N/A\n\nGitHub API access token.\n\n#### `ref` (Optional)\n\n- Type: String\n- Default: N/A\n\nRef to use for reporting the commit.\n\n#### `auto-push` (Optional)\n\n- Type: Boolean\n- Default: `false`\n\nIf it is set to `true`, this action automatically pushes the generated commit to the GitHub Pages branch.\nOtherwise, you need to push it on your own. 
Please read the 'Charts on GitHub Pages' section above for more details.\n\n#### `comment-always` (Optional)\n\n- Type: Boolean\n- Default: `false`\n\nIf it is set to `true`, this action will leave a commit comment comparing the current benchmark with the previous one.\n`github-token` is necessary as well.\n\n#### `save-data-file` (Optional)\n\n- Type: Boolean\n- Default: `true`\n\nIf it is set to `false`, this action will not save the current benchmark to the external data file.\nYou can use this option to set up your action to compare the benchmarks between a PR and its base branch.\n\n#### `alert-threshold` (Optional)\n\n- Type: String\n- Default: `\"200%\"`\n\nPercentage value like `\"150%\"`. It is a ratio indicating how much worse the current benchmark result is.\nFor example, if we now get `150 ns/iter` and previously got `100 ns/iter`, the current result is `150%` of the previous one.\n\nIf the current benchmark result is worse than the previous one beyond the threshold, an alert will happen.\nSee also `comment-on-alert` and `fail-on-alert`.\n\n#### `comment-on-alert` (Optional)\n\n- Type: Boolean\n- Default: `false`\n\nIf it is set to `true`, this action will leave a commit comment when an alert happens [like this][alert-comment-example].\n`github-token` is necessary as well. For the threshold, please see `alert-threshold` as well.\n\n#### `fail-on-alert` (Optional)\n\n- Type: Boolean\n- Default: `false`\n\nIf it is set to `true`, the workflow will fail when an alert happens. For the threshold, please\nsee `alert-threshold` and `fail-threshold` as well.\n\n#### `fail-threshold` (Optional)\n\n- Type: String\n- Default: The same value as `alert-threshold`\n\nPercentage value in the same format as `alert-threshold`. If this value is set, it\nwill be used to determine if the workflow should fail. The default is the same value as the\n`alert-threshold` input. 
**This value must be equal to or larger than the `alert-threshold` value.**\n\n#### `alert-comment-cc-users` (Optional)\n\n- Type: String\n- Default: N/A\n\nComma-separated GitHub user names mentioned in the alert commit comment like `\"@foo,@bar\"`. These users\nwill be mentioned in a commit comment when an alert happens. For configuring alerts, please see\n`alert-threshold` and `comment-on-alert` as well.\n\n#### `external-data-json-path` (Optional)\n\n- Type: String\n- Default: N/A\n\nExternal JSON file which contains the benchmark results from previous job runs. When this value is set,\nthis action updates the file content instead of generating a Git commit in the GitHub Pages branch.\nThis option is useful if you don't want to put benchmark results in the GitHub Pages branch. Instead,\nyou need to persist the JSON file across job runs. One option is using a workflow cache\nwith the `actions/cache` action. Please read the 'Minimal setup' section above.\n\n#### `max-items-in-chart` (Optional)\n\n- Type: Number\n- Default: N/A\n\nMaximum number of data points in a chart, to avoid overly busy charts. This value must be an unsigned integer\nlarger than zero. If the number of benchmark results for some benchmark suite exceeds this value,\nthe oldest ones will be removed before storing the results to the file. By default this value is empty,\nwhich means there is no limit.\n\n#### `skip-fetch-gh-pages` (Optional)\n\n- Type: Boolean\n- Default: `false`\n\nIf set to `true`, the workflow will skip fetching the branch defined by the `gh-pages-branch` input.\n\n\n### Action outputs\n\nNo action output is set by this action for the parent GitHub workflow.\n\n\n### Caveats\n\n#### Run only on your branches\n\nPlease ensure that your benchmark workflow runs only on your branches. Please avoid running it on\npull requests. 
If the workflow pushed to the GitHub pages branch on a pull request, anyone who creates\na pull request on your repository could modify your GitHub pages branch.\n\nFor this, you can specify the branches that run your benchmark workflow in the `on:` section, or set a\nproper condition in the `if:` section of the step which pushes to GitHub pages.\n\ne.g. run only on the `master` branch\n\n```yaml\non:\n  push:\n    branches:\n      - master\n```\n\ne.g. push only when not running for a pull request\n\n```yaml\n- name: Push benchmark result\n  run: git push ...\n  if: github.event_name != 'pull_request'\n```\n\n#### Stability of Virtual Environment\n\nJudging from the benchmark results of the examples in this repository, the variance of the benchmarks\nis about ±10-20%. If your benchmarks use resources such as networks or file I/O, the variance\nmight be bigger.\n\nIf the variance is not acceptable, please prepare a stable environment to run benchmarks.\nGitHub Actions supports [self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners).\n\n\n### Customizing the benchmark results page\n\nThis action creates the default `index.html` in the directory specified with the `benchmark-data-dir-path`\ninput. By default, every benchmark test case has its own chart on the page. Charts are drawn with\n[Chart.js](https://www.chartjs.org/).\n\nIf it does not fit your use case, please modify the HTML file or replace it with your favorite one.\nAll benchmark data is stored in `window.BENCHMARK_DATA`, so you can create your own view.\n\n\n### Versioning\n\nThis action conforms to Semantic Versioning 2.0.\n\nFor example, `benchmark-action/github-action-benchmark@v1` means the latest version of `1.x.y`. 
And\n`benchmark-action/github-action-benchmark@v1.0.2` always uses `v1.0.2` even if a newer version is published.\n\nThe `master` branch of this repository is for development and does not work as an action.\n\n\n### Track updates of this action\n\nTo be notified of new releases, please [watch 'releases only'][help-watch-release] at [this repository][proj].\nEvery release will appear on your GitHub notifications page.\n\n\n\n## Future work\n\n- Support pull requests. Instead of updating GitHub pages, add a comment to the pull request to explain\n  benchmark results.\n- Add more benchmark tools:\n  - [airspeed-velocity Python benchmarking tool](https://github.com/airspeed-velocity/asv)\n- Allow uploading results to metrics services such as [mackerel](https://en.mackerel.io/)\n- Show extracted benchmark data in the output from this action\n- Add a table view to the dashboard page to see all data points in a table\n\n\n\n## Related actions\n\n- [lighthouse-ci-action][] is an action for [Lighthouse CI][lighthouse-ci]. 
If you're measuring performance\n  of your web application, using Lighthouse CI and lighthouse-ci-action would be better than using this\n  action.\n\n\n\n## License\n\n[the MIT License](./LICENSE.txt)\n\n\n\n[build-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/ci.yml/badge.svg\n[ci]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3ACI\n[codecov-badge]: https://codecov.io/gh/benchmark-action/github-action-benchmark/branch/master/graph/badge.svg\n[codecov]: https://app.codecov.io/gh/benchmark-action/github-action-benchmark\n[release-badge]: https://img.shields.io/github/v/release/benchmark-action/github-action-benchmark.svg\n[marketplace]: https://github.com/marketplace/actions/continuous-benchmark\n[proj]: https://github.com/benchmark-action/github-action-benchmark\n[rust-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/rust.yml/badge.svg\n[go-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/go.yml/badge.svg\n[benchmarkjs-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/benchmarkjs.yml/badge.svg\n[pytest-benchmark-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/pytest.yml/badge.svg\n[cpp-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/cpp.yml/badge.svg\n[catch2-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/catch2.yml/badge.svg\n[julia-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/julia.yml/badge.svg\n[java-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/java.yml/badge.svg\n[github-action]: https://github.com/features/actions\n[cargo-bench]: https://doc.rust-lang.org/cargo/commands/cargo-bench.html\n[benchmarkjs]: https://benchmarkjs.com/\n[gh-pages]: 
https://pages.github.com/\n[examples-page]: https://benchmark-action.github.io/github-action-benchmark/dev/bench/\n[pytest-benchmark]: https://pypi.org/project/pytest-benchmark/\n[pytest]: https://pypi.org/project/pytest/\n[alert-comment-example]: https://github.com/benchmark-action/github-action-benchmark/commit/077dde1c236baba9244caad4d9e82ea8399dae20#commitcomment-36047186\n[rust-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Rust+Example%22\n[go-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Go+Example%22\n[benchmarkjs-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Benchmark.js+Example%22\n[pytest-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Python+Example+with+pytest%22\n[cpp-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22C%2B%2B+Example%22\n[catch2-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Catch2+C%2B%2B+Example%22\n[julia-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Julia+Example+with+BenchmarkTools.jl%22\n[java-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22JMH+Example%22\n[help-watch-release]: https://docs.github.com/en/github/receiving-notifications-about-activity-on-github/watching-and-unwatching-releases-for-a-repository\n[help-github-token]: https://docs.github.com/en/actions/security-guides/automatic-token-authentication\n[minimal-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Example+for+minimal+setup%22\n[commit-comment-workflow-example]: 
https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Example+for+alert+with+commit+comment%22\n[google-benchmark]: https://github.com/google/benchmark\n[catch2]: https://github.com/catchorg/Catch2\n[jmh]: https://openjdk.java.net/projects/code-tools/jmh/\n[lighthouse-ci-action]: https://github.com/treosh/lighthouse-ci-action\n[lighthouse-ci]: https://github.com/GoogleChrome/lighthouse-ci\n[BenchmarkTools.jl]: https://github.com/JuliaCI/BaseBenchmarks.jl\n[benchmarkdotnet]: https://benchmarkdotnet.org\n[benchmarkdotnet-badge]: https://github.com/benchmark-action/github-action-benchmark/actions/workflows/benchmarkdotnet.yml/badge.svg\n[benchmarkdotnet-workflow-example]: https://github.com/rhysd/github-action-benchmark/actions?query=workflow%3A%22Benchmark.Net+Example%22\n[job-summaries]: https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/\n","funding_links":[],"categories":["TypeScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenchmark-action%2Fgithub-action-benchmark","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbenchmark-action%2Fgithub-action-benchmark","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenchmark-action%2Fgithub-action-benchmark/lists"}