{"id":13459354,"url":"https://github.com/hatoo/oha","last_synced_at":"2025-05-13T15:04:36.247Z","repository":{"id":38846176,"uuid":"244377430","full_name":"hatoo/oha","owner":"hatoo","description":"Ohayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.","archived":false,"fork":false,"pushed_at":"2025-04-21T02:57:29.000Z","size":9423,"stargazers_count":8169,"open_issues_count":49,"forks_count":233,"subscribers_count":25,"default_branch":"master","last_synced_at":"2025-05-05T22:41:50.244Z","etag":null,"topics":["benchmark","cli","command-line","http","http2","load-generator","load-testing","rust","tui"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hatoo.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"hatoo","patreon":null,"open_collective":null,"ko_fi":"hatoo","tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"lfx_crowdfunding":null,"custom":null}},"created_at":"2020-03-02T13:26:35.000Z","updated_at":"2025-05-05T14:47:14.000Z","dependencies_parsed_at":"2024-01-30T04:29:28.552Z","dependency_job_id":"f7da8949-9b03-4869-9a33-9813082b2240","html_url":"https://github.com/hatoo/oha","commit_stats":{"total_commits":478,"total_committers":17,"mean_commits":28.11764705882353,"dds":0.07531380753138073,"last_synced_commit":"f342f46d6e3f223f389205144e3a68510460627f"},"previous_names":[],"tags_count":68,"template":false,"template_full_name":null,"repository_url":"https://repos.eco
syste.ms/api/v1/hosts/GitHub/repositories/hatoo%2Foha","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hatoo%2Foha/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hatoo%2Foha/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hatoo%2Foha/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hatoo","download_url":"https://codeload.github.com/hatoo/oha/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253968885,"owners_count":21992259,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark","cli","command-line","http","http2","load-generator","load-testing","rust","tui"],"created_at":"2024-07-31T09:01:18.041Z","updated_at":"2025-05-13T15:04:36.218Z","avatar_url":"https://github.com/hatoo.png","language":"Rust","readme":"# oha (おはよう)\n\n[![GitHub Actions](https://github.com/hatoo/oha/workflows/CI/badge.svg)](https://github.com/hatoo/oha/actions?query=workflow%3ACI)\n[![Crates.io](https://img.shields.io/crates/v/oha.svg)](https://crates.io/crates/oha)\n[![Arch Linux](https://img.shields.io/archlinux/v/extra/x86_64/oha)](https://archlinux.org/packages/extra/x86_64/oha/)\n[![Homebrew](https://img.shields.io/homebrew/v/oha)](https://formulae.brew.sh/formula/oha)\n\n[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/hatoo)\n\noha is a tiny program that sends some load to a web application and shows a realtime TUI, inspired by [rakyll/hey](https://github.com/rakyll/hey).\n\nThis program 
is written in Rust and powered by [tokio](https://github.com/tokio-rs/tokio), with a beautiful TUI by [ratatui](https://github.com/ratatui-org/ratatui).\n\n![demo](demo.gif)\n\n# Installation\n\nThis program is built on stable Rust and requires both `make` and `cmake` to install via cargo.\n\n    cargo install oha\n\nYou can optionally build oha against [native-tls](https://github.com/sfackler/rust-native-tls) instead of [rustls](https://github.com/rustls/rustls).\n\n    cargo install --no-default-features --features native-tls oha\n\nYou can enable VSOCK support by enabling the `vsock` feature.\n\n    cargo install --features vsock oha\n\n## Download pre-built binary\n\nYou can download a pre-built binary from the [Release page](https://github.com/hatoo/oha/releases) for each version, and from the [Publish workflow](https://github.com/hatoo/oha/actions/workflows/release.yml) and [Publish PGO workflow](https://github.com/hatoo/oha/actions/workflows/release-pgo.yml) for each commit.\n\n## On Arch Linux\n\n    pacman -S oha\n\n## On macOS (Homebrew)\n\n    brew install oha\n\n## On Windows (winget)\n\n    winget install hatoo.oha\n\n## On Debian ([Azlux's repository](http://packages.azlux.fr/))\n\n    echo \"deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ stable main\" | sudo tee /etc/apt/sources.list.d/azlux.list\n    sudo wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg\n    apt update\n    apt install oha\n\n## X-CMD (Linux, macOS, Windows WSL/GitBash)\n\nYou can install with [x-cmd](https://www.x-cmd.com).\n\n```sh\nx env use oha\n```\n\n## Containerized\n\nYou can also build a container image including oha:\n\n```sh\ndocker build . 
-t example.com/hatoo/oha:latest\n```\n\nThen you can use oha directly through the container:\n\n```sh\ndocker run -it example.com/hatoo/oha:latest https://example.com:3000\n```\n\n## Profile-Guided Optimization (PGO)\n\nYou can build `oha` with PGO by using the following command:\n\n```sh\nbun run pgo.js\n```\n\nAnd the binary will be available at `target/[target-triple]/pgo/oha`.\n\n# Platform\n\n- Linux - Tested on Ubuntu 18.04 gnome-terminal\n- Windows 10 - Tested on Windows PowerShell\n- macOS - Tested on iTerm2\n\n# Usage\n\nThe `-q` option works differently from [rakyll/hey](https://github.com/rakyll/hey): it sets the overall queries per second rather than a per-worker rate.\n\n```sh\nOhayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.\n\nUsage: oha [OPTIONS] \u003cURL\u003e\n\nArguments:\n  \u003cURL\u003e  Target URL or file with multiple URLs.\n\nOptions:\n  -n \u003cN_REQUESTS\u003e\n          Number of requests to run. [default: 200]\n  -c \u003cN_CONNECTIONS\u003e\n          Number of connections to run concurrently. You may should increase limit to number of open files for larger `-c`. [default: 50]\n  -p \u003cN_HTTP2_PARALLEL\u003e\n          Number of parallel requests to send on HTTP/2. `oha` will run c * p concurrent workers in total. [default: 1]\n  -z \u003cDURATION\u003e\n          Duration of application to send requests. If duration is specified, n is ignored.\n          On HTTP/1, When the duration is reached, ongoing requests are aborted and counted as \"aborted due to deadline\"\n          You can change this behavior with `-w` option.\n          Currently, on HTTP/2, When the duration is reached, ongoing requests are waited. 
`-w` option is ignored.\n          Examples: -z 10s -z 3m.\n  -w, --wait-ongoing-requests-after-deadline\n          When the duration is reached, ongoing requests are waited\n  -q \u003cQUERY_PER_SECOND\u003e\n          Rate limit for all, in queries per second (QPS)\n      --burst-delay \u003cBURST_DURATION\u003e\n          Introduce delay between a predefined number of requests.\n          Note: If qps is specified, burst will be ignored\n      --burst-rate \u003cBURST_REQUESTS\u003e\n          Rates of requests for burst. Default is 1\n          Note: If qps is specified, burst will be ignored\n      --rand-regex-url\n          Generate URL by rand_regex crate but dot is disabled for each query e.g. http://127.0.0.1/[a-z][a-z][0-9]. Currently dynamic scheme, host and port with keep-alive do not work well. See https://docs.rs/rand_regex/latest/rand_regex/struct.Regex.html for details of syntax.\n      --urls-from-file\n          Read the URLs to query from a file\n      --max-repeat \u003cMAX_REPEAT\u003e\n          A parameter for the '--rand-regex-url'. The max_repeat parameter gives the maximum extra repeat counts the x*, x+ and x{n,} operators will become. [default: 4]\n      --dump-urls \u003cDUMP_URLS\u003e\n          Dump target Urls \u003cDUMP_URLS\u003e times to debug --rand-regex-url\n      --latency-correction\n          Correct latency to avoid coordinated omission problem. It's ignored if -q is not set.\n      --no-tui\n          No realtime tui\n      --fps \u003cFPS\u003e\n          Frame per second for tui. [default: 16]\n  -m, --method \u003cMETHOD\u003e\n          HTTP method [default: GET]\n  -H \u003cHEADERS\u003e\n          Custom HTTP header. Examples: -H \"foo: bar\"\n      --proxy-header \u003cPROXY_HEADERS\u003e\n          Custom Proxy HTTP header. Examples: --proxy-header \"foo: bar\"\n  -t \u003cTIMEOUT\u003e\n          Timeout for each request. 
Default to infinite.\n  -A \u003cACCEPT_HEADER\u003e\n          HTTP Accept Header.\n  -d \u003cBODY_STRING\u003e\n          HTTP request body.\n  -D \u003cBODY_PATH\u003e\n          HTTP request body from file.\n  -T \u003cCONTENT_TYPE\u003e\n          Content-Type.\n  -a \u003cBASIC_AUTH\u003e\n          Basic authentication (username:password), or AWS credentials (access_key:secret_key)\n      --aws-session \u003cAWS_SESSION\u003e\n          AWS session token\n      --aws-sigv4 \u003cAWS_SIGV4\u003e\n          AWS SigV4 signing params (format: aws:amz:region:service)\n  -x \u003cPROXY\u003e\n          HTTP proxy\n      --proxy-http-version \u003cPROXY_HTTP_VERSION\u003e\n          HTTP version to connect to proxy. Available values 0.9, 1.0, 1.1, 2.\n      --proxy-http2\n          Use HTTP/2 to connect to proxy. Shorthand for --proxy-http-version=2\n      --http-version \u003cHTTP_VERSION\u003e\n          HTTP version. Available values 0.9, 1.0, 1.1, 2.\n      --http2\n          Use HTTP/2. Shorthand for --http-version=2\n      --host \u003cHOST\u003e\n          HTTP Host header\n      --disable-compression\n          Disable compression.\n  -r, --redirect \u003cREDIRECT\u003e\n          Limit for number of Redirect. Set 0 for no redirection. Redirection isn't supported for HTTP/2. [default: 10]\n      --disable-keepalive\n          Disable keep-alive, prevents re-use of TCP connections between different HTTP requests. This isn't supported for HTTP/2.\n      --no-pre-lookup\n          *Not* perform a DNS lookup at beginning to cache it\n      --ipv6\n          Lookup only ipv6.\n      --ipv4\n          Lookup only ipv4.\n      --cacert \u003cCACERT\u003e\n          (TLS) Use the specified certificate file to verify the peer. Native certificate store is used even if this argument is specified.\n      --cert \u003cCERT\u003e\n          (TLS) Use the specified client certificate file. 
--key must be also specified\n      --key \u003cKEY\u003e\n          (TLS) Use the specified client key file. --cert must be also specified\n      --insecure\n          Accept invalid certs.\n      --connect-to \u003cCONNECT_TO\u003e\n          Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443'\n          Note: if used several times for the same host:port:target_host:target_port, a random choice is made\n      --disable-color\n          Disable the color scheme.\n      --unix-socket \u003cUNIX_SOCKET\u003e\n          Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs.\n      --stats-success-breakdown\n          Include a response status code successful or not successful breakdown for the time histogram and distribution statistics\n      --db-url \u003cDB_URL\u003e\n          Write succeeded requests to sqlite database url E.G test.db\n      --debug\n          Perform a single request and dump the request and response\n  -o, --output \u003cOUTPUT\u003e\n          Output file to write the results to. If not specified, results are written to stdout.\n      --output-format \u003cOUTPUT_FORMAT\u003e\n          Output format, either 'text', 'json' or 'csv'. 
[default 'text']\n  -h, --help\n          Print help\n  -V, --version\n          Print version\n```\n\n# Performance\n\n`oha` uses a faster implementation when the `--no-tui` option is set and neither `-q` nor `--burst-delay` is set, because it can avoid the overhead of gathering data in realtime.\n\n# Output\n\nBy default `oha` outputs a text summary of the results.\n\n`oha` prints a JSON summary when the `--output-format json` option is set.\nThe schema of the JSON output is defined in [schema.json](./schema.json).\n\nWhen `--output-format csv` is used, the result of each request is printed as a line of comma-separated values.\n\n# Tips\n\n## Stress test in more realistic conditions\n\n`oha` uses default options inherited from [rakyll/hey](https://github.com/rakyll/hey), but you may need to change the options to stress test under more realistic conditions.\n\nI suggest running `oha` with the following options.\n\n```sh\noha \u003c-z or -n\u003e -c \u003cnumber of concurrent connections\u003e -q \u003cquery per seconds\u003e --latency-correction --disable-keepalive \u003ctarget-address\u003e\n```\n\n- --disable-keepalive\n\n    In reality, a user doesn't repeatedly query the same URL over a [Keep-Alive](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive) connection. 
You may want to run without `Keep-Alive`.\n- --latency-correction\n\n    You can avoid the coordinated omission problem by using `--latency-correction`.\n\n## Burst feature\n\nYou can use `--burst-delay` along with the `--burst-rate` option to introduce a delay between bursts of a defined number of requests.\n\n```sh\noha -n 10 --burst-delay 2s --burst-rate 4\n```\n\nIn this particular scenario, 4 requests are processed every 2 seconds, so after 6 seconds all 10 requests have been processed.\n*NOTE: If you don't set the `--burst-rate` option, it defaults to 1.*\n\n## Dynamic URL feature\n\nYou can use the `--rand-regex-url` option to generate a random URL for each query.\n\n```sh\noha --rand-regex-url http://127.0.0.1/[a-z][a-z][0-9]\n```\n\nEach URL is generated by the [rand_regex](https://github.com/kennytm/rand_regex) crate, but the regex dot operator is disabled, since it's not useful for this purpose and it would be very inconvenient if the dots in URLs were interpreted as regex dots.\n\nOptionally, you can set the `--max-repeat` option to limit the maximum repeat count for each regex operator. E.g. http://127.0.0.1/[a-z]* with `--max-repeat 4` will generate URLs as if the pattern were http://127.0.0.1/[a-z]{0,4}.\n\nCurrently, dynamic scheme, host, and port do not work well with keep-alive.\n\n## URLs from file feature\n\nYou can use `--urls-from-file` to read the target URLs from a file. Each line of this file needs to contain one valid URL, as in the example below.\n\n```\nhttp://domain.tld/foo/bar\nhttp://domain.tld/assets/vendors-node_modules_highlight_js_lib_index_js-node_modules_tanstack_react-query_build_modern-3fdf40-591fb51c8a6e.js\nhttp://domain.tld/images/test.png\nhttp://domain.tld/foo/bar?q=test\nhttp://domain.tld/foo\n```\n\nSuch a file can, for example, be created from an access log to generate a more realistic load distribution over the different pages of a server. 
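That access-log workflow can be sketched with standard tools. In this hypothetical example, the combined log format (request path in the 7th whitespace-separated field), the `domain.tld` host, and the `access.log`/`urls.txt` file names are all illustrative assumptions, not part of oha itself:

```sh
# Create a sample access-log line (combined log format) for illustration.
printf '%s\n' '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /foo/bar HTTP/1.0" 200 2326' > access.log

# The request path is the 7th whitespace-separated field; prefixing a host
# turns each log line into a URL usable with --urls-from-file.
awk '{ print "http://domain.tld" $7 }' access.log > urls.txt

cat urls.txt   # http://domain.tld/foo/bar
```

Leaving duplicate lines in place can be desirable: since every request picks a random line from the file, repeated URLs reproduce the relative popularity of pages in the original traffic.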
\n\nWhen this type of URL specification is used, every request goes to a random URL given in the file.\n\n# Contribution\n\nFeel free to help us!\n\nHere are some areas that need improvement.\n\n- Write tests\n- Improve TUI design.\n  - Show more information?\n- Improve speed\n  - I'm new to tokio. I think there is some room to optimize query scheduling.\n","funding_links":["https://github.com/sponsors/hatoo","https://ko-fi.com/hatoo"],"categories":["Rust","Table of Contents","\u003ca name=\"performance\"\u003e\u003c/a\u003eperformance","Uncategorized","cli","rust","💻 Apps","\u003ca name=\"networking\"\u003e\u003c/a\u003eNetworking","http","Benchmark Configuration"],"sub_categories":["Uncategorized","🌐 Networking and Internet"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhatoo%2Foha","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhatoo%2Foha","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhatoo%2Foha/lists"}