![warp](https://raw.githubusercontent.com/minio/warp/master/warp_logo.png)

S3 benchmarking tool.

# Download

## From binary
[Download Binary Releases](https://github.com/minio/warp/releases) for various platforms.

## Build from source

Warp requires Go `1.21` or later; please ensure you have a compatible version for this build.

Follow the steps below to build the project:
- Clone the project
```
λ git clone https://github.com/minio/warp.git
```
- Change directory and build
```
λ cd warp && go build
```
- To run a test, run
```
λ ./warp [options]
```
# Configuration

Warp can be configured either using commandline parameters or environment variables.
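For example, the connection settings described in this section can be supplied as environment variables instead of flags; a minimal sketch with placeholder endpoint and credentials (substitute your own):

```shell
# Placeholder values: substitute your own endpoint and keys.
export WARP_HOST=127.0.0.1:9000
export WARP_ACCESS_KEY=minio
export WARP_SECRET_KEY=minio123
# Any warp command run from this shell now picks up these settings,
# equivalent to passing --host, --access-key and --secret-key.
echo "benchmarking against $WARP_HOST"
```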
The S3 server to use can be specified on the commandline using `--host`, `--access-key`,
`--secret-key` and optionally `--tls` and `--region` to specify TLS and a custom region.

It is also possible to set the same parameters using the `WARP_HOST`, `WARP_ACCESS_KEY`,
`WARP_SECRET_KEY`, `WARP_REGION` and `WARP_TLS` environment variables.

The credentials must be able to create, delete and list buckets, upload files, and perform the requested operations.

By default, operations are performed on a bucket called `warp-benchmark-bucket`.
This can be changed using the `--bucket` parameter.

> [!WARNING]
> Note the bucket will be *completely wiped* before and after each run, so it should **not** contain any data.

If you are [running TLS](https://docs.min.io/docs/how-to-secure-access-to-minio-server-with-tls.html),
you can enable [server-side-encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html)
of objects using `--encrypt`.
A random key will be generated and used for objects.
To use [SSE-S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) encryption, use the `--sse-s3-encrypt` flag.

If your server is incompatible with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html), the older v2 signatures can be used with `--signature=S3V2`.

# Usage

`λ warp command [options]`

Example running a mixed-type benchmark against 8 servers named `s3-server-1` to `s3-server-8`
on port 9000 with the provided keys:

`λ warp mixed --host=s3-server{1...8}:9000 --access-key=minio --secret-key=minio123 --autoterm`

This will run the benchmark for up to 5 minutes and print the results.

## YAML configuration

As an alternative configuration option, you can use an on-disk YAML configuration file.

See [yml-samples](https://github.com/minio/warp/tree/master/yml-samples) for a collection of
configuration files for each benchmark type.

To run a benchmark use `λ warp run <file.yml>`.

Values can be injected from the commandline using one or multiple `-var VarName=Value`.
These values can be referenced inside YAML files with `{{.VarName}}`.
Go [text templates](https://pkg.go.dev/text/template) are used for this.

# Benchmarks

All benchmarks operate concurrently. By default, 20 operations will run concurrently.
This can be tweaked using the `--concurrent` parameter.

Tweaking concurrency can have an impact on performance, especially if latency to the server is tested.
Most benchmarks will also use different prefixes for each "thread" running.

By default, all benchmarks save all request details to a file named `warp-operation-yyyy-mm-dd[hhmmss]-xxxx.csv.zst`.
A custom file name can be specified using the `--benchdata` parameter.
The raw data is [zstandard](https://facebook.github.io/zstd/) compressed CSV data.

## Multiple Hosts

Multiple S3 hosts can be specified as comma-separated values, for instance
`--host=10.0.0.1:9000,10.0.0.2:9000` will switch between the specified servers.

Alternatively, numerical ranges can be specified using `--host=10.0.0.{1...10}:9000`, which will add
`10.0.0.1` through `10.0.0.10`. This syntax can be used for any part of the host name and port.

A file with newline-separated hosts can also be specified using the `file:` prefix and a file name.
For distributed tests the file will be read locally and sent to each client.

By default, a host is chosen among the hosts with the fewest running requests
and the longest time since the last request finished. This ensures that when
hosts operate at different speeds, the fastest servers will get the most requests.
It is possible to choose a simple round-robin algorithm by using the `--host-select=roundrobin` parameter.
If there is only one host this parameter has no effect.

When benchmarks are done, per-host averages will be printed out.
For further details, the `--analyze.v` parameter can also be used.

# Distributed Benchmarking

![distributed](https://raw.githubusercontent.com/minio/warp/master/arch_warp.png)

It is possible to coordinate several warp instances automatically.
This can be useful for testing the performance of a cluster from several clients at once.

For reliable benchmarks, clients should have synchronized clocks.
Warp checks whether clocks are within one second of the server,
but ideally, clocks should be synchronized with [NTP](http://www.ntp.org/) or a similar service.

To use Kubernetes see [Running warp on kubernetes](https://github.com/minio/warp/blob/master/k8s/README.md).

## Client Setup

WARNING: Never run warp clients on a publicly exposed port.
Clients have the potential to DDoS any service.

Clients are started with

```
λ warp client [listenaddress:port]
```

`warp client` only accepts an optional host/IP to listen on, but otherwise no specific parameters.
By default warp will listen on `127.0.0.1:7761`.

Only one server can be connected at a time.
However, when a benchmark is done, the client can immediately run another one with different parameters.

There will be a version check to ensure that clients are compatible with the server,
but it is always recommended to keep warp versions the same.

## Server Setup

Any benchmark can be run in server mode.
When warp is invoked as a server no actual benchmarking will be done on the server.
Each client will execute the benchmark.

The server will coordinate the benchmark runs and make sure they are run correctly.

When the benchmark has finished, the combined benchmark info will be collected, merged and saved/displayed.
Each client will also save its own data locally.

Enabling server mode is done by adding `--warp-client=client-{1...10}:7761`
or a comma-separated list of warp client hosts.
Finally, a file with newline-separated hosts can also be specified using the `file:` prefix and a file name.
If no host port is specified the default is added.

Example:

```
λ warp get --duration=3m --warp-client=client-{1...10} --host=minio-server-{1...16} --access-key=minio --secret-key=minio123
```

Note that parameters apply to *each* client.
So if `--concurrent=8` is specified, each client will run with 8 concurrent operations.
If a warp server is unable to connect to a client the entire benchmark is aborted.

If the warp server loses connection to a client during a benchmark run, an error will
be displayed and the server will attempt to reconnect.
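Since parameters apply to each client, the cluster-wide concurrency is the per-client `--concurrent` value multiplied by the number of clients. A quick sketch with hypothetical numbers (10 clients, `--concurrent=8`):

```shell
# Hypothetical values: 10 warp clients, each running --concurrent=8.
clients=10
concurrent=8
echo "total in-flight operations: $((clients * concurrent))"
```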
If the server is unable to reconnect, the benchmark will continue with the remaining clients.

### Manually Distributed Benchmarking

While it is highly recommended to use the automatic distributed benchmarking, warp can also
be run manually on several machines at once.

When running benchmarks on several clients, it is possible to synchronize
their start time using the `--syncstart` parameter.
The time format is 'hh:mm', where hours are specified in 24h format
and parsed as local computer time.

Using this will make it more reliable to [merge benchmarks](https://github.com/minio/warp#merging-benchmarks)
from the clients for a total result.
This will combine the data as if it was run on the same client.
Only the time segments that actually overlap will be considered.

When running benchmarks on several clients it is likely a good idea to specify the `--noclear` parameter
so clients don't accidentally delete each other's data on startup.

## Benchmark Data

By default warp uploads random data.

### Object Size

#### Fixed File Size

Most benchmarks use the `--obj.size` parameter to decide the size of objects to upload.

Different benchmark types will have different default values.

#### Random File Sizes

It is possible to randomize object sizes by specifying `--obj.randsize`,
and files will have a "random" size up to `--obj.size`.
However, there are some things to consider "under the hood".

We use log2 to distribute object sizes.
This means that objects will be distributed in equal number for each doubling of the size,
so `obj.size/64` -> `obj.size/32` will have the same number of objects as `obj.size/2` -> `obj.size`.

Example of objects (horizontally) and their sizes, 100MB max:

![objects (horizontally) and their sizes](https://user-images.githubusercontent.com/5663952/71828619-83381480-3057-11ea-9d6c-ff03607a66a7.png)

To see segmented request statistics, use the
`--analyze.v` parameter.

```
λ warp analyze --analyze.op=GET --analyze.v warp-get-2020-08-18[190338]-6Nha.csv.zst

Operation: GET (78188). Concurrency: 32.

Requests considered: 78123. Multiple sizes, average 1832860 bytes:

Request size 1 B -> 10 KiB. Requests - 10836:
 * Throughput: Average: 1534.6KiB/s, 50%: 1571.9KiB/s, 90%: 166.0KiB/s, 99%: 6.6KiB/s, Fastest: 9.7MiB/s, Slowest: 1124.8B/s
 * First Byte: Average: 3ms, Median: 2ms, Best: 1ms, Worst: 39ms

Request size 10KiB -> 1MiB. Requests - 38219:
 * Throughput: Average: 73.5MiB/s, 50%: 66.4MiB/s, 90%: 27.0MiB/s, 99%: 13.6MiB/s, Fastest: 397.6MiB/s, Slowest: 3.1MiB/s
 * First Byte: Average: 3ms, Median: 2ms, Best: 1ms, Worst: 41ms

Request size 1MiB -> 10MiB. Requests - 33091:
 * Throughput: Average: 162.1MiB/s, 50%: 159.4MiB/s, 90%: 114.3MiB/s, 99%: 80.3MiB/s, Fastest: 505.4MiB/s, Slowest: 22.4MiB/s
 * First Byte: Average: 3ms, Median: 2ms, Best: 1ms, Worst: 40ms

Throughput:
* Average: 4557.04 MiB/s, 2604.96 obj/s (29.901s, starting 19:03:41 CEST)

Throughput, split into 29 x 1s:
 * Fastest: 4812.4MiB/s, 2711.62 obj/s (1s, starting 19:03:41 CEST)
 * 50% Median: 4602.6MiB/s, 2740.27 obj/s (1s, starting 19:03:56 CEST)
 * Slowest: 4287.0MiB/s, 2399.84 obj/s (1s, starting 19:03:53 CEST)
```

The average object size will be close to `--obj.size` multiplied by 0.179151.

To get a value for `--obj.size`, multiply the desired average object size by 5.582 to get a maximum value.
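The two factors above are reciprocals of each other; a quick awk check (0.179151 is the constant from the text, nothing else is assumed):

```shell
# 5.582 is simply the reciprocal of the 0.179151 average-size factor.
awk 'BEGIN { printf "%.3f\n", 1 / 0.179151 }'
```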
#### Bucketed File Size

The `--obj.size` parameter accepts a string value whose format can describe size buckets.
Using this format activates bucketed file sizes and preempts a possible activation
of random file sizes via `--obj.randsize`.

The format is a comma-separated list of colon-separated pairs, describing buckets and their respective weights.
Within each bucket, the size distribution is uniform.

E.g.: the value `4096:10740,8192:1685,16384:1623` will produce objects whose size will be chosen
between 0 and 4096 with a weight of 10740, between 4096 and 8192 with a weight of 1685,
or between 8192 and 16384 with a weight of 1623.

## Automatic Termination

Adding the `--autoterm` parameter will enable automatic termination when results are considered stable.
To detect a stable setup, warp continuously downsamples the current data to
25 data points stretched over the current timeframe.

For a benchmark to be considered "stable", the last 7 of 25 data points must be within a specified percentage.
Looking at the throughput over time, it could look like this:

![stable](https://user-images.githubusercontent.com/5663952/72053512-0df95900-327c-11ea-8bc5-9b4064fa595f.png)

The red frame shows the window used to evaluate stability.
The height of the box is determined by the threshold percentage of the current speed.
This percentage is user-configurable through `--autoterm.pct`, default 7.5%.
The metric used for this is either MiB/s or obj/s depending on the benchmark type.

To make sure there is good sample data, a minimum duration for the 7 of 25 samples is set.
This is configurable with `--autoterm.dur`, which specifies the minimum time the benchmark must have been stable.

If the benchmark doesn't autoterminate it will continue until the duration is reached.
This cannot be used when benchmarks are running remotely.

A permanent 'drift' in throughput will prevent automatic termination
if the drift is more than the specified percentage.
This is by design, since such drift should be recorded.

When using automatic termination, be aware that you should not compare average speeds,
since the lengths of the benchmark runs will likely be different.
Instead, 50% medians are a much better metric.

## Mixed

The mixed mode benchmark will test several operation types at once.
The benchmark will upload `--objects` objects of size `--obj.size` and use these objects as a pool for the benchmark.
As new objects are uploaded/deleted they are added/removed from the pool.

The distribution of operations can be adjusted with the `--get-distrib`, `--stat-distrib`,
`--put-distrib` and `--delete-distrib` parameters.
The final distribution will be determined by the fraction of each value of the total.
Note that `--put-distrib` must be greater than or equal to `--delete-distrib` to not eventually run out of objects.
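The "fraction of the total" rule can be sketched numerically; the weights below are hypothetical, chosen to produce a 45/30/15/10 split:

```shell
# Hypothetical distribution weights; warp uses each weight's fraction
# of the total as that operation's share of requests.
awk 'BEGIN {
  get=45; stat=30; put=15; del=10
  total = get + stat + put + del
  printf "GET %.1f%% STAT %.1f%% PUT %.1f%% DELETE %.1f%%\n",
    100*get/total, 100*stat/total, 100*put/total, 100*del/total
}'
```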
To disable a type, set its distribution to 0.

Example:
```
λ warp mixed --duration=1m
[...]
Mixed operations.

Operation: GET
 * 632.28 MiB/s, 354.78 obj/s (59.993s, starting 07:44:05 PST) (45.0% of operations)

Operation: STAT
 * 236.38 obj/s (59.966s, starting 07:44:05 PST) (30.0% of operations)

Operation: PUT
 * 206.11 MiB/s, 118.23 obj/s (59.994s, starting 07:44:05 PST) (15.0% of operations)

Operation: DELETE
 * 78.91 obj/s (59.927s, starting 07:44:05 PST) (10.0% of operations)
```

A similar benchmark called `versioned` operates on versioned objects.

## GET

Benchmarking get operations will attempt to download as many objects as it can within `--duration`.

By default, `--objects` objects of size `--obj.size` are uploaded before doing the actual bench.
Objects will be uploaded with `--concurrent` different prefixes, except if `--noprefix` is specified.

Using `--list-existing` will list at most `--objects` from the bucket and download them instead
of uploading random objects (set it to 0 to use all objects from the listing).
Listing is restricted to `--prefix` if it is set, and recursive listing can be disabled by setting `--list-flat`.

If versioned listing should be tested, it is possible by setting `--versions=n` (default 1),
which will add multiple versions of each object and request individual versions.

When downloading, objects are chosen randomly between all uploaded data and the benchmark
will attempt to run `--concurrent` concurrent downloads.

The analysis will include the upload stats as `PUT` operations and the `GET` operations.

```
Operation: GET
* Average: 94.10 MiB/s, 9866.97 obj/s

Throughput, split into 299 x 1s:
 * Fastest: 99.8MiB/s, 10468.54 obj/s
 * 50% Median: 94.4MiB/s, 9893.37 obj/s
 * Slowest: 69.4MiB/s, 7279.03 obj/s
```

The `GET` operations will contain the time until the first byte was received.
This can be accessed using the `--analyze.v` parameter.

It is possible
to test the speed of partial file requests using the `--range` option.
This will start reading each object at a random offset and read a random number of bytes.
Using this produces output similar to `--obj.randsize` - and they can even be combined.

## PUT

Benchmarking put operations will upload objects of size `--obj.size` until `--duration` time has elapsed.

Objects will be uploaded with `--concurrent` different prefixes, except if `--noprefix` is specified.

```
Operation: PUT
* Average: 10.06 MiB/s, 1030.01 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 11.3MiB/s, 1159.69 obj/s
 * 50% Median: 10.3MiB/s, 1059.06 obj/s
 * Slowest: 6.7MiB/s, 685.26 obj/s
```

MD5 checksums can be forced on data by using the `--md5` option.

To test [POST Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) operations use the `-post` parameter.

To add a [checksum](https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html) to the uploaded objects, use the `--checksum` parameter.
The following checksums are supported: `CRC32` (composite), `CRC32-FO` (full object), `CRC32C`, `SHA1`, `SHA256` and `CRC64NVME`.
Adding a checksum will always disable MD5 checksums.

## DELETE

Benchmarking delete operations will attempt to delete as many objects as it can within `--duration`.

By default, `--objects` objects of size `--obj.size` are uploaded before doing the actual bench.

The delete operations are done in `--batch` objects per request in `--concurrent` concurrently running requests.

If there are no more objects left the benchmark will end.

Using `--list-existing` will list at most `--objects` from the bucket and delete them instead of
deleting random objects (set it to 0 to use all objects from the listing).
Listing is restricted to `--prefix` if it is set, and recursive listing can be disabled by setting `--list-flat`.

The analysis will include the upload stats as
`PUT` operations and the `DELETE` operations.

```
Operation: DELETE
* Average: 10.06 MiB/s, 1030.01 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 11.3MiB/s, 1159.69 obj/s
 * 50% Median: 10.3MiB/s, 1059.06 obj/s
 * Slowest: 6.7MiB/s, 685.26 obj/s
```

## LIST

Benchmarking list operations will upload `--objects` objects of size `--obj.size` with `--concurrent` prefixes.
The list operations are done per prefix.

If versioned listing should be tested, it is possible by setting `--versions=N` (default 1),
which will add multiple versions of each object and use `ListObjectVersions` for listing.

The analysis will include the upload stats as `PUT` operations and the `LIST` operations separately.
The time from request start to first object is recorded as well and can be accessed using the `--analyze.v` parameter.

```
Operation: LIST
* Average: 10.06 MiB/s, 1030.01 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 11.3MiB/s, 1159.69 obj/s
 * 50% Median: 10.3MiB/s, 1059.06 obj/s
 * Slowest: 6.7MiB/s, 685.26 obj/s
```

## STAT

Benchmarking [stat object](https://docs.min.io/docs/golang-client-api-reference#StatObject) operations
will upload `--objects` objects of size `--obj.size` with `--concurrent` prefixes.

If versioned listing should be tested, it is possible by setting `--versions=n` (default 1),
which will add multiple versions of each object and request information for individual versions.

The main benchmark will do individual requests to get object information for the uploaded objects.

Since the object size is of little importance, only objects per second are reported.

Example:
```
λ warp stat --autoterm
[...]
-------------------
Operation: STAT
* Average: 10.06 MiB/s, 1030.01 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 11.3MiB/s, 1159.69 obj/s
 * 50% Median: 10.3MiB/s, 1059.06 obj/s
 * Slowest: 6.7MiB/s, 685.26 obj/s
```

## RETENTION

Benchmarking
[PutObjectRetention](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html) operations
will upload `--objects` objects of size `--obj.size` with `--concurrent` prefixes and `--versions` versions on each object.

Example:
```
λ warp retention --objects=2500 --duration=1m
[...]
----------------------------------------
Operation: RETENTION
* Average: 169.50 obj/s

Throughput by host:
 * http://192.168.1.78:9001: Avg: 85.01 obj/s
 * http://192.168.1.78:9002: Avg: 84.56 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 203.45 obj/s
 * 50% Median: 169.45 obj/s
 * Slowest: 161.73 obj/s
```

Note that since object locking can only be specified when creating a bucket, the bucket may need to be recreated.
Warp will attempt to do that automatically.

## MULTIPART

The multipart benchmark will upload parts to a *single* object, and afterwards test the download speed of the parts.

When running in distributed mode each client will upload the number of parts specified.

Only `--concurrent` uploads will be started by each client,
so having `--parts` be a multiple of `--concurrent` is recommended, but not required.

```
λ warp multipart --parts=500 --part.size=10MiB
warp: Benchmark data written to "warp-remote-2022-07-15[190649]-bRtD.csv.zst"

----------------------------------------
Operation: PUT
* Average: 470.88 MiB/s, 47.09 obj/s

Throughput, split into 15 x 1s:
 * Fastest: 856.9MiB/s, 85.69 obj/s
 * 50% Median: 446.7MiB/s, 44.67 obj/s
 * Slowest: 114.1MiB/s, 11.41 obj/s

----------------------------------------
Operation: GET
* Average: 1532.79 MiB/s, 153.28 obj/s

Throughput, split into 9 x 1s:
 * Fastest: 1573.7MiB/s, 157.37 obj/s
 * 50% Median: 1534.1MiB/s, 153.41 obj/s
 * Slowest: 1489.5MiB/s, 148.95 obj/s
warp: Cleanup done.
```

## MULTIPART PUT

The multipart put benchmark tests the upload speed of parts.
It creates a multipart upload, uploads `--parts` parts of
`--part.size` each, and completes the multipart upload when all parts are uploaded.

The multipart put test runs `--concurrent` separate multipart uploads. Each of those uploads is split across
`--part.concurrent` concurrent upload threads, so the total concurrency is `--concurrent`
multiplied by `--part.concurrent`.

```
λ warp multipart-put --parts 100 --part.size 5MiB
╭─────────────────────────────────╮
│ WARP S3 Benchmark Tool by MinIO │
╰─────────────────────────────────╯

Benchmarking: Press 'q' to abort benchmark and print partial results...

 λ █████████████████████████████████████████████████████████████████████████ 100%

Reqs: 15867, Errs:0, Objs:15867, Bytes: 1983.4MiB
 -   PUTPART Average: 266 Obj/s, 33.2MiB/s; Current 260 Obj/s, 32.5MiB/s, 1193.7 ms/req

Report: PUTPART. Concurrency: 400.
Ran: 58s
 * Average: 33.36 MiB/s, 266.85 obj/s
 * Reqs: Avg: 1262.5ms, 50%: 935.3ms, 90%: 2773.8ms, 99%: 4395.2ms, Fastest: 53.6ms, Slowest: 6976.4ms, StdDev: 1027.5ms

Throughput, split into 58 x 1s:
 * Fastest: 37.9MiB/s, 302.87 obj/s
 * 50% Median: 34.3MiB/s, 274.10 obj/s
 * Slowest: 19.8MiB/s, 158.41 obj/s

Cleanup Done
```

## APPEND (S3 Express)

Benchmarks S3 Express One Zone [Append Object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-append.html) operations.

Warp will upload `--obj.size` objects for each `--concurrent` thread and append up to 10,000 parts to these.
Each append operation will be one part, and the size of each part will be `--part.size`; a new object will be created when the part limit is reached.

If no `--checksum` is specified, the CRC64NVME checksum will be used. The checksum type must support full object checksums (CRC32, CRC32C, CRC64NVME).

Example:

```
λ warp append -duration=1m -obj.size=1MB
╭─────────────────────────────────╮
│ WARP S3 Benchmark Tool by MinIO │
╰─────────────────────────────────╯

Benchmarking: Press 'q' to abort benchmark and print partial results...

 λ ████████████████████████████████████████████████████████████████████████░  99%

Reqs: 4997, Errs:0, Objs:4997, Bytes: 4765.5MiB
 -    APPEND Average: 84 Obj/s, 80.4MiB/s; Current 88 Obj/s, 84.4MiB/s, 280.7 ms/req

Report: APPEND. Concurrency: 20.
Ran: 58s
 * Average: 80.15 MiB/s, 84.04 obj/s
 * Reqs: Avg: 234.6ms, 50%: 203.9ms, 90%: 354.1ms, 99%: 711.3ms, Fastest: 58.3ms, Slowest: 1213.9ms, StdDev: 109.5ms

Throughput, split into 58 x 1s:
 * Fastest: 123.8MiB/s, 129.80 obj/s
 * 50% Median: 80.1MiB/s, 83.97 obj/s
 * Slowest: 23.6MiB/s, 24.74 obj/s
```

The "obj/s" indicates the number of append operations per second.

## ZIP

The `zip` command benchmarks the MinIO [s3zip](https://blog.min.io/small-file-archives/) extension,
which allows downloading individual files from inside a zip archive.

This will upload a single zip file with 10000 individual files (change with `--files`) of 10KiB each (change with `--obj.size`).

The benchmark will then download individual files concurrently and present the result as a GET benchmark.

Example:
```
λ warp zip --obj.size=1MiB -duration=1m
warp: Benchmark data written to "warp-zip-2022-12-02[150109]-xmXj.csv.zst"

----------------------------------------
Operation: GET
* Average: 728.78 MiB/s, 728.78 obj/s

Throughput, split into 59 x 1s:
 * Fastest: 757.0MiB/s, 756.96 obj/s
 * 50% Median: 732.7MiB/s, 732.67 obj/s
 * Slowest: 662.7MiB/s, 662.65 obj/s
```

This will only work on recent MinIO versions, from 2022 and forward.

## SNOWBALL

The snowball benchmark will test uploading a "snowball" TAR file with multiple files inside that are extracted as individual objects.

Parameters:

* `--obj.size=N` controls the size of each object inside the TAR file that is uploaded. Default is 512KiB.
* `--objs.per=N` controls the number of objects per TAR file. Default is 50.
* `--compress` will compress the TAR file before upload. Object data will be duplicated inside each TAR. This limits `--obj.size` to 10MiB.

Since TAR operations are done in-memory, the total size is limited to 1GiB.

This is calculated as `--obj.size` * `--concurrent`.
If `--compress` is NOT specified, this is also multiplied by `--objs.per`.

Examples:

Benchmark using default parameters:
50 x 512KiB duplicated objects inside each TAR file, compressed.
```
λ warp snowball --duration=30s --compress
warp: Benchmark data written to "warp-snowball-2023-04-06[115116]-9S9Z.csv.zst"

----------------------------------------
Operation: PUT
* Average: 223.90 MiB/s, 447.80 obj/s

Throughput, split into 26 x 1s:
 * Fastest: 261.0MiB/s, 522.08 obj/s
 * 50% Median: 237.7MiB/s, 475.32 obj/s
 * Slowest: 151.6MiB/s, 303.27 obj/s
warp: Cleanup Done.
```

Test 1000 unique 1KB objects inside each snowball, with 2 concurrent uploads running:
```
λ warp snowball --duration=60s --obj.size=1K --objs.per=1000 --concurrent=2
warp: Benchmark data written to "warp-snowball-2023-04-06[114915]-W3zw.csv.zst"

----------------------------------------
Operation: PUT
* Average: 0.93 MiB/s, 975.72 obj/s

Throughput, split into 56 x 1s:
 * Fastest: 1051.9KiB/s, 1077.12 obj/s
 * 50% Median: 1010.0KiB/s, 1034.26 obj/s
 * Slowest: 568.2KiB/s, 581.84 obj/s
warp: Cleanup Done.
```

The analysis throughput represents the object count and sizes as they are written when extracted.

Request times shown with `--analyze.v` represent the request time for each snowball.

## FANOUT

The fanout benchmark will test uploading a single object that is copied to multiple individual objects.
This feature is only available on a recent MinIO server.

Parameters:

* `--obj.size=N` controls the size of each object that is uploaded. Default is 1MiB.
* `--copies=N` controls the number of object copies per request. Default is 100.

Size is calculated as `--obj.size` * `--copies`.

Example: Use 8 concurrent uploads to copy a 512KiB object to 50 locations.
```
λ warp fanout --copies=50 --obj.size=512KiB --concurrent=8
warp: Benchmark data written to "warp-fanout-2023-06-15[105151]-j3qb.csv.zst"

----------------------------------------
Operation: POST
* Average: 113.06 MiB/s, 226.12 obj/s

Throughput, split into 57 x 1s:
 * Fastest: 178.4MiB/s, 356.74 obj/s
 * 50% Median: 113.9MiB/s, 227.76 obj/s
 * Slowest: 56.3MiB/s, 112.53 obj/s
warp: Cleanup Done.
```

The analysis throughput represents the object count and sizes as they are written when extracted.

Request times shown with `--analyze.v` represent the request time for each fan-out call.

# Analysis

When benchmarks have finished, all request data will be saved to a file and an analysis will be shown.

The saved data can be re-evaluated by running `warp analyze (filename)`.

## Analysis Data

All analysis will be done on a reduced part of the full data.
The data aggregation will *start* when all threads have completed one request,
and the time segment will *stop* when the last request of a thread is initiated.

This is to exclude variations due to warm-up and threads finishing at different times.
Therefore, the analysis time will typically be slightly below the selected benchmark duration.

Example:
```
Operation: GET
* Average: 92.05 MiB/s, 9652.01 obj/s
```

The benchmark run is then divided into fixed-duration *segments* specified by `--analyze.dur`.
\nFor each segment the throughput is calculated across all threads.\n\nThe analysis output will display the fastest, slowest and 50% median segment.\n```\nThroughput, split into 59 x 1s:\n * Fastest: 97.9MiB/s, 10269.68 obj/s\n * 50% Median: 95.1MiB/s, 9969.63 obj/s\n * Slowest: 66.3MiB/s, 6955.70 obj/s\n```\n\n### Analysis Parameters\n\nBesides `--analyze.dur`, which specifies the time segment size for \naggregated data, there are some additional parameters that can be used.\n\nSpecifying `--analyze.v` will output time-aggregated data per host instead of just averages. \nFor instance:\n\n```\nThroughput by host:\n * http://127.0.0.1:9001: Avg: 81.48 MiB/s, 81.48 obj/s (4m59.976s)\n        - Fastest: 86.46 MiB/s, 86.46 obj/s (1s)\n        - 50% Median: 82.23 MiB/s, 82.23 obj/s (1s)\n        - Slowest: 68.14 MiB/s, 68.14 obj/s (1s)\n * http://127.0.0.1:9002: Avg: 81.48 MiB/s, 81.48 obj/s (4m59.968s)\n        - Fastest: 87.36 MiB/s, 87.36 obj/s (1s)\n        - 50% Median: 82.28 MiB/s, 82.28 obj/s (1s)\n        - Slowest: 68.40 MiB/s, 68.40 obj/s (1s)\n```\n\n\n`--analyze.op=GET` will only analyze GET operations.\n\nSpecifying `--analyze.host=http://127.0.0.1:9001` will only consider data from this specific host.\n\nWarp automatically discards the time taken for the first and last request of all threads to finish.\nHowever, if you would like to discard additional time from the aggregated data,\nthis is possible. 
For instance `--analyze.skip=10s` will skip the first 10 seconds of data for each operation type.\n\nNote that skipping data will not always result in an exact reduction in time for the aggregated data,\nsince the start time will still be aligned with requests starting.\n\n### Per Request Statistics\n\nBy adding the `--analyze.v` parameter it is possible to display per-request statistics.\n\nThis is not enabled by default, since it is assumed the benchmarks are throughput limited,\nbut in certain scenarios it can be useful, for instance to determine problems with individual hosts.\n\nExample:\n\n```\nOperation: GET (386413). Ran 1m0s. Concurrency: 20. Hosts: 2.\n\nRequests considered: 386334:\n * Avg: 3ms, 50%: 3ms, 90%: 4ms, 99%: 8ms, Fastest: 1ms, Slowest: 504ms\n * TTFB: Avg: 3ms, Best: 1ms, 25th: 3ms, Median: 3ms, 75th: 3ms, 90th: 4ms, 99th: 8ms, Worst: 504ms\n * First Access: Avg: 3ms, 50%: 3ms, 90%: 4ms, 99%: 10ms, Fastest: 1ms, Slowest: 18ms\n * First Access TTFB: Avg: 3ms, Best: 1ms, 25th: 3ms, Median: 3ms, 75th: 3ms, 90th: 4ms, 99th: 10ms, Worst: 18ms\n * Last Access: Avg: 3ms, 50%: 3ms, 90%: 4ms, 99%: 7ms, Fastest: 2ms, Slowest: 10ms\n * Last Access TTFB: Avg: 3ms, Best: 1ms, 25th: 3ms, Median: 3ms, 75th: 3ms, 90th: 4ms, 99th: 7ms, Worst: 10ms\n\nRequests by host:\n * http://127.0.0.1:9001 - 193103 requests:\n        - Avg: 3ms Fastest: 1ms Slowest: 504ms 50%: 3ms 90%: 4ms\n        - First Byte: Avg: 3ms, Best: 1ms, 25th: 3ms, Median: 3ms, 75th: 3ms, 90th: 4ms, 99th: 8ms, Worst: 504ms\n * http://127.0.0.1:9002 - 193310 requests:\n        - Avg: 3ms Fastest: 1ms Slowest: 88ms 50%: 3ms 90%: 4ms\n        - First Byte: Avg: 3ms, Best: 1ms, 25th: 3ms, Median: 3ms, 75th: 3ms, 90th: 4ms, 99th: 8ms, Worst: 88ms\n\nThroughput:\n* Average: 1.57 MiB/s, 6440.36 obj/s\n\nThroughput by host:\n * http://127.0.0.1:9001:\n        - Average:  0.79 MiB/s, 3218.47 obj/s\n        - Fastest: 844.5KiB/s\n        - 50% Median: 807.9KiB/s\n        - Slowest: 718.9KiB/s\n * 
http://127.0.0.1:9002:\n        - Average:  0.79 MiB/s, 3221.85 obj/s\n        - Fastest: 846.8KiB/s\n        - 50% Median: 811.0KiB/s\n        - Slowest: 711.1KiB/s\n\nThroughput, split into 59 x 1s:\n * Fastest: 1688.0KiB/s, 6752.22 obj/s (1s, starting 12:31:40 CET)\n * 50% Median: 1621.9KiB/s, 6487.60 obj/s (1s, starting 12:31:17 CET)\n * Slowest: 1430.5KiB/s, 5721.95 obj/s (1s, starting 12:31:59 CET)\n```\n\n* `TTFB` is the time from when the request was sent until the first byte was received.\n* `First Access` is the first access per object.\n* `Last Access` is the last access per object.\n\nThe fastest and slowest request times are shown, as well as selected \npercentiles and the total number of requests considered.\n\nNote that different metrics are used to select the number of requests per host and for the combined statistics, \nso there will likely be differences.\n\n### Time Series CSV Output\n\nThe CSV data of the analysis can be output using `--analyze.out=filename.csv`, \nwhich will write the CSV data to the specified file.\n\nThese are the data fields exported:\n\n| Header              | Description                                                                                       |\n|---------------------|---------------------------------------------------------------------------------------------------|\n| `index`             | Index of the segment                                                                              |\n| `op`                | Operation executed                                                                                |\n| `host`              | If only one host, host name, otherwise empty                                                      |\n| `duration_s`        | Duration of the segment in seconds                                                                |\n| `objects_per_op`    | Objects per operation                                                                             |\n| `bytes`             | Total bytes of 
operations (*distributed*)                                                         |\n| `full_ops`          | Operations completely contained within the segment                                                |\n| `partial_ops`       | Operations that either started or ended outside the segment, but were also executed during it     |\n| `ops_started`       | Operations started within the segment                                                             |\n| `ops_ended`         | Operations ended within the segment                                                               |\n| `errors`            | Errors logged on operations ending within the segment                                             |\n| `mb_per_sec`        | MiB/s of operations within the segment (*distributed*)                                            |\n| `ops_ended_per_sec` | Operations that ended within the segment per second                                               |\n| `objs_per_sec`      | Objects per second processed in the segment (*distributed*)                                       |\n| `start_time`        | Absolute start time of the segment                                                                |\n| `end_time`          | Absolute end time of the segment                                                                  |\n\nSome of these fields are *distributed*. \nThis means that the data of partial operations has been distributed across the segments they occur in. 
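As an illustration of this distribution, the sketch below (not Warp's internal code; the times and sizes are made up) attributes an operation's bytes to fixed 1-second segments in proportion to how much of the operation overlaps each segment:

```python
# Sketch of the *distributed* fields: attribute an operation's bytes to
# fixed 1s segments in proportion to how much of the operation overlaps
# each segment. Illustrative only; not Warp's internal code.

def distribute(op_start, op_end, op_bytes, seg_dur=1.0, segments=4):
    """Return the bytes attributed to each of `segments` fixed windows."""
    duration = op_end - op_start
    shares = []
    for i in range(segments):
        seg_a, seg_b = i * seg_dur, (i + 1) * seg_dur
        # Overlap between [op_start, op_end] and [seg_a, seg_b], clamped at 0.
        overlap = max(0.0, min(op_end, seg_b) - max(op_start, seg_a))
        shares.append(op_bytes * overlap / duration)
    return shares

# A 1 MiB operation running from t=0.5s to t=2.5s: a quarter of its bytes
# land in segment 0, half in segment 1, a quarter in segment 2.
shares = distribute(0.5, 2.5, 1 << 20)
```

The shares always sum to the operation's total bytes, which is why fractional objects and bytes can show up in a single segment.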
\nThe larger the percentage of an operation that lies within a segment, the larger the share of the operation attributed to that segment.\n\nThis is why a fractional object count can be attributed to a segment: \nonly part of the operation took place within it.\n\n## Comparing Benchmarks\n\nIt is possible to compare two recorded runs using `warp cmp (file-before) (file-after)` to\nsee the differences between before and after.\nThere is no need for 'before' to be chronologically before 'after', but the differences will be shown\nas the change from 'before' to 'after'.\n\nAn example:\n```\nλ warp cmp warp-get-2019-11-29[125341]-7ylR.csv.zst warp-get-2019-11-29[124533]-HOhm.csv.zst\n-------------------\nOperation: PUT\nDuration: 1m4s -> 1m2s\n* Average: +2.63% (+1.0 MiB/s) throughput, +2.63% (+1.0) obj/s\n* Fastest: -4.51% (-4.1) obj/s\n* 50% Median: +3.11% (+1.1 MiB/s) throughput, +3.11% (+1.1) obj/s\n* Slowest: +1.66% (+0.4 MiB/s) throughput, +1.66% (+0.4) obj/s\n-------------------\nOperation: GET\nOperations: 16768 -> 171105\nDuration: 30s -> 5m0s\n* Average: +2.10% (+11.7 MiB/s) throughput, +2.10% (+11.7) obj/s\n* First Byte: Average: -405.876µs (-2%), Median: -2.1µs (-0%), Best: -998.1µs (-50%), Worst: +41.0014ms (+65%)\n* Fastest: +2.35% (+14.0 MiB/s) throughput, +2.35% (+14.0) obj/s\n* 50% Median: +2.81% (+15.8 MiB/s) throughput, +2.81% (+15.8) obj/s\n* Slowest: -10.02% (-52.0) obj/s\n```\n\nAll relevant differences are listed. 
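The percentage figures are plain relative changes from 'before' to 'after'. A minimal sketch of the calculation (the throughput values below are hypothetical, chosen to mirror the PUT average line above):

```python
# Sketch: how a before/after change such as "+2.63% (+1.0 MiB/s)" can be
# derived. The throughput numbers are hypothetical, not from a real run.

def pct_change(before, after):
    """Relative change from 'before' to 'after', in percent."""
    return (after - before) / before * 100.0

before_mib_s = 38.0  # hypothetical 'before' average throughput
after_mib_s = 39.0   # hypothetical 'after' average throughput

delta = after_mib_s - before_mib_s
pct = pct_change(before_mib_s, after_mib_s)
print(f"* Average: {pct:+.2f}% ({delta:+.1f} MiB/s) throughput")
# prints: * Average: +2.63% (+1.0 MiB/s) throughput
```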
The example shows two `warp get` runs.\nDifferences in parameters will also be shown.\n\nThe usual analysis parameters can be applied to define segment lengths.\n\n## Merging Benchmarks\n\nIt is possible to merge runs from several clients using the `warp merge (file1) (file2) [additional files...]` command.\n\nThe command will output a combined data file with all data that overlap in time.\n\nThe combined output will effectively be the same as having run a single benchmark with a higher concurrency setting.\nThe main reason for running the benchmark on several clients would be to help eliminate client bottlenecks.\n\nIt is important to note that only data that strictly overlaps in absolute time will be considered for analysis.\n\n\n## InfluxDB Output\n\nWarp allows real-time statistics to be pushed to InfluxDB v2 or later.\n\nThis can be combined with the `--stress` parameter, which allows long-running tests without consuming memory while still providing access to performance numbers.\n\nWarp does not provide any analysis on the data sent to InfluxDB. \n\n### Configuring\n\nInfluxDB is enabled via the `--influxdb` parameter. Alternatively the parameter can be set in the `WARP_INFLUXDB_CONNECT` environment variable.\n\nThe value must be formatted like a URL: `<schema>://<token>@<hostname>:<port>/<bucket>/<org>?<tag=value>`\n\n| Part          | Description                                                                   |\n|---------------|-------------------------------------------------------------------------------|\n| `<schema>`    | Connection type. 
Replace with `http` or `https`                               |\n| `<token>`     | Replace with the token needed to access the server                            |\n| `<hostname>`  | Replace with the host name or IP address of your server                       |\n| `<port>`      | Replace with the port of your server                                          |\n| `<bucket>`    | Replace with the bucket in which to place the data                            |\n| `<org>`       | Replace with the organization to which the data should be associated (if any) |\n| `<tag=value>` | One or more tags to add to each data point                                    |\n\nEach parameter can be URL encoded.\n\nExample:\n\n`--influxdb \"http://shmRUvVjk0Ig2J9qU0_g349PF6l-GB1dmwXUXDh5qd19n1Nda_K7yvSIi9tGpax9jyOsmP2dUd-md8yPOoDNHg==@127.0.0.1:8086/mybucket/myorg?mytag=myvalue\"`\n\nThis will connect to port 8086 on 127.0.0.1 using the provided token `shmRU...`.\n\nData will be placed in `mybucket` and associated with `myorg`. An additional tag `mytag` will be set to `myvalue` on all data points.\n\nFor distributed benchmarking all clients will send data, so hosts like localhost and 127.0.0.1 should not be used.\n\n### Data\n\nAll in-run measurements are of type `warp`.\n\n| Tag        | Value                                                                                                                                                         |\n|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `warp_id`  | Contains a random string value, unique per client.<br/>This can be used to identify individual runs or single warp clients when using distributed benchmarks. |\n| `op`       | Contains the operation type, for example GET, PUT, DELETE, etc. 
|\n| `endpoint` | The endpoint to which the operation was sent.<br/>Measurements without this value are totals for the warp client.                             |\n\n\nFields are sent as accumulated totals per run per operation type.\n\nNew metrics are sent as each operation (request) completes; no progress is logged while an operation is running.\nThis means that bigger objects (and therefore fewer requests) will create bigger fluctuations, which is important to note when analyzing. \n\n| Field                     | Value                                                                          |\n|---------------------------|--------------------------------------------------------------------------------|\n| `requests`                | Total number of requests performed                                             |\n| `objects`                 | Total number of objects affected                                               |\n| `bytes_total`             | Total number of bytes affected                                                 |\n| `errors`                  | Total errors encountered                                                       |\n| `request_total_secs`      | Total request time in seconds                                                  |\n| `request_ttfb_total_secs` | Total time to first byte in seconds for relevant operations                    |\n\nSince the statistics are accumulated totals, \"rates over time\" must be calculated as differences between samples (positive derivatives). \n\n### Summary\n\nWhen a run has finished a summary will be sent. This will be a `warp_run_summary` measurement type. 
\nIn addition to the fields above it will contain:\n\n| Field                   | Value                             |\n|-------------------------|-----------------------------------|\n| `request_avg_secs`      | Average Request Time              |\n| `request_max_secs`      | Longest Request Time              |\n| `request_min_secs`      | Shortest Request Time             |\n| `request_ttfb_avg_secs` | Average Time To First Byte (TTFB) |\n| `request_ttfb_max_secs` | Longest TTFB                      |\n| `request_ttfb_min_secs` | Shortest TTFB                     |\n\nAll times are in floating-point seconds.\n\nThe summary will be sent for each host and operation type. \n\n# Server Profiling\n\nWhen running against a MinIO server it is possible to enable profiling while the benchmark is running.\n\nThis is done by adding the `--serverprof=type` parameter with the type of profile you would like. \nThis requires that the credentials allow admin access for the first host.\n\n| Type    | Description                                                                                                                                |\n|---------|--------------------------------------------------------------------------------------------------------------------------------------------|\n| `cpu`   | CPU profile determines where a program spends its time while actively consuming CPU cycles (as opposed to sleeping or waiting for I/O).    |\n| `mem`   | Heap profile reports the currently live allocations; used to monitor current memory usage or check for memory leaks.                       |\n| `block` | Block profile shows where goroutines block waiting on synchronization primitives (including timer channels).                               |\n| `mutex` | Mutex profile reports lock contention. When you think your CPU is not fully utilized due to mutex contention, use this profile.            |\n| `trace` | A detailed trace of execution of the current program. 
This will include information about goroutine scheduling and garbage collection.     |\n\nProfiles for all cluster members will be downloaded as a zip file. \n\nAnalyzing the profiles requires the Go tools to be installed. \nSee [Profiling Go Programs](https://blog.golang.org/profiling-go-programs) for basic usage of the profile tools \nand an introduction to the [Go execution tracer](https://blog.gopheracademy.com/advent-2017/go-execution-tracer/) \nfor more information.
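For example, the downloaded profiles can be inspected with the standard Go tooling (the file names below are hypothetical; the actual names depend on the cluster members and the chosen profile type):

```
λ unzip profiles.zip
λ go tool pprof -top 127.0.0.1:9000-cpu.pprof
```

`go tool pprof -top` prints the functions consuming the most CPU. For `trace` profiles, use `go tool trace` instead of `go tool pprof`.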