{"id":20396971,"url":"https://github.com/bb-qq/uas","last_synced_at":"2026-03-02T03:04:11.276Z","repository":{"id":204247599,"uuid":"711420783","full_name":"bb-qq/uas","owner":"bb-qq","description":"DSM Driver for UASP supported USB storage devices","archived":false,"fork":false,"pushed_at":"2024-02-25T02:31:10.000Z","size":499,"stargazers_count":25,"open_issues_count":4,"forks_count":2,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-04-12T12:57:53.300Z","etag":null,"topics":["driver","kernel-module","synology","synology-nas","synology-package"],"latest_commit_sha":null,"homepage":"","language":"C","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bb-qq.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-10-29T08:01:18.000Z","updated_at":"2025-03-13T01:40:58.000Z","dependencies_parsed_at":"2023-11-19T10:23:48.241Z","dependency_job_id":"ca5f1676-c4ac-4686-9c60-d1aced0f2c05","html_url":"https://github.com/bb-qq/uas","commit_stats":null,"previous_names":["bb-qq/uas"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/bb-qq/uas","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bb-qq%2Fuas","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bb-qq%2Fuas/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bb-qq%2Fuas/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bb-qq%2Fuas/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bb-qq","download_ur
l":"https://codeload.github.com/bb-qq/uas/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bb-qq%2Fuas/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29991299,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-02T01:47:34.672Z","status":"online","status_checked_at":"2026-03-02T02:00:07.342Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["driver","kernel-module","synology","synology-nas","synology-package"],"created_at":"2024-11-15T04:10:50.112Z","updated_at":"2026-03-02T03:04:11.259Z","avatar_url":"https://github.com/bb-qq.png","language":"C","readme":"# DSM Driver for UASP-supported USB storage devices\n\nThis is the USB Attached SCSI kernel module for Synology NASes.\n\nIn case you have external USB hard disks that support UASP connected to your Synology NAS, this driver will improve the read/write performance.\n\nYou may also be interested in my other projects:\n* [RTL8152/RTL8153/RTL8156(2.5Gbps or 1.0Gbps ethernet) driver package for Synology NASes](https://github.com/bb-qq/r8152)\n* [AQC111U(5Gbps ethernet) driver package for Synology NASes](https://github.com/bb-qq/aqc111)\n\n## What is USB Attached SCSI protocol?\n\n**USB Attached SCSI (UAS)** or **USB Attached SCSI Protocol (UASP)** is a protocol used to move data to and from USB storage devices such as hard drives (HDDs), solid-state drives 
(SSDs), and thumb drives. UAS depends on the USB protocol and uses the standard SCSI command set. The use of UAS generally provides faster transfers compared to the older USB Mass Storage Bulk-Only Transport (BOT) drivers.\n\nWith UAS, random read/write performance on small files (IOPS) is dramatically higher than with the conventional BOT driver. UAS also takes advantage of USB 3's ability to operate in full duplex, which is advantageous when mixing read and write operations on SSDs.\n\nFor simple sequential read/write access, there is little difference.\n\nSee [Wikipedia for a detailed explanation](https://en.wikipedia.org/wiki/USB_Attached_SCSI) and the actual benchmark described below.\n\n## Supported NAS platforms\n\n* DSM 7.2\n* apollolake-based products\n    * DS918+ (confirmed working)\n    * DS620slim\n    * DS1019+\n    * DS718+\n    * DS418play\n    * DS218+\n\nYou can download drivers for other platforms from the [Release page](https://github.com/bb-qq/uas/releases) and determine the proper driver for your model from [this page](https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Compatibility_Peripherals/What_kind_of_CPU_does_my_NAS_have), but you might encounter issues on unconfirmed platforms.\n\n**Beta notice**: Currently this driver only supports apollolake and geminilake.\n\n## Supported devices\n\nThis driver supports all UASP-enabled storage devices. However, **many UASP interoperability issues have been reported in Linux**, so the number of devices that work reliably may be limited.\n\n### Confirmed devices\n\nThese devices have been proven to perform better with UASP and to operate stably over long periods of time.\n\n* [Kuroutoshikou GW3.5AM-SU3G2P](https://amzn.to/3QwcaRH) (JMS580, Japan only)\n\n### Devices expected to work\n\nThese devices support UASP and are equipped with relatively new chips. 
They should work in theory, but we are still collecting reports on their stability.\n\n* [UGREEN External Hard Drive Enclosure](https://amzn.to/46PeR6L) (ASM225CM)\n* [ORICO USB 3.0 External Hard Drive Enclosure](https://amzn.to/3QdZFJ5) (JMS578)\n* [StarTech.com USB 3.1 - 10Gbps - Hard Drive Adapter Cable (USB312SAT3)](https://amzn.to/49frucI) (ASM1153E)\n\n[Other UASP-compatible devices can be found here](https://amzn.to/3MkiQjA).\n\n## How to install\n\n### Notice\n\nFor safety, **this driver will not be loaded while USB storage devices are mounted**. To ensure successful installation and execution of the driver, please [eject all USB storage devices from the control panel](https://kb.synology.com/en-us/DSM/help/DSM/AdminCenter/system_externaldevice_devicelist) before proceeding.\n\nThis is because the UAS driver replaces the stock USB storage driver: the stock driver must be unloaded before the UAS driver can be loaded. To avoid unintended disconnection of USB storage, the Run/Stop operation of the driver is not performed while USB storage is mounted.\n\n### Preparation\n\n[Enable SSH](https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General_Setup/How_to_login_to_DSM_with_root_permission_via_SSH_Telnet) and log in to your NAS.\n\n### Installation\n\n1. Go to \"Package Center\".\n2. Press \"Manual Install\".\n3. Choose a driver package downloaded from the [release page](https://github.com/bb-qq/uas/releases).\n4. [DSM7] The installation will fail the first time. After that, run the following command from the SSH terminal:\n   * `sudo install -m 4755 -o root -D /var/packages/uas/target/uas/spk_su /opt/sbin/spk_su`\n5. [DSM7] Retry the installation. 
\n   * You don't need the DSM7-specific steps above on subsequent installations.\n\nhttps://www.synology.com/en-us/knowledgebase/SRM/help/SRM/PkgManApp/install_buy\n\n### How to check whether UAS is enabled\n\nRun `lsusb -i` to verify that the *uas* driver is used instead of the *usb-storage* driver for the desired storage device.\n\nWhen the usb-storage (BOT) driver is used:\n```\n|__usb2          1d6b:0003:0404 09  3.00 5000MBit/s 0mA 1IF  (Linux 4.4.302+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub\n  |__2-2         152d:0580:7501 00  3.20 5000MBit/s 8mA 1IF  (Kuroutoshikou GW3.5AM-SU3G2P 37518000XXXX)\n  2-2:1.0         (IF) 08:06:50 2EPs () usb-storage host17 (sdq)\n```\n\nWhen the uas (UAS) driver is used:\n```\n|__usb2          1d6b:0003:0404 09  3.00 5000MBit/s 0mA 1IF  (Linux 4.4.302+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub\n  |__2-2         152d:0580:7501 00  3.20 5000MBit/s 8mA 1IF  (Kuroutoshikou GW3.5AM-SU3G2P 37518000XXXX)\n  2-2:1.0         (IF) 08:06:62 4EPs () uas host25 (sdq)\n```\n\nAlso, confirm from the `3.20 5000MBit/s` field that the device is connected via USB 3.\n\n## Tips\n\n### S.M.A.R.T diagnostics\n\nBecause UASP can issue native SCSI commands, S.M.A.R.T can be used as-is. However, DSM cannot perform S.M.A.R.T diagnostics on external storage, so `smartctl` must be used instead. For example:\n\n```\nsmartctl -a -d sat /dev/sdq1\n```\n\n## Known issues\n\n### Automatic driver loading at system startup does not work\n\nUSB storage is attached and mounted using the stock USB storage driver at system startup. 
The UAS driver is then loaded afterward.\n\nBecause of this order, the loading process of the UAS driver is skipped even if the UAS driver package is set to auto-run.\n\nAs a workaround, eject the USB storage and restart the driver package manually when the system is rebooted.\n\n## Performance test\n\n### Environment\n\n* DS918+ (16 GB RAM, USB 3.2 Gen1x1 5Gbps)\n* DSM 7.2-64570 Update 3\n* [Kuroutoshikou GW3.5AM-SU3G2P](https://amzn.to/3QwcaRH) (JMicron JMS580 / USB 3.2 Gen2x1 10Gbps)\n* [Seagate IronWolf 16TB ST16000VN001](https://amzn.to/45OQ2qi) (210 MB/s, 256 MB cache, 7200 rpm)\n* [fio](https://github.com/axboe/fio) 3.29 installed by [Entware](https://github.com/Entware/Entware) opkg\n\n\n### Scenario\n\nFor the parameters given to fio, I used those used to [measure the performance of persistent disks on Google Cloud Platform](https://cloud.google.com/compute/docs/disks/benchmarking-pd-performance).\n\n\u003cdetails\u003e\n  \u003csummary\u003ebenchmark.sh\u003c/summary\u003e\n\n```\n#!/bin/sh\nset -eux\n\nfio --name=write_throughput --directory=. --numjobs=8 \\\n--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \\\n--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \\\n--group_reporting=1 --iodepth_batch_submit=64 \\\n--iodepth_batch_complete_max=64\n\nfio --name=write_iops --directory=. --size=10G \\\n--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \\\n--verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1  \\\n--iodepth_batch_submit=256  --iodepth_batch_complete_max=256\n\nfio --name=read_throughput --directory=. --numjobs=8 \\\n--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \\\n--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \\\n--group_reporting=1 \\\n--iodepth_batch_submit=64 --iodepth_batch_complete_max=64\necho -----\n\nfio --name=read_iops --directory=. 
--size=10G \\\n--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \\\n--verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 \\\n--iodepth_batch_submit=256  --iodepth_batch_complete_max=256\n\nfio --name=rw_iops --directory=. --size=10G \\\n--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \\\n--verify=0 --bs=4K --iodepth=256 --rw=randrw --group_reporting=1 \\\n--iodepth_batch_submit=256 --iodepth_batch_complete_max=256\n```\n\n\u003c/details\u003e\n\n\n### Result\n\n#### Summary\n\n* **write_throughput**: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64\n  * **BOT**: IOPS=182, BW=190MiB/s (200MB/s)(11.4GiB/61487msec); 0 zone resets\n  * **UAS**: IOPS=187, BW=196MiB/s (206MB/s)(11.8GiB/61512msec); 0 zone resets\n* **write_iops**: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\n  * **BOT**: IOPS=727, BW=2926KiB/s (2997kB/s)(172MiB/60238msec); 0 zone resets\n  * **UAS**: IOPS=747, BW=3005KiB/s (3078kB/s)(176MiB/60084msec); 0 zone resets\n* **read_throughput**: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64\n  * **BOT**: IOPS=182, BW=190MiB/s (200MB/s)(11.4GiB/61115msec)\n  * **UAS**: IOPS=178, BW=186MiB/s (195MB/s)(11.4GiB/62641msec)\n* **read_iops**: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\n  * **BOT**: IOPS=192, BW=786KiB/s (805kB/s)(46.6MiB/60765msec)  \n  * **UAS**: IOPS=608, BW=2452KiB/s (2510kB/s)(144MiB/60306msec)\n* **rw_iops**: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\n  * **BOT**:\n    * read: IOPS=141, BW=574KiB/s (588kB/s)(33.9MiB/60439msec)\n    * write: IOPS=140, BW=570KiB/s (584kB/s)(33.7MiB/60439msec); 0 zone resets\n  * **UAS**: \n    * read: IOPS=290, BW=1169KiB/s (1197kB/s)(68.9MiB/60412msec)\n    
* write: IOPS=289, BW=1168KiB/s (1196kB/s)(68.9MiB/60412msec); 0 zone resets\n\n#### Raw log\n\n\u003cdetails\u003e\n  \u003csummary\u003eBOT\u003c/summary\u003e\n\n```\n+ fio --name=write_throughput --directory=. --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write --group_reporting=1 --iodepth_batch_submit=64 --iodepth_batch_complete_max=64\nwrite_throughput: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64\n...\nfio-3.29\nStarting 8 threads\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nwrite_throughput: Laying out IO file (1 file / 10240MiB)\nJobs: 6 (f=6): [W(6),_(2)][35.1%][w=82.0MiB/s][w=82 IOPS][eta 02m:00s]\nwrite_throughput: (groupid=0, jobs=8): err= 0: pid=28746: Sat Oct 28 17:04:29 2023\n  write: IOPS=182, BW=190MiB/s (200MB/s)(11.4GiB/61487msec); 0 zone resets\n    slat (usec): min=207, max=3019.6k, avg=1903467.45, stdev=683391.87\n    clat (usec): min=10, max=3112.3k, avg=511995.83, stdev=786204.39\n     lat (msec): min=137, max=5893, avg=2380.43, stdev=749.29\n    clat percentiles (usec):\n     |  1.00th=[     12],  5.00th=[     14], 10.00th=[     16],\n     | 20.00th=[     18], 30.00th=[     19], 40.00th=[     21],\n     | 50.00th=[     26], 60.00th=[ 156238], 70.00th=[ 608175],\n     | 80.00th=[1061159], 90.00th=[1837106], 95.00th=[2466251],\n     | 99.00th=[2801796], 99.50th=[2868904], 99.90th=[3003122],\n     | 99.95th=[3036677], 99.99th=[3036677]\n   bw (  KiB/s): min=936101, max=1032192, per=100.00%, avg=994801.94, stdev=2734.42, samples=186\n   
iops        : min=  912, max= 1008, avg=971.13, stdev= 2.70, samples=186\n  lat (usec)   : 20=37.01%, 50=16.08%, 250=0.51%, 500=1.55%, 750=0.48%\n  lat (msec)   : 20=0.04%, 50=0.07%, 100=0.46%, 250=4.49%, 500=6.45%\n  lat (msec)   : 750=6.19%, 1000=5.84%, 2000=12.88%, \u003e=2000=8.21%\n  cpu          : usr=0.26%, sys=0.22%, ctx=3555, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=8.0%, 8=20.6%, 16=22.9%, 32=44.6%, \u003e=64=1.1%\n     submit    : 0=0.0%, 4=7.2%, 8=7.2%, 16=14.4%, 32=20.3%, 64=50.8%, \u003e=64=0.0%\n     complete  : 0=0.0%, 4=2.0%, 8=0.5%, 16=1.0%, 32=0.5%, 64=96.0%, \u003e=64=0.0%\n     issued rwts: total=0,11193,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=64\n\nRun status group 0 (all jobs):\n  WRITE: bw=190MiB/s (200MB/s), 190MiB/s-190MiB/s (200MB/s-200MB/s), io=11.4GiB (12.3GB), run=61487-61487msec\n\nDisk stats (read/write):\n  sdq: ios=0/105756, merge=0/1378, ticks=0/10813223, in_queue=10819660, util=99.76%\n\n+ fio --name=write_iops --directory=. 
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256\nwrite_iops: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\nfio-3.29\nStarting 1 thread\nwrite_iops: Laying out IO file (1 file / 10240MiB)\nJobs: 1 (f=1): [w(1)][100.0%][w=3335KiB/s][w=833 IOPS][eta 00m:00s]\nwrite_iops: (groupid=0, jobs=1): err= 0: pid=29850: Sat Oct 28 17:05:32 2023\n  write: IOPS=727, BW=2926KiB/s (2997kB/s)(172MiB/60238msec); 0 zone resets\n    slat (usec): min=17, max=1100.1k, avg=148673.95, stdev=140363.23\n    clat (usec): min=10, max=1769.8k, avg=193891.44, stdev=224609.75\n     lat (msec): min=3, max=1937, avg=342.52, stdev=251.71\n    clat percentiles (usec):\n     |  1.00th=[     26],  5.00th=[     34], 10.00th=[     36],\n     | 20.00th=[     44], 30.00th=[  38536], 40.00th=[ 117965],\n     | 50.00th=[ 160433], 60.00th=[ 166724], 70.00th=[ 208667],\n     | 80.00th=[ 325059], 90.00th=[ 484443], 95.00th=[ 658506],\n     | 99.00th=[1035994], 99.50th=[1132463], 99.90th=[1384121],\n     | 99.95th=[1484784], 99.99th=[1686111]\n   bw (  KiB/s): min=  856, max= 5752, per=100.00%, avg=3049.65, stdev=852.07, samples=115\n   iops        : min=  214, max= 1438, avg=762.25, stdev=213.04, samples=115\n  lat (usec)   : 20=0.21%, 50=23.81%, 100=2.08%, 250=0.13%, 500=0.18%\n  lat (usec)   : 750=0.09%, 1000=0.01%\n  lat (msec)   : 2=0.02%, 4=0.17%, 10=0.51%, 20=0.44%, 50=6.37%\n  lat (msec)   : 100=5.48%, 250=32.96%, 500=19.27%, 750=4.81%, 1000=2.58%\n  lat (msec)   : 2000=1.17%\n  cpu          : usr=0.24%, sys=1.21%, ctx=1552, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, \u003e=64=99.9%\n     submit    : 0=0.0%, 4=8.1%, 8=4.6%, 16=6.5%, 32=7.4%, 64=11.7%, \u003e=64=61.7%\n     complete  : 0=0.0%, 4=9.6%, 8=0.3%, 16=0.3%, 32=0.0%, 64=0.3%, 
\u003e=64=89.6%\n     issued rwts: total=0,43813,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=256\n\nRun status group 0 (all jobs):\n  WRITE: bw=2926KiB/s (2997kB/s), 2926KiB/s-2926KiB/s (2997kB/s-2997kB/s), io=172MiB (181MB), run=60238-60238msec\n\nDisk stats (read/write):\n  sdq: ios=0/46086, merge=0/3425, ticks=0/8986953, in_queue=9016725, util=99.90%\n\n+ fio --name=read_throughput --directory=. --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read --group_reporting=1 --iodepth_batch_submit=64 --iodepth_batch_complete_max=64\nread_throughput: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64\n...\nfio-3.29\nStarting 8 threads\nJobs: 8 (f=8): [R(8)][14.2%][r=242MiB/s][r=242 IOPS][eta 06m:28s]\nread_throughput: (groupid=0, jobs=8): err= 0: pid=5321: Sat Oct 28 17:12:57 2023\n  read: IOPS=182, BW=190MiB/s (200MB/s)(11.4GiB/61115msec)\n    slat (usec): min=291, max=2813.2k, avg=1915921.31, stdev=647683.73\n    clat (usec): min=10, max=2882.9k, avg=534491.06, stdev=820259.87\n     lat (msec): min=163, max=5360, avg=2416.82, stdev=716.99\n    clat percentiles (usec):\n     |  1.00th=[     12],  5.00th=[     14], 10.00th=[     15],\n     | 20.00th=[     17], 30.00th=[     18], 40.00th=[     20],\n     | 50.00th=[     23], 60.00th=[ 170918], 70.00th=[ 666895],\n     | 80.00th=[1115685], 90.00th=[1988101], 95.00th=[2600469],\n     | 99.00th=[2701132], 99.50th=[2734687], 99.90th=[2835350],\n     | 99.95th=[2835350], 99.99th=[2868904]\n   bw (  KiB/s): min=973368, max=1008366, per=100.00%, avg=990495.48, stdev=1324.00, samples=184\n   iops        : min=  948, max=  984, avg=967.00, stdev= 1.32, samples=184\n  lat (usec)   : 20=40.52%, 50=16.03%, 500=0.51%\n  lat (msec)   : 50=0.06%, 100=0.19%, 250=4.10%, 500=6.95%, 750=3.78%\n  lat (msec)   : 1000=4.50%, 2000=13.99%, 
\u003e=2000=9.60%\n  cpu          : usr=0.01%, sys=0.23%, ctx=3163, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=4.6%, 8=24.7%, 16=22.4%, 32=40.9%, \u003e=64=4.0%\n     submit    : 0=0.0%, 4=7.1%, 8=7.4%, 16=13.0%, 32=22.4%, 64=50.1%, \u003e=64=0.0%\n     complete  : 0=0.0%, 4=3.5%, 8=0.5%, 16=0.0%, 32=0.0%, 64=96.0%, \u003e=64=0.0%\n     issued rwts: total=11123,0,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=64\n\nRun status group 0 (all jobs):\n   READ: bw=190MiB/s (200MB/s), 190MiB/s-190MiB/s (200MB/s-200MB/s), io=11.4GiB (12.2GB), run=61115-61115msec\n\nDisk stats (read/write):\n  sdq: ios=104417/4, merge=0/1, ticks=9288673/45, in_queue=9295130, util=99.99%\n\n+ fio --name=read_iops --directory=. --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256\nread_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\nfio-3.29\nStarting 1 thread\nread_iops: Laying out IO file (1 file / 10240MiB)\nJobs: 1 (f=1): [r(1)][0.5%][r=1025KiB/s][r=256 IOPS][eta 03h:44m:39s]\nread_iops: (groupid=0, jobs=1): err= 0: pid=7302: Sat Oct 28 17:14:50 2023\n  read: IOPS=192, BW=786KiB/s (805kB/s)(46.6MiB/60765msec)\n    slat (msec): min=279, max=743, avg=499.59, stdev=168.61\n    clat (usec): min=16, max=2878.0k, avg=779571.39, stdev=616567.02\n     lat (msec): min=279, max=3339, avg=1278.97, stdev=611.45\n    clat percentiles (usec):\n     |  1.00th=[     19],  5.00th=[     36], 10.00th=[     45],\n     | 20.00th=[     67], 30.00th=[ 333448], 40.00th=[ 624952],\n     | 50.00th=[ 666895], 60.00th=[ 935330], 70.00th=[1035994],\n     | 80.00th=[1333789], 90.00th=[1652556], 95.00th=[1954546],\n     | 99.00th=[2264925], 99.50th=[2332034], 99.90th=[2399142],\n     | 99.95th=[2432697], 99.99th=[2768241]\n   bw (  KiB/s): min=  
768, max= 1034, per=100.00%, avg=1016.53, stdev=45.32, samples=93\n   iops        : min=  192, max=  258, avg=253.98, stdev=11.32, samples=93\n  lat (usec)   : 20=1.26%, 50=16.80%, 100=2.40%\n  lat (msec)   : 500=16.18%, 750=23.07%, 1000=7.32%, 2000=30.78%, \u003e=2000=3.29%\n  cpu          : usr=0.06%, sys=0.12%, ctx=493, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, \u003e=64=100.8%\n     submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=32.8%, \u003e=64=67.2%\n     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, \u003e=64=100.0%\n     issued rwts: total=11682,0,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=256\n\nRun status group 0 (all jobs):\n   READ: bw=786KiB/s (805kB/s), 786KiB/s-786KiB/s (805kB/s-805kB/s), io=46.6MiB (48.9MB), run=60765-60765msec\n\nDisk stats (read/write):\n  sdq: ios=12317/4, merge=1/1, ticks=9018565/163, in_queue=9023575, util=99.90%\n\n+ fio --name=rw_iops --directory=. 
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randrw --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256\nrw_iops: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\nfio-3.29\nStarting 1 thread\nJobs: 1 (f=1): [m(1)][0.7%][r=264KiB/s,w=248KiB/s][r=66,w=62 IOPS][eta 02h:34m:14s]\nrw_iops: (groupid=0, jobs=1): err= 0: pid=29297: Sun Oct 29 12:00:26 2023\n  read: IOPS=141, BW=574KiB/s (588kB/s)(33.9MiB/60439msec)\n    slat (usec): min=46, max=1184.6k, avg=405646.76, stdev=229004.84\n    clat (usec): min=13, max=2429.1k, avg=489209.95, stdev=482026.99\n     lat (msec): min=87, max=2893, avg=894.68, stdev=516.64\n    clat percentiles (usec):\n     |  1.00th=[     26],  5.00th=[     36], 10.00th=[     37],\n     | 20.00th=[     47], 30.00th=[  90702], 40.00th=[ 333448],\n     | 50.00th=[ 396362], 60.00th=[ 455082], 70.00th=[ 683672],\n     | 80.00th=[ 868221], 90.00th=[1216349], 95.00th=[1468007],\n     | 99.00th=[1887437], 99.50th=[1971323], 99.90th=[2197816],\n     | 99.95th=[2332034], 99.99th=[2432697]\n   bw (  KiB/s): min=  320, max= 1162, per=100.00%, avg=650.14, stdev=239.00, samples=105\n   iops        : min=   80, max=  290, avg=162.48, stdev=59.71, samples=105\n  write: IOPS=140, BW=570KiB/s (584kB/s)(33.7MiB/60439msec); 0 zone resets\n    slat (usec): min=56, max=1184.6k, avg=396779.43, stdev=219027.80\n    clat (usec): min=12, max=2429.1k, avg=480735.80, stdev=482024.67\n     lat (msec): min=71, max=2893, avg=877.38, stdev=518.27\n    clat percentiles (usec):\n     |  1.00th=[     23],  5.00th=[     36], 10.00th=[     37],\n     | 20.00th=[     46], 30.00th=[  89654], 40.00th=[ 320865],\n     | 50.00th=[ 387974], 60.00th=[ 446694], 70.00th=[ 666895],\n     | 80.00th=[ 859833], 90.00th=[1199571], 95.00th=[1468007],\n     | 99.00th=[1887437], 99.50th=[2004878], 99.90th=[2197816],\n     | 99.95th=[2231370], 
99.99th=[2432697]\n   bw (  KiB/s): min=  408, max= 1136, per=100.00%, avg=648.52, stdev=233.11, samples=105\n   iops        : min=  102, max=  284, avg=162.06, stdev=58.23, samples=105\n  lat (usec)   : 20=0.32%, 50=24.88%, 100=3.35%, 250=0.45%\n  lat (msec)   : 50=0.11%, 100=2.55%, 250=4.85%, 500=27.70%, 750=9.60%\n  lat (msec)   : 1000=11.73%, 2000=14.82%, \u003e=2000=0.41%\n  cpu          : usr=0.09%, sys=0.22%, ctx=661, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, \u003e=64=100.7%\n     submit    : 0=0.0%, 4=0.5%, 8=11.6%, 16=9.5%, 32=3.0%, 64=8.0%, \u003e=64=67.3%\n     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, \u003e=64=100.0%\n     issued rwts: total=8545,8493,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=256\n\nRun status group 0 (all jobs):\n   READ: bw=574KiB/s (588kB/s), 574KiB/s-574KiB/s (588kB/s-588kB/s), io=33.9MiB (35.5MB), run=60439-60439msec\n  WRITE: bw=570KiB/s (584kB/s), 570KiB/s-570KiB/s (584kB/s-584kB/s), io=33.7MiB (35.3MB), run=60439-60439msec\n\nDisk stats (read/write):\n  sdq: ios=8941/8957, merge=0/13, ticks=4548190/4406598, in_queue=9016782, util=99.90%\n\n```\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003eUAS\u003c/summary\u003e\n\n````\n+ fio --name=write_throughput --directory=. 
--numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write --group_reporting=1 --iodepth_batch_submit=64 --iodepth_batch_complete_max=64\nwrite_throughput: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64\n...\nfio-3.29\nStarting 8 threads\nJobs: 4 (f=4): [_(3),W(1),_(1),W(3)][21.2%][w=266MiB/s][w=266 IOPS][eta 04m:01s]\nwrite_throughput: (groupid=0, jobs=8): err= 0: pid=21679: Sun Oct 29 10:58:16 2023\n  write: IOPS=187, BW=196MiB/s (206MB/s)(11.8GiB/61512msec); 0 zone resets\n    slat (usec): min=124, max=3321.9k, avg=1524123.99, stdev=732852.58\n    clat (usec): min=12, max=3324.5k, avg=824422.18, stdev=973069.16\n     lat (msec): min=303, max=5113, avg=2342.02, stdev=984.54\n    clat percentiles (usec):\n     |  1.00th=[     15],  5.00th=[     16], 10.00th=[     17],\n     | 20.00th=[     19], 30.00th=[     23], 40.00th=[   1188],\n     | 50.00th=[ 350225], 60.00th=[ 692061], 70.00th=[1484784],\n     | 80.00th=[1870660], 90.00th=[2231370], 95.00th=[2667578],\n     | 99.00th=[3170894], 99.50th=[3204449], 99.90th=[3338666],\n     | 99.95th=[3338666], 99.99th=[3338666]\n   bw (  KiB/s): min=612992, max=999424, per=100.00%, avg=794826.45, stdev=11157.01, samples=243\n   iops        : min=  598, max=  976, avg=775.95, stdev=10.91, samples=243\n  lat (usec)   : 20=25.12%, 50=10.75%, 100=0.07%, 250=1.77%, 500=0.60%\n  lat (usec)   : 750=0.59%, 1000=1.27%\n  lat (msec)   : 2=5.52%, 4=2.58%, 250=0.38%, 500=6.24%, 750=5.91%\n  lat (msec)   : 1000=1.79%, 2000=23.54%, \u003e=2000=14.97%\n  cpu          : usr=0.25%, sys=0.14%, ctx=1135, majf=0, minf=0\n  IO depths    : 1=0.0%, 2=0.0%, 4=2.8%, 8=1.7%, 16=26.0%, 32=67.6%, \u003e=64=2.8%\n     submit    : 0=0.0%, 4=6.9%, 8=6.2%, 16=16.4%, 32=28.7%, 64=41.8%, \u003e=64=0.0%\n     complete  : 0=0.0%, 4=0.0%, 8=0.4%, 16=0.4%, 32=2.0%, 64=97.2%, \u003e=64=0.0%\n     issued rwts: 
total=0,11550,0,0 short=0,0,0,0 dropped=0,0,0,0\n     latency   : target=0, window=0, percentile=100.00%, depth=64\n\nRun status group 0 (all jobs):\n  WRITE: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=11.8GiB (12.6GB), run=61512-61512msec\n\nDisk stats (read/write):\n  sdq: ios=0/24806, merge=0/24, ticks=0/9194404, in_queue=9221649, util=99.63%\n+ fio --name=write_iops --directory=. --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256\nwrite_iops: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256\nfio-3.29\nStarting 1 thread\nJobs: 1 (f=1): [w(1)][100.0%][w=3011KiB/s][w=752 IOPS][eta 00m:00s]\nwrite_iops: (groupid=0, jobs=1): err= 0: pid=22682: Sun Oct 29 10:59:18 2023\n  write: IOPS=747, BW=3005KiB/s (3078kB/s)(176MiB/60084msec); 0 zone resets\n    slat (usec): min=57, max=1145.1k, avg=141624.61, stdev=142026.06\n    clat (usec): min=7, max=1660.8k, avg=189729.16, stdev=214179.69\n     lat (msec): min=4, max=1826, avg=331.83, stdev=249.23\n    clat percentiles (usec):\n     |  1.00th=[     28],  5.00th=[     35], 10.00th=[     36],\n     | 20.00th=[  14091], 30.00th=[  82314], 40.00th=[ 152044],\n     | 50.00th=[ 156238], 60.00th=[ 160433], 70.00th=[ 183501],\n     | 80.00th=[ 304088], 90.00th=[ 350225], 95.00th=[ 566232],\n     | 99.00th=[1115685], 99.50th=[1182794], 99.90th=[1384121],\n     | 99.95th=[1484784], 99.99th=[1619002]\n   bw (  KiB/s): min=  896, max= 6144, per=100.00%, avg=3358.26, stdev=941.82, samples=107\n   iops        : min=  224, max= 1536, avg=839.46, stdev=235.49, samples=107\n  lat (usec)   : 10=0.01%, 20=0.01%, 50=18.61%, 100=0.57%, 250=0.09%\n  lat (msec)   : 4=0.13%, 10=0.14%, 20=1.09%, 50=5.36%, 100=4.89%\n  lat (msec)   : 250=44.91%, 500=18.98%, 750=1.68%, 1000=1.73%, 2000=2.11%\n  cpu          : usr=0.22%, 
sys=1.00%, ctx=1531, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.4%
     submit    : 0=0.0%, 4=5.5%, 8=2.8%, 16=2.5%, 32=9.5%, 64=13.0%, >=64=66.7%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
     issued rwts: total=0,44889,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=3005KiB/s (3078kB/s), 3005KiB/s-3005KiB/s (3078kB/s-3078kB/s), io=176MiB (185MB), run=60084-60084msec

Disk stats (read/write):
  sdq: ios=0/46020, merge=0/21, ticks=0/8699270, in_queue=8998243, util=99.90%
+ fio --name=read_throughput --directory=. --numjobs=8 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read --group_reporting=1 --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
read_throughput: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.29
Starting 8 threads
Jobs: 7 (f=7): [R(3),_(1),R(4)][52.8%][r=177MiB/s][r=177 IOPS][eta 00m:58s]
read_throughput: (groupid=0, jobs=8): err= 0: pid=23660: Sun Oct 29 11:00:24 2023
  read: IOPS=178, BW=186MiB/s (195MB/s)(11.4GiB/62641msec)
    slat (usec): min=89, max=2904.8k, avg=1610604.80, stdev=640530.74
    clat (usec): min=13, max=2835.1k, avg=829009.40, stdev=959964.01
     lat (msec): min=658, max=5113, avg=2402.85, stdev=912.91
    clat percentiles (usec):
     |  1.00th=[     15],  5.00th=[     16], 10.00th=[     17],
     | 20.00th=[     18], 30.00th=[     20], 40.00th=[    562],
     | 50.00th=[   1045], 60.00th=[ 725615], 70.00th=[2088764],
     | 80.00th=[2122318], 90.00th=[2164261], 95.00th=[2164261],
     | 99.00th=[2231370], 99.50th=[2264925], 99.90th=[2298479],
     | 99.95th=[2298479], 99.99th=[2835350]
   bw (  KiB/s): min=620884, max=945086, per=100.00%, avg=787378.69, stdev=6960.73, samples=238
   iops        : min=  606, max=  922, avg=768.35, stdev= 6.79, samples=238
  lat (usec)   : 20=31.23%, 50=5.32%, 100=0.22%, 250=0.47%, 500=2.55%
  lat (usec)   : 750=5.94%, 1000=4.27%
  lat (msec)   : 2=1.87%, 500=0.39%, 750=10.91%, 1000=1.20%, 2000=4.63%
  lat (msec)   : >=2000=31.52%
  cpu          : usr=0.01%, sys=0.16%, ctx=833, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.6%, 8=0.0%, 16=31.5%, 32=63.7%, >=64=0.6%
     submit    : 0=0.0%, 4=3.9%, 8=11.1%, 16=11.8%, 32=28.3%, 64=44.8%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.4%, 16=1.2%, 32=1.6%, 64=96.8%, >=64=0.0%
     issued rwts: total=11159,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=11.4GiB (12.2GB), run=62641-62641msec

Disk stats (read/write):
  sdq: ios=23281/3, merge=0/1, ticks=9275850/212, in_queue=9300474, util=99.93%

+ fio --name=read_iops --directory=. --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
fio-3.29
Starting 1 thread
Jobs: 1 (f=1): [r(1)][1.5%][r=2562KiB/s][r=640 IOPS][eta 01h:11m:20s]
read_iops: (groupid=0, jobs=1): err= 0: pid=24696: Sun Oct 29 11:01:26 2023
  read: IOPS=608, BW=2452KiB/s (2510kB/s)(144MiB/60306msec)
    slat (usec): min=33, max=290520, avg=177304.73, stdev=62677.13
    clat (usec): min=21, max=1080.7k, avg=234872.83, stdev=188632.97
     lat (msec): min=124, max=1363, avg=412.15, stdev=185.81
    clat percentiles (usec):
     |  1.00th=[     25],  5.00th=[     35], 10.00th=[     36],
     | 20.00th=[     46], 30.00th=[ 175113], 40.00th=[ 191890],
     | 50.00th=[ 202376], 60.00th=[ 221250], 70.00th=[ 283116],
     | 80.00th=[ 400557], 90.00th=[ 467665], 95.00th=[ 608175],
     | 99.00th=[ 792724], 99.50th=[ 826278], 99.90th=[ 910164],
     | 99.95th=[1010828], 99.99th=[1061159]
   bw (  KiB/s): min= 1603, max= 3080, per=99.85%, avg=2448.36, stdev=501.86, samples=120
   iops        : min=  398, max=  770, avg=612.07, stdev=125.50, samples=120
  lat (usec)   : 50=22.64%, 100=1.20%, 250=0.09%
  lat (msec)   : 100=0.04%, 250=43.60%, 500=24.16%, 750=7.34%, 1000=1.23%
  lat (msec)   : 2000=0.06%
  cpu          : usr=0.17%, sys=0.39%, ctx=1276, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=99.7%
     submit    : 0=0.0%, 4=0.5%, 8=0.0%, 16=0.0%, 32=6.1%, 64=19.7%, >=64=73.7%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
     issued rwts: total=36706,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=2452KiB/s (2510kB/s), 2452KiB/s-2452KiB/s (2510kB/s-2510kB/s), io=144MiB (151MB), run=60306-60306msec

Disk stats (read/write):
  sdq: ios=38091/0, merge=2/0, ticks=8879246/0, in_queue=8886403, util=99.90%

+ fio --name=rw_iops --directory=. --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=256 --rw=randrw --group_reporting=1 --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
rw_iops: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
fio-3.29
Starting 1 thread
Jobs: 1 (f=1): [m(1)][1.4%][r=1773KiB/s,w=1809KiB/s][r=443,w=452 IOPS][eta 01h:14m:46s]
rw_iops: (groupid=0, jobs=1): err= 0: pid=15910: Sun Oct 29 15:14:46 2023
  read: IOPS=290, BW=1169KiB/s (1197kB/s)(68.9MiB/60412msec)
    slat (usec): min=18, max=999896, avg=186673.75, stdev=162647.74
    clat (usec): min=14, max=1723.5k, avg=280686.89, stdev=260561.18
     lat (msec): min=34, max=1906, avg=467.41, stdev=295.39
    clat percentiles (usec):
     |  1.00th=[     32],  5.00th=[     37], 10.00th=[     39],
     | 20.00th=[  92799], 30.00th=[ 168821], 40.00th=[ 191890],
     | 50.00th=[ 212861], 60.00th=[ 244319], 70.00th=[ 337642],
     | 80.00th=[ 413139], 90.00th=[ 599786], 95.00th=[ 876610],
     | 99.00th=[1199571], 99.50th=[1350566], 99.90th=[1518339],
     | 99.95th=[1619002], 99.99th=[1702888]
   bw (  KiB/s): min=  400, max= 2525, per=100.00%, avg=1285.28, stdev=413.35, samples=109
   iops        : min=  100, max=  631, avg=321.17, stdev=103.31, samples=109
  write: IOPS=289, BW=1168KiB/s (1196kB/s)(68.9MiB/60412msec); 0 zone resets
    slat (usec): min=47, max=999899, avg=184999.48, stdev=159017.06
    clat (usec): min=10, max=1550.6k, avg=209625.84, stdev=250546.54
     lat (msec): min=26, max=1793, avg=394.66, stdev=287.77
    clat percentiles (usec):
     |  1.00th=[     23],  5.00th=[     32], 10.00th=[     37],
     | 20.00th=[     39], 30.00th=[     70], 40.00th=[ 105382],
     | 50.00th=[ 179307], 60.00th=[ 198181], 70.00th=[ 231736],
     | 80.00th=[ 346031], 90.00th=[ 467665], 95.00th=[ 750781],
     | 99.00th=[1166017], 99.50th=[1283458], 99.90th=[1501561],
     | 99.95th=[1518339], 99.99th=[1551893]
   bw (  KiB/s): min=  384, max= 2605, per=100.00%, avg=1287.28, stdev=446.63, samples=109
   iops        : min=   96, max=  651, avg=321.67, stdev=111.62, samples=109
  lat (usec)   : 20=0.16%, 50=21.56%, 100=1.55%, 250=0.08%
  lat (msec)   : 20=0.14%, 50=1.11%, 100=5.29%, 250=38.07%, 500=21.22%
  lat (msec)   : 750=5.41%, 1000=2.88%, 2000=2.89%
  cpu          : usr=0.18%, sys=0.44%, ctx=1219, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=99.4%
     submit    : 0=0.0%, 4=0.7%, 8=0.0%, 16=4.4%, 32=12.6%, 64=14.3%, >=64=67.9%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
     issued rwts: total=17527,17506,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=1169KiB/s (1197kB/s), 1169KiB/s-1169KiB/s (1197kB/s-1197kB/s), io=68.9MiB (72.3MB), run=60412-60412msec
  WRITE: bw=1168KiB/s (1196kB/s), 1168KiB/s-1168KiB/s (1196kB/s-1196kB/s), io=68.9MiB (72.3MB), run=60412-60412msec

Disk stats (read/write):
  sdq: ios=18351/18401, merge=2/13, ticks=4993597/3839753, in_queue=8854692, util=99.90%
```

</details>
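When reading these numbers, note that fio reports bandwidth in binary units (KiB/s, MiB/s) with the decimal equivalent (kB/s, MB/s) in parentheses, and that the aggregate bandwidth is simply total IO divided by runtime. A minimal sketch of those two conversions (the function names here are illustrative, not fio's):

```python
# Sketch: reproduce fio's unit conventions from the runs above.
# fio prints bandwidth in binary units (1 MiB = 2**20 bytes) and the
# decimal equivalent (1 MB = 10**6 bytes) in parentheses.

def mib_to_mb(mib_per_s: float) -> float:
    """Convert MiB/s (2**20 bytes/s) to MB/s (10**6 bytes/s)."""
    return mib_per_s * 2**20 / 10**6

def bandwidth_mib_s(io_gib: float, runtime_ms: int) -> float:
    """Aggregate bandwidth: total IO (GiB) divided by runtime (msec)."""
    return io_gib * 1024 / (runtime_ms / 1000)

# read_throughput run above: io=11.4GiB over run=62641msec
bw = bandwidth_mib_s(11.4, 62641)
print(f"{bw:.0f} MiB/s ({mib_to_mb(bw):.0f} MB/s)")  # matches fio's 186 MiB/s (195 MB/s)
```

The small discrepancies you may see against fio's own parenthesized figures come from fio rounding the raw byte counts rather than the already-rounded binary value.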