{"id":13816803,"url":"https://github.com/hyperdivision/fast-async-zlib","last_synced_at":"2025-05-15T18:32:33.397Z","repository":{"id":57232881,"uuid":"311351822","full_name":"hyperdivision/fast-async-zlib","owner":"hyperdivision","description":"Speed up zlib operations by running them using the sync APIs but in a Worker","archived":false,"fork":false,"pushed_at":"2020-11-09T13:45:24.000Z","size":5,"stargazers_count":29,"open_issues_count":1,"forks_count":0,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-10-29T01:27:34.079Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hyperdivision.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-11-09T13:44:21.000Z","updated_at":"2024-01-04T16:52:07.000Z","dependencies_parsed_at":"2022-08-31T14:11:03.188Z","dependency_job_id":null,"html_url":"https://github.com/hyperdivision/fast-async-zlib","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperdivision%2Ffast-async-zlib","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperdivision%2Ffast-async-zlib/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperdivision%2Ffast-async-zlib/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperdivision%2Ffast-async-zlib/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hyperdivision","download_url":"https://codeload.github.com/hyperdivision/
fast-async-zlib/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":225368845,"owners_count":17463462,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-04T06:00:21.804Z","updated_at":"2024-11-19T14:31:13.989Z","avatar_url":"https://github.com/hyperdivision.png","language":"JavaScript","readme":"# fast-async-zlib\n\nSpeed up zlib operations by running them using the sync APIs but in a [Worker](https://nodejs.org/api/worker_threads.html).\n\n```\nnpm install fast-async-zlib\n```\n\n## Usage\n\nWorks similarly to the core zlib module, except it uses a Worker to batch pending zips,\nwhich can be quite a bit faster than using the normal `zlib.gzip(data, cb)` API.\n\n``` js\nconst ZLibWorker = require('fast-async-zlib')\n\nconst z = new ZLibWorker({\n  maxBatchBytes: 1024 * 1024 // how large a batch buffer should be used? (1 MB default)\n})\n\nconst buf = await z.gzip('some data')\nconsole.log('gzipped:', buf)\n```\n\nThere is a small bench included that benches three approaches to zipping 100k ~1kb strings.\nOn my laptop it produces the following result:\n\n```\nrunning bench\nusing core sync: 3.383s\nusing core async: 4.640s\nusing worker: 2.870s\nre-running bench\nusing core sync: 3.873s\nusing core async: 4.843s\nusing worker: 2.929s\n```\n\nI.e. `worker.gzip` is ~10% faster than `zlib.gzipSync` and ~40% faster than `zlib.gzip(data, cb)`.\n\n## API\n\n#### `const z = new ZLibWorker([options])`\n\nCreate a new worker instance. 
Will use a Worker thread in the background to run the actual gzip, using a SharedArrayBuffer to pass data back and forth.\nOptions include:\n\n```\n{\n  maxBatch: 512, // the maximum number of entries to batch to the worker\n  maxBatchBytes: 1MB // how much memory to use for the shared array buffer\n}\n```\n\nNote that `maxBatchBytes` must be larger than the largest payload you pass to `z.gzip(payload)`,\notherwise that method will throw an exception.\n\nIf this is a big problem for you, open an issue and we'll see if we can make the buffer autogrow easily.\n\n#### `const buf = await z.gzip(inp)`\n\nGzip a string or buffer using the worker.\n\n#### `z.destroy()`\n\nFully destroy the worker. Only needed if you for some reason want to get rid of it while the program is running.\n\n#### `const pool = ZLibWorker.pool(size, [options])`\n\nMake a simple worker pool of the given size.\nIt has the same API as `ZLibWorker` but will use `size` workers behind the scenes to spread out the load.\n\n## Future\n\nIf you have a need for gunzip, inflate, deflate etc. open an issue and we'll see about adding it.\n\n## License\n\nMIT\n","funding_links":[],"categories":["Packages"],"sub_categories":["Others"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhyperdivision%2Ffast-async-zlib","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhyperdivision%2Ffast-async-zlib","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhyperdivision%2Ffast-async-zlib/lists"}