{"id":47615546,"url":"https://github.com/tuberq/tuber","last_synced_at":"2026-04-13T10:01:29.286Z","repository":{"id":343888215,"uuid":"1178548038","full_name":"tuberq/tuber","owner":"tuberq","description":"A fast job queue server with unique jobs, concurrency controls, and job group pipelines. One binary, zero dependencies.","archived":false,"fork":false,"pushed_at":"2026-04-11T09:02:54.000Z","size":1656,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-11T10:27:35.266Z","etag":null,"topics":["beanstalkd","beanstalkd-compatible","job-queue","message-queue","queue","rust","tuber","work-queue"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tuberq.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGES.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-11T05:59:55.000Z","updated_at":"2026-04-11T09:02:58.000Z","dependencies_parsed_at":"2026-04-13T10:01:16.037Z","dependency_job_id":null,"html_url":"https://github.com/tuberq/tuber","commit_stats":null,"previous_names":["dkam/tuber","tuberq/tuber"],"tags_count":33,"template":false,"template_full_name":null,"purl":"pkg:github/tuberq/tuber","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tuberq%2Ftuber","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tuberq%2Ftuber/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/
tuberq%2Ftuber/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tuberq%2Ftuber/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tuberq","download_url":"https://codeload.github.com/tuberq/tuber/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tuberq%2Ftuber/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31747175,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-13T09:16:15.125Z","status":"ssl_error","status_checked_at":"2026-04-13T09:16:05.023Z","response_time":93,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["beanstalkd","beanstalkd-compatible","job-queue","message-queue","queue","rust","tuber","work-queue"],"created_at":"2026-04-01T21:21:43.912Z","updated_at":"2026-04-13T10:01:29.280Z","avatar_url":"https://github.com/tuberq.png","language":"Rust","readme":"# tuber\n\nAn experimental, simple, fast job queue server. 
One binary, zero dependencies.\n\nTuber is a rewrite of Beanstalkd in Rust; it brings along priority queues, delayed jobs, job reservations, named tubes — and adds unique jobs, concurrency control, job group pipelines, batch operations and weighted queues.\n\n[![tuber-tui](screenshots/tui.png)](https://github.com/tuberq/tuber-rs)\n*[tuber-tui](https://github.com/tuberq/tuber-rs) — a terminal dashboard compatible with both Tuber and Beanstalkd.*\n\n## How was this built?\n\nEvery line of Rust in this project was written by Claude Opus 4.6. The architecture, testing strategy, and design decisions were human-driven. I program in Ruby and have programmed in C, Java, and PHP, but I have never programmed in Rust.\n\nI used Beanstalkd's C source code and tests as the foundation, first building a minimal working version, duplicating the tests, then incrementally adding the new extensions.\n\nThe docs/ directory contains the working files we used to plan and describe the implementation.\n\nI use Beanstalkd in [Booko](https://booko.au) in several places requiring queues. It's working very well. Claude Code can use the tuber-cli to interact with the queue, finding buried (failed) jobs, which helps with debugging.\n\nRead more about it [on my blog](https://da.nmilne.com/shipping-a-job-queue-system-without-reading-the-source-code/).\n\n## Quick Start\n\n```bash\n# Start the server\ntuber server\n\n# Put a job\ntuber put \"echo hello world\"\n\n# Put jobs from stdin (one per line)\necho -e \"job1\\njob2\\njob3\" | tuber put\n\n# Process jobs (reserves and runs each job body as a shell command)\ntuber work\n\n# List tubes\ntuber tubes\n\n# Check stats\ntuber stats\n```\n\n## Why Tuber?\n\nRedis-backed queues are popular and performant, but Redis isn't a natural fit for job queues. 
You're bolting priorities, delays, reservations, and timeouts onto a general-purpose data structure server — complexity that grows with every edge case.\n\nSQLite-backed queues are simple and fast, but limited to a single host. PostgreSQL and MySQL-backed queues can scale beyond one host, but a job queue should be separate from your application database for capacity planning — which means another instance to manage with connection pooling, tuning, vacuuming, backups, and restores.\n\nTuber is purpose-built for this. A single binary, easy to deploy in Docker, with optional write-ahead log for durability. No capacity planning, no tuning, no surprises. Workers wait efficiently at any scale.\n\nTuber is wire-compatible with [beanstalkd](https://github.com/beanstalkd/beanstalkd), so [dozens of client libraries](https://github.com/beanstalkd/beanstalkd/wiki/Client-Libraries) already work out of the box. For Tuber's extended features (idempotency, job groups, concurrency keys), see the [beaneater tuber fork](https://github.com/tuberq/beaneater/tree/tuber) for Ruby.\n\n### Feature Comparison\n\n| Feature | Tuber | Beanstalkd | Sidekiq + Redis | GoodJob | Solid Queue | RabbitMQ |\n|---|---|---|---|---|---|---|\n| **Unique / idempotent jobs** | Yes | — | Enterprise only ¹ | Yes | Yes ² | — ³ |\n| **Concurrency control** | Yes (per-key) | — | Enterprise only ¹ | Yes | Yes | — |\n| **Job groups / DAG pipelines** | Yes | — | Pro only ¹ | Batches only ⁴ | — | — |\n| **Weighted queues** | Yes | — | Yes | — | — | — |\n| **Per-job priority** | Yes (numeric) | Yes (numeric) | — ⁵ | Yes | Yes | Yes |\n| **Delayed jobs** | Yes | Yes | Yes | Yes | Yes | Via plugin |\n| **Batch reserve / delete** | Yes | — | — | — | — | Prefetch |\n| **Memory backpressure** | Yes ¹⁰ | — | Redis `maxmemory` ¹¹ | DB limits | DB limits | Memory alarms ¹² |\n| **Processing time stats** | EWMA + p50/p95/p99 | — | Histogram ⁷ | In DB ⁸ | — | — |\n| **Queue latency stats** | EWMA + min/max | — | Oldest only ⁹ 
| In DB ⁸ | — | — |\n| **Persistence** | WAL (optional) | WAL (optional) | Redis RDB/AOF | PostgreSQL | DB ⁶ | Durable queues |\n| **Infrastructure** | None (single binary) | None (single binary) | Redis | PostgreSQL | DB ⁶ | Erlang runtime |\n\n\u003csub\u003e¹ Sidekiq's unique jobs, concurrency controls, and batches require [paid tiers](https://sidekiq.org) (Pro/Enterprise). The OSS version has queue weights and basic job processing.\u003cbr\u003e\n² Solid Queue unique jobs available from Rails 7.2+.\u003cbr\u003e\n³ RabbitMQ has a community deduplication plugin, but no built-in uniqueness.\u003cbr\u003e\n⁴ GoodJob batches support single-level fan-out/fan-in (enqueue N jobs, fire a callback when all complete). Multi-stage pipelines require manually chaining batches inside callbacks.\u003cbr\u003e\n⁵ Sidekiq uses queue-level ordering (strict or weighted), not per-job numeric priority.\u003cbr\u003e\n⁶ Solid Queue supports SQLite, PostgreSQL, or MySQL.\u003cbr\u003e\n⁷ Sidekiq 7+ tracks execution time per job class in exponential histogram buckets. No percentiles without external APM.\u003cbr\u003e\n⁸ GoodJob stores timestamps in PostgreSQL — you can query for percentiles with SQL, but nothing is computed or displayed by default.\u003cbr\u003e\n⁹ Sidekiq's `Queue#latency` returns the age of the oldest job only, not a distribution. SQS has a similar `ApproximateAgeOfOldestMessage`.\u003cbr\u003e\n¹⁰ Tuber's `--max-jobs-size` rejects new `put` commands with `OUT_OF_MEMORY` when the budget is full, but workers can always reserve, release, bury, kick, and delete — the queue keeps draining even at capacity.\u003cbr\u003e\n¹¹ Redis `maxmemory` with an eviction policy can drop data silently. 
With `noeviction`, writes fail but Sidekiq has no built-in handling — workers stall on Redis errors.\u003cbr\u003e\n¹² RabbitMQ blocks publishers when memory or disk alarms fire, but also blocks consumers on the same connection — a full queue can prevent workers from ACKing messages, causing a deadlock. Tuber's design avoids this by only gating `put`.\u003c/sub\u003e\n\n## What Can You Do With It?\n\n### Background jobs\n\nOffload slow work from your web app. Your request handler queues a job and returns immediately — workers process it in the background.\n\nThese examples use shell commands for clarity, but in practice you'd typically interact with the queue programmatically via a client library.\n\n```bash\ntuber put -t emails \"send-welcome user@example.com\"\ntuber put -t thumbnails \"resize /uploads/photo.jpg 200x200\"\n\n# Workers process jobs in the background\ntuber work -t emails -j 4 \u0026\ntuber work -t thumbnails -j 2 \u0026\n```\n\n### Task pipelines\n\nChain stages together with job groups. Import rows in parallel, then fire a follow-up when they're all done.\n\n```bash\ntuber put -g import \"./import.sh row1\"\ntuber put -g import \"./import.sh row2\"\ntuber put -g import \"./import.sh row3\"\ntuber put --aft import \"./send-summary.sh\"\n```\n\n### Rate-limited processing\n\nUse concurrency keys to ensure only one job per key is processed at a time. Different keys run in parallel.\n\n```bash\n# One deploy per host at a time, but different hosts in parallel\ntuber put -c \"web1\" \"./deploy.sh web1\"\ntuber put -c \"web1\" \"./deploy.sh web1\"   # queued until first finishes\ntuber put -c \"web2\" \"./deploy.sh web2\"   # runs in parallel — different key\n```\n\n### Distributed cron\n\nRunning the same cron on multiple hosts? 
Idempotency keys prevent duplicate work.\n\n```bash\n# Safe to call from multiple cron hosts — only one job created\ntuber put -i \"nightly-report:300\" \"./generate-report.sh\"\n```\n\n### Shell task runner\n\n`tuber put` + `tuber work` is a simple distributed task runner. Queue shell commands, and workers execute them.\n\n```bash\ntuber server \u0026\ntuber work -j 4 \u0026\n\ntuber put \"echo 'hello world'\"\ntuber put \"curl -s https://example.com/api/webhook -d '{\\\"event\\\": \\\"done\\\"}'\"\ntuber put -i \"transcode-42\" \"ffmpeg -i /data/video-42.raw -c:v libx264 /data/video-42.mp4\"\n```\n\n## Features\n\nAll the great hits — priority queues, delayed jobs, TTR, named tubes, bury \u0026 kick — plus idempotency, concurrency keys, job groups, weighted reserve, and batch operations.\n\n### Core\n\n- **Priority queues** — lower priority number = more urgent. Jobs with priority \u003c 1024 are considered \"urgent\".\n- **Delayed jobs** — submit a job now, make it available after a delay.\n- **Time-to-run (TTR)** — if a worker doesn't finish within TTR seconds, the job goes back to ready.\n- **Named tubes** — organise jobs into separate queues. Default tube is `default`.\n- **Bury \u0026 kick** — set aside problem jobs for later inspection, then kick them back to ready.\n- **Peek** — inspect jobs without reserving them. Peek by ID, or peek at the next ready/delayed/buried job.\n- **Pause** — temporarily stop a tube from serving jobs.\n- **Persistence** — optional write-ahead log (`-b` flag) for crash recovery.\n- **Memory budget** — `--max-jobs-size` caps the total in-memory footprint of all jobs. PUT returns `OUT_OF_MEMORY` when the budget is exhausted, giving producers an explicit backpressure signal instead of a silent OOM kill. Workers can always reserve, release, bury, kick, and delete — state transitions never fail due to the budget. 
Stats (`current-jobs-size`, `max-jobs-size`) and Prometheus gauges (`tuber_jobs_size_bytes`, `tuber_jobs_size_limit_bytes`) let you alert before the budget fills up. The budget also applies at startup: if the WAL is larger than the configured limit (e.g. after tightening the limit on a previously-unbounded server), tuber aborts with a diagnostic error instead of OOMing mid-replay.\n- **Prometheus metrics** — expose a `/metrics` endpoint for monitoring. See [Statistics Reference](docs/statistics.md#prometheus-metrics).\n\n### Statistics\n\nMost job queue systems treat performance monitoring as the application's problem. Tuber tracks it at the broker, per tube, with no external tooling required:\n\n- **Processing time** — EWMA, min, max, and sample count for how long workers take to complete jobs (reserve-to-delete).\n- **Dual EWMA** — jobs are automatically split at a 100ms threshold into fast and slow buckets, each with its own EWMA. This surfaces bimodal distributions (e.g. idempotent fast-exits vs real work) that a single average would hide.\n- **Percentiles** — p50, p95, p99 from the last 1000 samples. Uses slow-job samples when available, falls back to fast-job samples for tubes where all jobs are quick.\n- **Queue time (time-in-queue)** — EWMA, min, and max of how long jobs waited from `put` to `reserve`. Growing queue time means you need more workers — and you'll know before your users do.\n- **Bury rate** — fraction of reserves that ended in a bury, for quick failure monitoring.\n\nAll stats are available via `stats-tube`, the Prometheus `/metrics` endpoint, and [tuber-tui](https://github.com/tuberq/tuber-tui). See the full [Statistics Reference](docs/statistics.md) for field details.\n\n### Weighted Reserve\n\nBy default, `reserve` picks the highest-priority job across all watched tubes. 
Two weighted modes let you distribute work across tubes:\n\n**`weighted`** — a tube is chosen randomly in proportion to its weight, then the highest-priority job from that tube is returned:\n\n```text\nwatch email\nwatch notifications 2\nwatch another-tube 6\nreserve-mode weighted\nreserve\n```\n\nTubes default to weight 1. Here, `another-tube` is selected 3x as often as `notifications` and 6x as often as `email`.\n\n**`weighted-fair`** — like `weighted`, but adjusts for processing time so that **worker-time** (not job count) is allocated proportional to weights. Each tube's effective weight is `weight / processing_time_ewma`:\n\n```text\nreserve-mode weighted-fair\n```\n\nThis prevents slow tubes from starving fast ones. For example, if `alerter` jobs take 0.1s and `fetcher` jobs take 10s, standard `weighted` with equal weights would lock workers on `fetcher` 99% of the time. With `weighted-fair`, selection compensates for the processing time difference so both tubes get an equal share of worker capacity. Tubes with no processing history yet fall back to raw weights.\n\n### Unique Jobs (Idempotency)\n\nPrevent duplicate jobs with an `idp:` key on `put`. 
If a job with the same key already exists in the tube, the original job ID is returned along with the existing job's state:\n\n```text\nput 100 0 30 5 idp:my-key\n\u003cbody\u003e\n→ INSERTED 1\n\nput 100 0 30 5 idp:my-key\n\u003cbody\u003e\n→ INSERTED 1 READY       (dedup hit — job is ready)\n```\n\n#### Priority Escalation\n\nIf a duplicate `put` arrives with a higher priority (lower number) than the existing job, the job's priority is upgraded and the new priority is included in the response:\n\n```text\nput 100 0 30 5 idp:my-key\n\u003cbody\u003e\n→ INSERTED 1\n\nput 50 0 30 5 idp:my-key\n\u003cbody\u003e\n→ INSERTED 1 READY 50    (dedup hit — priority upgraded to 50)\n\nput 200 0 30 5 idp:my-key\n\u003cbody\u003e\n→ INSERTED 1 READY       (dedup hit — priority NOT downgraded)\n```\n\nPriority can only increase (lower number), never decrease — this prevents flapping when multiple producers disagree. The upgrade applies regardless of job state (ready, reserved, delayed, or buried); for non-ready jobs, the new priority takes effect on the next state transition.\n\nThe response state tells you exactly what happened to the original job:\n\n| Response | Meaning |\n|---|---|\n| `INSERTED \u003cid\u003e` | Fresh insert, new job created |\n| `INSERTED \u003cid\u003e READY` | Dedup hit — original job is waiting to be reserved |\n| `INSERTED \u003cid\u003e READY \u003cpri\u003e` | Dedup hit — priority upgraded to `\u003cpri\u003e` |\n| `INSERTED \u003cid\u003e RESERVED` | Dedup hit — original job is being processed |\n| `INSERTED \u003cid\u003e RESERVED \u003cpri\u003e` | Dedup hit — priority upgraded (applies on release) |\n| `INSERTED \u003cid\u003e DELAYED` | Dedup hit — original job is delayed |\n| `INSERTED \u003cid\u003e DELAYED \u003cpri\u003e` | Dedup hit — priority upgraded (applies when ready) |\n| `INSERTED \u003cid\u003e BURIED` | Dedup hit — original job is buried |\n| `INSERTED \u003cid\u003e BURIED \u003cpri\u003e` | Dedup hit — priority upgraded (applies 
on kick) |\n| `INSERTED \u003cid\u003e DELETED` | Dedup hit during TTL cooldown (see below) |\n\nThe state suffix only appears on dedup hits — a `put` without `idp:` always returns plain `INSERTED \u003cid\u003e`, keeping the response fully backwards-compatible with standard beanstalkd clients.\n\nThe key is scoped to the tube and cleared when the job is deleted, so the same key can be reused afterwards.\n\n#### Cooldown TTL\n\nBy default, the idempotency key is removed as soon as the job is deleted. Add a TTL with `idp:key:N` to keep deduplicating for N seconds after deletion:\n\n```text\nput 0 0 30 5 idp:report:300\n\u003cbody\u003e\n→ INSERTED 1\n\n(reserve → delete job 1)\n\nput 0 0 30 5 idp:report:300\n\u003cbody\u003e\n→ INSERTED 1 DELETED     (still deduped — within 300s cooldown)\n```\n\nAfter the cooldown expires, the key is freed and a new job will be created. `idp:key` (no TTL) keeps the original behaviour — key removed immediately on delete.\n\n### Job Groups (Fan-out / Fan-in)\n\nGroup related jobs together with `grp:` and chain dependent work with `aft:`. After-jobs are held until every job in the group they depend on has been deleted:\n\n```text\nput 0 0 30 11 grp:import\nimport-row-1\nput 0 0 30 11 grp:import\nimport-row-2\nput 0 0 60 14 aft:import\nsend-summary\n```\n\nThe `send-summary` job stays held until both `import` group jobs are deleted. Buried jobs block group completion — kick them to let the group finish. 
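The hold-and-release rule is easy to model: each group counts its undeleted members, and `aft:` jobs stay parked until the count reaches zero. A toy sketch of that bookkeeping (hypothetical names, not tuber's internals, which also track buried jobs and delete by job ID):

```python
from collections import defaultdict

class GroupTracker:
    """Toy model of the grp:/aft: fan-in rule, for illustration only."""

    def __init__(self):
        self.pending = defaultdict(int)   # group -> undeleted member count
        self.waiting = defaultdict(list)  # group -> held after-jobs
        self.ready = []                   # jobs released to the ready queue

    def put(self, job, grp=None, aft=None):
        if grp:
            self.pending[grp] += 1
        if aft and self.pending[aft] > 0:
            self.waiting[aft].append((job, grp))  # held until the group drains
        else:
            self.ready.append(job)

    def delete(self, grp=None):
        """Delete one member of `grp`; release after-jobs when it empties."""
        if grp:
            self.pending[grp] -= 1
            if self.pending[grp] == 0:
                for job, g in self.waiting.pop(grp, []):
                    self.put(job, grp=g)  # may itself belong to another group

t = GroupTracker()
t.put("row-1", grp="import")
t.put("row-2", grp="import")
t.put("send-summary", aft="import")
assert t.ready == ["row-1", "row-2"]   # summary is held
t.delete(grp="import")
t.delete(grp="import")
assert "send-summary" in t.ready       # fires once the group drains
```

Chained stages fall out of the same rule: a released after-job that carries its own `grp:` immediately joins the next group.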
If an `aft:` job isn't running and you're not sure why, use `stats-group \u003cname\u003e` to check whether the group still has pending or buried members.\n\nChain stages together by combining `aft:` and `grp:` on the same job — the job waits on one group while belonging to another:\n\n```text\nput 0 0 30 5 grp:extract\nrow-1\nput 0 0 30 5 grp:extract\nrow-2\nput 0 0 30 5 aft:extract grp:transform\ntransform\nput 0 0 30 5 aft:transform\nload\n```\n\nHere `transform` waits for the `extract` group to finish and is itself a member of the `transform` group. `load` waits for `transform` to complete — giving you a simple DAG pipeline.\n\nUse `stats-group \u003cname\u003e` to inspect group state — useful for debugging why `aft:` jobs aren't running:\n\n```text\nstats-group import\n→ OK \u003cbytes\u003e\n---\nname: \"import\"\npending: 2\nburied: 1\ncomplete: false\nwaiting-jobs: 1\n```\n\nA buried job blocks group completion (`complete: false`). Kick it to let the group finish.\n\nGroup names are global — jobs in the same group can span multiple tubes. Note that the server does not detect cycles: if two groups depend on each other, the waiting jobs will be held indefinitely. Cycle avoidance is the client's responsibility.\n\n### Concurrency Keys\n\nLimit parallel processing of related jobs. When a job with a `con:` key is reserved, other ready jobs sharing the same key are hidden from `reserve` until the reservation ends (via delete, release, bury, TTR timeout, or disconnect):\n\n```text\nput 0 0 30 7 con:user-42\npayload1\nput 0 0 30 7 con:user-42\npayload2\n```\n\nOnly one `con:user-42` job can be reserved at a time, ensuring serial processing per key.\n\nSet a higher limit with `con:key:N` to allow N concurrent reservations:\n\n```text\nput 0 0 30 7 con:api:3\npayload1\nput 0 0 30 7 con:api:3\npayload2\n```\n\nUp to 3 `con:api` jobs can be reserved simultaneously. 
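The gating amounts to a per-key counter checked at reserve time: a ready job is visible to `reserve` only while its key has a free slot. A toy sketch of that rule (hypothetical names, not tuber's implementation):

```python
from collections import deque, defaultdict

class Tube:
    """Toy sketch of con:key gating, for illustration only."""

    def __init__(self):
        self.ready = deque()              # (job, key, limit)
        self.reserved = defaultdict(int)  # key -> active reservations

    def put(self, job, con=None, limit=1):
        self.ready.append((job, con, limit))

    def reserve(self):
        for i, (job, key, limit) in enumerate(self.ready):
            if key is None or self.reserved[key] < limit:
                del self.ready[i]
                if key:
                    self.reserved[key] += 1
                return job, key
        return None  # everything ready is gated; a real worker would block

    def release_slot(self, key):
        """delete / release / bury / TTR timeout all free the slot."""
        if key:
            self.reserved[key] -= 1

t = Tube()
t.put("deploy-1", con="web1")
t.put("deploy-2", con="web1")
t.put("deploy-3", con="web2")
assert t.reserve()[0] == "deploy-1"
assert t.reserve()[0] == "deploy-3"   # web1 is gated, web2 runs in parallel
assert t.reserve() is None            # deploy-2 waits for web1's slot
t.release_slot("web1")
assert t.reserve()[0] == "deploy-2"
```
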
`con:key` (no `:N`) defaults to a limit of 1.\n\nBurying or releasing-with-delay a job frees its concurrency slot immediately — the slot is only held while the job is reserved. Delayed jobs don't occupy a slot until they become ready and are reserved. Use `stats-job \u003cid\u003e` to check a job's current state if reserves are unexpectedly blocked.\n\n### Batch Operations\n\nReduce round trips when working with many jobs at once.\n\n#### reserve-batch\n\nReserve up to N jobs in a single call (1–1000). Returns immediately with whatever is available — if fewer jobs are ready than requested, you get fewer:\n\n```text\nreserve-batch 5\n→ RESERVED_BATCH 3\n→ RESERVED 1 5\n→ hello\n→ RESERVED 2 5\n→ world\n→ RESERVED 3 7\n→ goodbye\n```\n\nThe response starts with `RESERVED_BATCH \u003ccount\u003e`, followed by standard `RESERVED \u003cid\u003e \u003cbytes\u003e\\r\\n\u003cbody\u003e\\r\\n` entries for each job. If no jobs are available, `RESERVED_BATCH 0` is returned.\n\n#### delete-batch\n\nDelete multiple jobs in a single call (1–1000 IDs, space-separated):\n\n```text\ndelete-batch 1 2 3 99\n→ DELETED_BATCH 3 1\n```\n\nReturns `DELETED_BATCH \u003cdeleted_count\u003e \u003cnot_found_count\u003e` — here 3 jobs were deleted and 1 was not found.\n\n## Installation\n\n### Homebrew\n\n```bash\nbrew tap tuberq/tuber\nbrew install tuber\n```\n\n### Cargo\n\n```bash\ncargo install --git https://github.com/tuberq/tuber\n```\n\nPre-built binaries for Linux and macOS are available on the [releases page](https://github.com/tuberq/tuber/releases).\n\n### Docker\n\n```bash\ndocker run ghcr.io/tuberq/tuber server -l 0.0.0.0 -p 11300\n```\n\n### Building from source\n\n```bash\ncargo build --release\n```\n\nThe binary will be at `target/release/tuber`.\n\n## CLI Reference\n\n### Server\n\n```bash\ntuber server [OPTIONS]\n```\n\n| Option | Env var | Default | Description |\n|---|---|---|---|\n| `-l`, `--listen` | `TUBER_LISTEN` | `0.0.0.0` | Listen address |\n| `-p`, `--port` | 
`TUBER_PORT` | `11300` | Listen port |\n| `-b`, `--binlog-dir` | `TUBER_BINLOG_DIR` | — | WAL directory (enables persistence) |\n| `-z`, `--max-job-size` | `TUBER_MAX_JOB_SIZE` | `65535` | Max size of a single job body. Accepts suffixes: `k`, `m`, `g`, `t` (e.g. `64k`). |\n| `--max-jobs-size` | `TUBER_MAX_JOBS_SIZE` | unlimited | Max total in-memory size of all jobs (bodies + per-job overhead + tombstones). PUT returns `OUT_OF_MEMORY` when exceeded; reserve/release/bury/kick/delete always succeed. Accepts suffixes: `k`, `m`, `g`, `t` (e.g. `2g`, `500M`). |\n| `-V` | `TUBER_VERBOSE` | warn | Verbosity (`-V` info, `-VV` debug) |\n| `--metrics-port` | `TUBER_METRICS_PORT` | — | Prometheus metrics endpoint port |\n| `--name` | `TUBER_NAME` | — | Instance name (shown in stats and metrics) |\n\n```bash\n# Listen on a custom port with persistence\ntuber server -p 11301 -b /var/lib/tuber\n\n# Verbose mode with metrics\ntuber server -VV --metrics-port 9100\n\n# Memory-bounded server (Docker-friendly)\ntuber server --max-jobs-size 2g -b /var/lib/tuber --metrics-port 9100\n```\n\n#### Durability \u0026 fsync\n\nWhen persistence is enabled (`-b`), tuber appends job mutations to a write-ahead log (WAL). The WAL is fsynced every 100ms as part of the server's internal tick — not on every write. This means:\n\n- **At most 100ms of data can be lost on a crash.** Jobs written in the last tick interval may not have been fsynced to disk yet.\n- **fsync overhead is constant regardless of throughput.** Whether you're doing 10 jobs/sec or 100,000 jobs/sec, tuber calls fsync ~10 times per second. On NVMe/SSD storage this adds negligible latency; on spinning disks it costs ~50–150ms/sec of I/O time.\n\nThis is a different trade-off from databases like PostgreSQL or MySQL, which fsync on every transaction commit to guarantee durability of each acknowledged write (the \"D\" in ACID). 
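The pattern is a form of group commit: writers append to a buffered log and get an immediate acknowledgement, while a ticker flushes and fsyncs on a fixed interval. A minimal illustrative sketch in Python (not tuber's Rust implementation):

```python
import os
import tempfile
import threading
import time

class Wal:
    """Sketch of a group-commit WAL: appends are acknowledged immediately,
    and a background ticker fsyncs every `interval` seconds, so fsync cost
    stays constant regardless of write rate. Illustration only."""

    def __init__(self, path, interval=0.1):
        self.f = open(path, "ab")
        self.lock = threading.Lock()
        threading.Thread(target=self._tick, args=(interval,), daemon=True).start()

    def append(self, record: bytes):
        # The caller is acked once the record is in the OS buffer;
        # durability lags by at most one tick interval.
        with self.lock:
            self.f.write(record + b"\n")

    def _tick(self, interval):
        while True:
            time.sleep(interval)
            with self.lock:
                self.f.flush()
                os.fsync(self.f.fileno())

path = os.path.join(tempfile.mkdtemp(), "wal.log")
wal = Wal(path)
for i in range(1000):
    wal.append(b"put job-%d" % i)   # no fsync on the write path
time.sleep(0.25)                    # a couple of ticks later...
assert os.path.getsize(path) > 0    # ...the records are durable on disk
```
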
Tuber's `INSERTED` response means the job is buffered in the WAL but not necessarily fsynced — similar to PostgreSQL's `synchronous_commit = off` mode. For most queue workloads, losing a fraction of a second of jobs on a hard crash is acceptable, and the throughput benefit is significant.\n\nWithout `-b`, all state is in-memory only and lost on restart.\n\n### Put\n\n```bash\ntuber put [OPTIONS] [BODY]\n```\n\n| Option | Default | Description |\n|---|---|---|\n| `-t`, `--tube` | `default` | Tube name |\n| `-p`, `--pri` | `0` | Priority (0 is most urgent) |\n| `-d`, `--delay` | `0` | Delay in seconds before job becomes ready |\n| `--ttr` | `60` | Time-to-run in seconds |\n| `-i`, `--idp` | — | Idempotency key — `key` or `key:ttl` (TTL seconds keeps deduping after delete) |\n| `-g`, `--grp` | — | Group name (for job grouping) |\n| `--aft` | — | After-group dependency (wait for this group to complete) |\n| `-c`, `--con` | — | Concurrency key — `key` or `key:N` (N = max concurrent reservations, default 1) |\n| `-a`, `--addr` | `localhost:11300` | Server address |\n\n```bash\n# Put a job on a specific tube with priority\ntuber put -t emails --pri 100 \"send welcome email\"\n\n# Pipe jobs from a file\ncat jobs.txt | tuber put -t batch\n\n# Put a job with a concurrency key\ntuber put -c deploy \"deploy-service-a\"\n\n# Put grouped jobs with a dependent follow-up\ntuber put -g import \"import-row-1\"\ntuber put -g import \"import-row-2\"\ntuber put --aft import \"send-summary\"\n```\n\n### Work\n\nReserve and execute jobs as shell commands.\n\n```bash\ntuber work [OPTIONS]\n```\n\n| Option | Default | Description |\n|---|---|---|\n| `-t`, `--tube` | `default` | Tube to watch |\n| `-j`, `--parallel` | `1` | Number of parallel workers |\n| `-a`, `--addr` | `localhost:11300` | Server address |\n\n```bash\n# Process jobs from the \"emails\" tube with 4 workers\ntuber work -t emails -j 4\n```\n\n### Tubes\n\nList all tubes with a summary of job counts.\n\n```bash\ntuber tubes 
[OPTIONS]\n```\n\n| Option | Default | Description |\n|---|---|---|\n| `-a`, `--addr` | `localhost:11300` | Server address |\n\n```bash\n$ tuber tubes\ndefault: ready=4 reserved=0 delayed=0 buried=0\nmy-tube: ready=16 reserved=0 delayed=0 buried=0\n```\n\n### Stats\n\nShow global server statistics or per-tube statistics. See [Statistics Reference](docs/statistics.md) for all available fields.\n\n```bash\ntuber stats [OPTIONS]\n```\n\n| Option | Default | Description |\n|---|---|---|\n| `-t`, `--tube` | — | Tube name (omit for global stats) |\n| `-a`, `--addr` | `localhost:11300` | Server address |\n\n```bash\n# Global stats\ntuber stats\n\n# Per-tube stats\ntuber stats -t emails\n```\n\n## Protocol Reference\n\nTuber speaks the [beanstalkd protocol](https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt) — any beanstalkd client library works out of the box. Commands marked with **+** are tuber extensions.\n\nAll commands are `\\r\\n`-terminated. `\u003cid\u003e` is a 64-bit job ID, `\u003cpri\u003e` is a 32-bit priority (0 = most urgent), `\u003cdelay\u003e` and `\u003cttr\u003e` are seconds, `\u003cbytes\u003e` is body length.\n\n### Producer commands\n\n| Command | Description |\n|---|---|\n| `put \u003cpri\u003e \u003cdelay\u003e \u003cttr\u003e \u003cbytes\u003e [tags]\\r\\n\u003cbody\u003e\\r\\n` | Submit a job. Returns `INSERTED \u003cid\u003e` or `BURIED \u003cid\u003e`. |\n| `use \u003ctube\u003e\\r\\n` | Set the tube for subsequent `put` commands. Returns `USING \u003ctube\u003e`. |\n\n**+ Put extension tags** — append space-separated tags after `\u003cbytes\u003e`:\n\n| Tag | Effect |\n|---|---|\n| `idp:\u003ckey\u003e` or `idp:\u003ckey\u003e:\u003cttl\u003e` | Idempotency — deduplicates jobs by key within the tube. Optional TTL (seconds) keeps deduplicating after deletion. See [Unique Jobs](#unique-jobs-idempotency). |\n| `grp:\u003cname\u003e` | Assigns the job to a group for fan-out/fan-in. See [Job Groups](#job-groups-fan-out--fan-in). 
|\n| `aft:\u003cname\u003e` | Holds the job until all jobs in the named group are deleted. See [Job Groups](#job-groups-fan-out--fan-in). |\n| `con:\u003ckey\u003e` or `con:\u003ckey\u003e:\u003climit\u003e` | Concurrency key — limits how many jobs per key can be reserved at once (default 1). See [Concurrency Keys](#concurrency-keys). |\n\n### Worker commands\n\n| Command | Description |\n|---|---|\n| `reserve\\r\\n` | Block until a job is available. Returns `RESERVED \u003cid\u003e \u003cbytes\u003e\\r\\n\u003cbody\u003e`. |\n| `reserve-with-timeout \u003cseconds\u003e\\r\\n` | Like `reserve` but times out. Returns `RESERVED …` or `TIMED_OUT`. |\n| `reserve-job \u003cid\u003e\\r\\n` | Reserve a specific job by ID. Returns `RESERVED …` or `NOT_FOUND`. |\n| **+** `reserve-batch \u003ccount\u003e\\r\\n` | Reserve up to `\u003ccount\u003e` jobs at once (1–1000). Non-blocking — returns whatever is available. See [Batch Operations](#batch-operations). |\n| **+** `reserve-mode \u003cmode\u003e\\r\\n` | Set reserve strategy: `default` (priority-first), `weighted` (random by tube weight), or `weighted-fair` (adjusted for processing time). See [Weighted Reserve](#weighted-reserve). |\n| `delete \u003cid\u003e\\r\\n` | Delete a job. Returns `DELETED` or `NOT_FOUND`. |\n| **+** `delete-batch \u003cid\u003e …\\r\\n` | Delete multiple jobs by ID (1–1000, space-separated). Returns `DELETED_BATCH \u003cdeleted\u003e \u003cnot_found\u003e`. See [Batch Operations](#batch-operations). |\n| `release \u003cid\u003e \u003cpri\u003e \u003cdelay\u003e\\r\\n` | Release a reserved job back to ready (or delayed). Returns `RELEASED`. |\n| `bury \u003cid\u003e \u003cpri\u003e\\r\\n` | Bury a reserved job. Returns `BURIED`. |\n| `touch \u003cid\u003e\\r\\n` | Reset the TTR timer on a reserved job. Returns `TOUCHED`. |\n| `watch \u003ctube\u003e [weight]\\r\\n` | Add a tube to the watch list. Optional **+** weight for weighted mode. Returns `WATCHING \u003ccount\u003e`. 
|\n| `ignore \u003ctube\u003e\\r\\n` | Remove a tube from the watch list. Returns `WATCHING \u003ccount\u003e` or `NOT_IGNORED`. |\n\n### Peek / inspect commands\n\n| Command | Description |\n|---|---|\n| `peek \u003cid\u003e\\r\\n` | Peek at a job by ID. Returns `FOUND \u003cid\u003e \u003cbytes\u003e\\r\\n\u003cbody\u003e` or `NOT_FOUND`. |\n| `peek-ready\\r\\n` | Peek at the next ready job in the used tube. |\n| `peek-delayed\\r\\n` | Peek at the next delayed job in the used tube. |\n| `peek-buried\\r\\n` | Peek at the next buried job in the used tube. |\n\n### Admin commands\n\n| Command | Description |\n|---|---|\n| `kick \u003cbound\u003e\\r\\n` | Kick up to `\u003cbound\u003e` buried/delayed jobs in the used tube. Returns `KICKED \u003ccount\u003e`. |\n| `kick-job \u003cid\u003e\\r\\n` | Kick a specific buried or delayed job. Returns `KICKED` or `NOT_FOUND`. |\n| `pause-tube \u003ctube\u003e \u003cdelay\u003e\\r\\n` | Pause a tube for `\u003cdelay\u003e` seconds. Returns `PAUSED`. |\n| **+** `flush-tube \u003ctube\u003e\\r\\n` | Delete all jobs from a tube. Returns `FLUSHED \u003ccount\u003e`. |\n| `stats\\r\\n` | Server-wide statistics in YAML. See [Statistics Reference](docs/statistics.md). |\n| `stats-job \u003cid\u003e\\r\\n` | Statistics for a single job in YAML. See [Statistics Reference](docs/statistics.md#job-stats-stats-job-id). |\n| `stats-tube \u003ctube\u003e\\r\\n` | Statistics for a tube in YAML. See [Statistics Reference](docs/statistics.md#tube-stats-stats-tube-tube). |\n| **+** `stats-group \u003cname\u003e\\r\\n` | Statistics for a job group in YAML. See [Statistics Reference](docs/statistics.md#group-stats-stats-group-name). |\n| `list-tubes\\r\\n` | List all existing tubes in YAML. |\n| `list-tube-used\\r\\n` | Show the currently used tube. Returns `USING \u003ctube\u003e`. |\n| `list-tubes-watched\\r\\n` | List watched tubes in YAML. 
|\n| `drain\\r\\n` | Enter drain mode: rejects new `put` commands with `DRAINING` while allowing workers to finish existing jobs. Also triggered by `SIGUSR1`. |\n| **+** `undrain\\r\\n` | Exit drain mode: resumes accepting `put` commands. Returns `NOT_DRAINING`. |\n| `quit\\r\\n` | Close the connection. |\n\n## Performance\n\nTuber achieves throughput comparable to beanstalkd on standard workloads. Indicative numbers from a single-client benchmark on localhost (100k jobs, Apple M-series):\n\n| Scenario | PUT/s | Reserve+Delete/s |\n|---|---|---|\n| Small body, no WAL | ~34,000 | ~7,300 |\n| Small body, WAL | ~26,500 | ~6,300 |\n| 4KB body, no WAL | ~27,000 | ~7,300 |\n| 4KB body, WAL | ~18,000 | ~6,600 |\n\nThe batch API (`reserve-batch`, `delete-batch`) significantly improves throughput by amortising per-command overhead:\n\n| Scenario | Reserve+Delete/s |\n|---|---|\n| Individual reserve + delete | ~7,300 |\n| Batch reserve (1000) + individual delete | ~32,500 |\n| Batch reserve (1000) + batch delete (1000) | ~300,000 |\n\nResults will vary by hardware, network, and workload. Run your own benchmarks for production sizing.\n\n## Claude Code Skill\n\nThe `skill/` directory contains a [Claude Code skill](https://support.claude.com/en/articles/12512198-how-to-create-custom-skills) that teaches AI coding agents how to interact with Tuber (and beanstalkd) using `echo` and `nc`. It covers the full protocol with copy-paste examples.\n\nTo install it globally in Claude Code:\n\n```bash\nln -s \"$(pwd)/skill\" ~/.claude/skills/tuber\n```\n\n## License\n\nMIT — see [LICENSE](LICENSE).\n\nOriginally created by Keith Rarick and contributors. 
The original beanstalkd is licensed under the [MIT License](https://github.com/beanstalkd/beanstalkd/blob/master/LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftuberq%2Ftuber","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftuberq%2Ftuber","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftuberq%2Ftuber/lists"}