{"id":23155923,"url":"https://github.com/hyp3rd/go-worker","last_synced_at":"2026-04-14T23:01:14.760Z","repository":{"id":65552201,"uuid":"594558652","full_name":"hyp3rd/go-worker","owner":"hyp3rd","description":"`go-worker` provides a simple way to manage and execute tasks concurrently and prioritized, leveraging a `TaskManager` that spawns a pool of `workers`.","archived":false,"fork":false,"pushed_at":"2026-04-09T22:49:15.000Z","size":1256,"stargazers_count":6,"open_issues_count":0,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-10T00:28:09.564Z","etag":null,"topics":["concurrency","golang","parallel-programming","task-manager","task-runner","thread-pool","threading"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hyp3rd.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":["hyp3rd"]}},"created_at":"2023-01-28T23:03:08.000Z","updated_at":"2026-04-09T22:48:57.000Z","dependencies_parsed_at":"2024-06-20T15:33:27.177Z","dependency_job_id":"898981c2-5032-4c8c-937c-2911b074d6a9","html_url":"https://github.com/hyp3rd/go-worker","commit_stats":null,"previous_names":[],"tags_count":31,"template":false,"template_full_name":null,"purl":"pkg:github/hyp3rd/go-worker","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyp3rd%2Fgo-worker","tags_url":"https://repos.ecosyste.ms/api/v1/host
s/GitHub/repositories/hyp3rd%2Fgo-worker/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyp3rd%2Fgo-worker/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyp3rd%2Fgo-worker/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hyp3rd","download_url":"https://codeload.github.com/hyp3rd/go-worker/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyp3rd%2Fgo-worker/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31818840,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-14T18:05:02.291Z","status":"ssl_error","status_checked_at":"2026-04-14T18:05:01.765Z","response_time":153,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["concurrency","golang","parallel-programming","task-manager","task-runner","thread-pool","threading"],"created_at":"2024-12-17T21:11:57.497Z","updated_at":"2026-04-14T23:01:14.745Z","avatar_url":"https://github.com/hyp3rd.png","language":"Go","readme":"# go-worker\n\n[![Go](https://github.com/hyp3rd/go-worker/actions/workflows/go.yml/badge.svg)](https://github.com/hyp3rd/go-worker/actions/workflows/go.yml) [![CodeQL](https://github.com/hyp3rd/go-worker/actions/workflows/codeql.yml/badge.svg)](https://github.com/hyp3rd/go-worker/actions/workflows/codeql.yml) [![Go Report 
Card](https://goreportcard.com/badge/github.com/hyp3rd/go-worker)](https://goreportcard.com/report/github.com/hyp3rd/go-worker) [![Go Reference](https://pkg.go.dev/badge/github.com/hyp3rd/go-worker.svg)](https://pkg.go.dev/github.com/hyp3rd/go-worker) [![License: MPL 2.0](https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg)](https://opensource.org/licenses/MPL-2.0)\n\n`go-worker` provides a simple way to manage and execute prioritized tasks concurrently, backed by a `TaskManager` with a worker pool and a priority queue.\n\n## Breaking changes (January 2026)\n\n- `Stop()` removed. Use `StopGraceful(ctx)` or `StopNow()`.\n- Local result streaming uses `SubscribeResults(buffer)`; `GetResults()` is now a compatibility shim and the legacy local `StreamResults()` is removed (gRPC `StreamResults` remains).\n- `RegisterTasks` now returns an error.\n- `Task.Execute` replaces `Fn` in examples.\n- `NewGRPCServer` requires a handler map.\n- Rate limiting is deterministic: burst is `min(maxWorkers, maxTasks)` and `ExecuteTask` uses the shared limiter.\n- gRPC durable tasks use `RegisterDurableTasks` and the new `DurableTask` message.\n- When a durable backend is configured, use `RegisterDurableTask(s)` instead of `RegisterTask(s)`.\n- `DurableBackend` now requires `Extend` (lease renewal support for custom backends).\n\n## Features\n\n- Task prioritization: tasks are scheduled by priority.\n- Concurrent execution: tasks run in a worker pool with strict rate limiting.\n- Middleware: wrap the `TaskManager` for logging/metrics, etc.\n- Results: fan-out subscriptions via `SubscribeResults`.\n- Cancellation: cancel tasks before or during execution.\n- Retries: exponential backoff with capped delays.\n- Durability: optional Redis-backed durable task queue (at-least-once, lease-based).\n\n## Admin UI\n\nThe admin UI is a Next.js app that connects to the worker admin gateway over\nHTTP/JSON with mTLS. 
For local setup and environment variables, see:\n\n- `docs/admin-ui.md`\n- `PRD-admin-service.md` (API contract + gateway design)\n\nLocal stack:\n\n```bash\n./scripts/gen-admin-certs.sh\ndocker compose -f compose.admin.yaml up --build\n```\n\nJob event history can be persisted across restarts by configuring the worker\nservice file-backed store (`WORKER_JOB_EVENT_DIR`); see the admin UI docs.\n\n## Architecture\n\n```mermaid\nflowchart LR\n    Client[Client code] --\u003e|register tasks| TaskManager\n    TaskManager --\u003e Queue[Priority Queue]\n    Queue --\u003e|dispatch| Worker1[Worker]\n    Queue --\u003e|dispatch| WorkerN[Worker]\n    Worker1 --\u003e Results[Result Broadcaster]\n    WorkerN --\u003e Results\n```\n\n## gRPC Service\n\n`go-worker` exposes its functionality over gRPC through the `WorkerService`.\nThe service allows clients to register tasks, stream results, cancel running\ntasks and query their status.\n\n### Handlers and Payloads\n\nThe server registers handlers keyed by name. Each handler consists of a `Make` function that constructs the expected payload type, and a `Fn` function that executes the task logic using the unpacked payload.\n\nClients send a `Task` message containing a `name` and a serialized `payload` using `google.protobuf.Any`. The server automatically unpacks the `Any` payload into the correct type based on the registered handler and passes it to the corresponding function. 
For durable tasks, use `RegisterDurableTasks` with the `DurableTask` message (the payload is still an `Any`).\n\n```go\nhandlers := map[string]worker.HandlerSpec{\n    \"create_user\": {\n        Make: func() protoreflect.ProtoMessage { return \u0026workerpb.CreateUserPayload{} },\n        Fn: func(ctx context.Context, payload protoreflect.ProtoMessage) (any, error) {\n            p := payload.(*workerpb.CreateUserPayload)\n            _ = p // use p.Username, p.Email, etc. when creating the user\n            return \u0026workerpb.CreateUserResponse{UserId: \"1234\"}, nil\n        },\n    },\n}\n\nsrv := worker.NewGRPCServer(tm, handlers)\n```\n\nFor production, configure TLS credentials and interceptors (logging/auth) on the gRPC server; see `__examples/grpc` for a complete setup.\nFor a Redis-backed durable gRPC example, see `__examples/grpc_durable`.\n\nQueue selection for gRPC tasks is done via metadata (`worker.MetadataQueueKey`, `worker.MetadataWeightKey`):\n\n- `queue`: named queue (empty means `default`)\n- `weight`: integer weight (as string)\n\nSecurity defaults to follow in production:\n\n- Use TLS (prefer mTLS) for gRPC; the durable gRPC example uses insecure credentials for local demos only.\n- Scrub payloads and auth metadata from logs; log task IDs or correlation IDs instead of PII.\n- Implement auth via `WithGRPCAuth` and redact/validate tokens inside interceptors.\n\n### Typed handler registry (optional)\n\nFor compile-time payload checks in handlers, use the typed registry. 
It removes the need for payload type assertions inside your handler functions.\n\n```go\nregistry := worker.NewTypedHandlerRegistry()\n_ = worker.AddTypedHandler(registry, \"create_user\", worker.TypedHandlerSpec[*workerpb.CreateUserPayload]{\n    Make: func() *workerpb.CreateUserPayload { return \u0026workerpb.CreateUserPayload{} },\n    Fn: func(ctx context.Context, payload *workerpb.CreateUserPayload) (any, error) {\n        return \u0026workerpb.CreateUserResponse{UserId: \"1234\"}, nil\n    },\n})\n\nsrv := worker.NewGRPCServer(tm, registry.Handlers())\n```\n\n### Durable gRPC client (example)\n\nUse `RegisterDurableTasks` for persisted tasks (payload is still `Any`). Results stream is shared with non-durable tasks.\n\n```go\npayload, _ := anypb.New(\u0026workerpb.SendEmailPayload{\n    To:      \"ops@example.com\",\n    Subject: \"Hello durable gRPC\",\n    Body:    \"Persisted task\",\n})\n\nresp, err := client.RegisterDurableTasks(ctx, \u0026workerpb.RegisterDurableTasksRequest{\n    Tasks: []*workerpb.DurableTask{\n        {\n            Name:           \"send_email\",\n            Payload:        payload,\n            IdempotencyKey: \"durable:send_email:ops@example.com\",\n        },\n    },\n})\nif err != nil {\n    log.Fatal(err)\n}\n```\n\n### Authorization hook\n\nYou can enforce authentication/authorization at the gRPC boundary with `WithGRPCAuth`.\nReturn a gRPC status error to control the response code (e.g., `Unauthenticated` or `PermissionDenied`).\n\n```go\nauth := func(ctx context.Context, method string, _ any) error {\n md, _ := metadata.FromIncomingContext(ctx)\n values := md.Get(\"authorization\")\n if len(values) == 0 {\n  return status.Error(codes.Unauthenticated, \"missing token\")\n }\n\n token := strings.TrimSpace(strings.TrimPrefix(values[0], \"Bearer \"))\n if token != \"expected-token\" {\n  return status.Error(codes.Unauthenticated, \"missing or invalid token\")\n }\n\n return nil\n}\n\nsrv := worker.NewGRPCServer(tm, handlers, 
worker.WithGRPCAuth(auth))\n```\n\n**Note on deadlines:** When the client uses a stream context with a deadline, exceeding the deadline will terminate the stream but **does not cancel the tasks running on the server**. To properly handle cancellation, use separate contexts for task execution or cancel tasks explicitly.\n\n## API Example (gRPC)\n\n```go\ntm := worker.NewTaskManagerWithDefaults(context.Background())\nhandlers := map[string]worker.HandlerSpec{\n    \"create_user\": {\n        Make: func() protoreflect.ProtoMessage { return \u0026workerpb.CreateUserPayload{} },\n        Fn: func(ctx context.Context, payload protoreflect.ProtoMessage) (any, error) {\n            p := payload.(*workerpb.CreateUserPayload)\n            _ = p // use p.Username, p.Email, etc. when creating the user\n            return \u0026workerpb.CreateUserResponse{UserId: \"1234\"}, nil\n        },\n    },\n}\n\nsrv := worker.NewGRPCServer(tm, handlers)\n\ngs := grpc.NewServer()\nworkerpb.RegisterWorkerServiceServer(gs, srv)\n// listen and serve ...\n\nclient := workerpb.NewWorkerServiceClient(conn)\n\n// register a task with payload\npayload, err := anypb.New(\u0026workerpb.CreateUserPayload{\n    Username: \"newuser\",\n    Email:    \"newuser@example.com\",\n})\nif err != nil {\n    log.Fatal(err)\n}\n\n_, _ = client.RegisterTasks(ctx, \u0026workerpb.RegisterTasksRequest{\n    Tasks: []*workerpb.Task{\n        {\n            Name:           \"create_user\",\n            Payload:        payload,\n            CorrelationId:  uuid.NewString(),\n            IdempotencyKey: \"create_user:newuser@example.com\",\n            Metadata:       map[string]string{\"source\": \"api_example\", \"role\": \"admin\"},\n        },\n    },\n})\n\n// cancel by id\n_, _ = client.CancelTask(ctx, \u0026workerpb.CancelTaskRequest{Id: \"\u003ctask-id\u003e\"})\n\n// get task information\nres, _ := client.GetTask(ctx, \u0026workerpb.GetTaskRequest{Id: \"\u003ctask-id\u003e\"})\nfmt.Println(res.Status)\n```\n\n## API Usage Examples\n\n### Quick Start\n\n```go\ntm := 
worker.NewTaskManager(context.Background(), 2, 10, 5, 30*time.Second, time.Second, 3)\n\ntask := \u0026worker.Task{\n    ID:       uuid.New(),\n    Priority: 1,\n    Ctx:      context.Background(),\n    Execute:  func(ctx context.Context, _ ...any) (any, error) { return \"hello\", nil },\n}\n\nif err := tm.RegisterTask(context.Background(), task); err != nil {\n    log.Fatal(err)\n}\n\nresults, cancel := tm.SubscribeResults(1)\nres := \u003c-results\ncancel()\n\nfmt.Println(res.Result)\n```\n\n### Result backpressure\n\nBy default, full subscriber buffers drop new results. You can change the policy:\n\n```go\ntm.SetResultsDropPolicy(worker.DropOldest)\n```\n\n`GetResults()` remains as a compatibility shim and returns a channel with a default buffer.\nPrefer `SubscribeResults(buffer)` so you can control buffering and explicitly unsubscribe.\n\n### Initialization\n\nCreate a new `TaskManager` by calling the `NewTaskManager()` function with the following parameters:\n\n- `ctx` is the base context for the task manager (used for shutdown and derived task contexts)\n- `maxWorkers` is the number of workers to start. If \u003c= 0, it will default to the number of available CPUs\n- `maxTasks` is the maximum number of queued tasks, defaults to 10\n- `tasksPerSecond` is the rate limit of tasks that can be executed per second. If \u003c= 0, rate limiting is disabled\n  (the limiter uses a burst size of `min(maxWorkers, maxTasks)` for deterministic throttling)\n- `timeout` is the default timeout for tasks, defaults to 5 minutes\n- `retryDelay` is the default delay between retries, defaults to 1 second\n- `maxRetries` is the default maximum number of retries, defaults to 3 (0 disables retries)\n\n```go\ntm := worker.NewTaskManager(context.Background(), 4, 10, 5, 30*time.Second, 1*time.Second, 3)\n```\n\n### Durable backend (Redis)\n\nDurable tasks use a separate `DurableTask` type and a handler registry keyed by name.\nThe default encoding is protobuf via `ProtoDurableCodec`. 
When a durable backend is enabled,\n`RegisterTask`/`RegisterTasks` are disabled in favor of `RegisterDurableTask(s)`.\nSee `__examples/durable_redis` for a runnable example.\n\n```go\nclient, err := rueidis.NewClient(rueidis.ClientOption{\n    InitAddress: []string{\"127.0.0.1:6379\"},\n})\nif err != nil {\n    log.Fatal(err)\n}\ndefer client.Close()\n\nbackend, err := worker.NewRedisDurableBackend(client)\nif err != nil {\n    log.Fatal(err)\n}\n\nhandlers := map[string]worker.DurableHandlerSpec{\n    \"send_email\": {\n        Make: func() proto.Message { return \u0026workerpb.SendEmailRequest{} },\n        Fn: func(ctx context.Context, payload proto.Message) (any, error) {\n            req := payload.(*workerpb.SendEmailRequest)\n            _ = req // process the request (send the email using req.To, etc.)\n            return \u0026workerpb.SendEmailResponse{MessageId: \"msg-1\"}, nil\n        },\n    },\n}\n\ntm := worker.NewTaskManagerWithOptions(\n    context.Background(),\n    worker.WithDurableBackend(backend),\n    worker.WithDurableHandlers(handlers),\n)\n\nerr = tm.RegisterDurableTask(context.Background(), worker.DurableTask{\n    Handler: \"send_email\",\n    Message: \u0026workerpb.SendEmailRequest{To: \"ops@example.com\"},\n    Retries: 5,\n    Queue:   \"email\",\n    Weight:  2,\n})\nif err != nil {\n    log.Fatal(err)\n}\n```\n\nOr use the typed durable registry for compile-time checks:\n\n```go\ndurableRegistry := worker.NewTypedDurableRegistry()\n_ = worker.AddTypedDurableHandler(durableRegistry, \"send_email\", worker.TypedDurableHandlerSpec[*workerpb.SendEmailRequest]{\n    Make: func() *workerpb.SendEmailRequest { return \u0026workerpb.SendEmailRequest{} },\n    Fn: func(ctx context.Context, payload *workerpb.SendEmailRequest) (any, error) {\n        // process request\n        return \u0026workerpb.SendEmailResponse{MessageId: \"msg-1\"}, nil\n    },\n})\n\ntm := worker.NewTaskManagerWithOptions(\n    context.Background(),\n    worker.WithDurableBackend(backend),\n    
worker.WithDurableHandlers(durableRegistry.Handlers()),\n)\n```\n\nDefaults: lease is 30s, poll interval is 200ms, Redis dequeue batch is 50, and lease renewal is disabled (configurable via options).\nQueue weights for durable tasks can be configured with `WithRedisDurableQueueWeights`, and the default queue via `WithRedisDurableDefaultQueue`.\n\n### Scheduled jobs (cron)\n\nYou can register cron-based jobs that enqueue tasks on a schedule. Both 5-field and 6-field (seconds) cron expressions are supported.\n\n```go\ntm := worker.NewTaskManagerWithDefaults(context.Background())\n\nerr := tm.RegisterCronTask(context.Background(), \"hourly-report\", \"0 * * * *\", func(ctx context.Context) (*worker.Task, error) {\n return worker.NewTask(ctx, func(ctx context.Context, _ ...any) (any, error) {\n  // do work\n  return \"ok\", nil\n })\n})\nif err != nil {\n panic(err)\n}\n```\n\nFor durable backends, use:\n\n```go\n_ = tm.RegisterDurableCronTask(context.Background(), \"daily-email\", \"0 0 * * *\", func(ctx context.Context) (worker.DurableTask, error) {\n return worker.DurableTask{\n  Handler: \"send_email\",\n  Payload: []byte(\"...\"),\n }, nil\n})\n```\n\nOperational notes (durable Redis):\n\n- **Key hashing**: Redis Lua scripts touch multiple keys; for clustered Redis, all keys must share the same hash slot. The backend auto-wraps the prefix in `{}` to enforce this (e.g., `{go-worker}:ready`).\n- **DLQ**: Failed tasks are pushed to a dead-letter list (`{prefix}:dead`).\n- **DLQ replay**: Use the `workerctl durable dlq replay` command or the `__examples/durable_dlq_replay` utility (dry-run by default; use `--apply`/`-apply` to replay).\n- **Multi-node workers**: Multiple workers can safely dequeue from the same backend. 
Lease timeouts handle worker crashes, but tune `WithDurableLease` for your workload.\n- **Lease renewal**: enable `WithDurableLeaseRenewalInterval` for long-running tasks to extend leases while a task executes.\n- **Global coordination**: optional global rate limiting (`WithRedisDurableGlobalRateLimit`) and leader lock (`WithRedisDurableLeaderLock`) can limit dequeue rate or enforce a single active leader.\n- **Visibility**: Ready/processing queues live in per-queue sorted sets: `{prefix}:ready:\u003cqueue\u003e` and `{prefix}:processing:\u003cqueue\u003e`. Known queues are tracked in `{prefix}:queues`.\n- **Inspect utility**: `workerctl durable inspect` (or `__examples/durable_queue_inspect`) prints ready/processing/dead counts; use `--show-ids --queue=\u003cname\u003e` (or `-show-ids -queue=\u003cname\u003e`) to display IDs.\n\n### CLI tooling (workerctl)\n\nBuild the CLI:\n\n```bash\ngo build -o workerctl ./cmd/workerctl\n```\n\nInspect queues:\n\n```bash\n./workerctl durable inspect --redis-addr localhost:6380 --redis-password supersecret --redis-prefix go-worker --queue default --show-ids --peek 10\n```\n\nList queues:\n\n```bash\n./workerctl durable queues --with-counts\n```\n\nRequeue specific tasks by ID:\n\n```bash\n./workerctl durable retry --id 8c0f8b2d-0a4d-4a3b-9ad7-2d2a5b7f5d12 --apply\n```\n\nRequeue tasks from a source set (DLQ/ready/processing):\n\n```bash\n./workerctl durable retry --source dlq --limit 100 --apply\n./workerctl durable retry --source ready --from-queue default --limit 50 --apply\n```\n\nRequeue a queue directly (shortcut):\n\n```bash\n./workerctl durable requeue --queue default --limit 50 --apply\n```\n\nFetch a task by ID:\n\n```bash\n./workerctl durable get --id 8c0f8b2d-0a4d-4a3b-9ad7-2d2a5b7f5d12\n```\n\nEnqueue a durable task from JSON/YAML payload:\n\n```bash\n./workerctl durable enqueue --handler send_email --queue default --payload '{\"to\":\"ops@example.com\",\"subject\":\"Hello\",\"body\":\"Hi\"}' --apply\n./workerctl 
durable enqueue --handler send_email --payload-file payload.yaml --payload-format yaml --apply\n./workerctl durable enqueue --handler send_email --payload-b64 \"$(base64 -w0 payload.bin)\" --apply\n```\n\nNote: the payload is stored as raw bytes. JSON/YAML are encoded to JSON bytes. Make sure the bytes match your durable codec (default is protobuf).\n\nDelete a task (and optionally its hash):\n\n```bash\n./workerctl durable delete --id 8c0f8b2d-0a4d-4a3b-9ad7-2d2a5b7f5d12 --apply\n./workerctl durable delete --id 8c0f8b2d-0a4d-4a3b-9ad7-2d2a5b7f5d12 --delete-hash --apply\n```\n\nShow stats in JSON:\n\n```bash\n./workerctl durable stats --json\n./workerctl durable stats --watch 2s\n```\n\nPause/resume durable dequeue:\n\n```bash\n./workerctl durable pause --apply\n./workerctl durable resume --apply\n./workerctl durable paused\n```\n\nPurge queues (use with care):\n\n```bash\n./workerctl durable purge --ready --processing --queue default --apply\n```\n\nDump task metadata (JSON lines, no payloads):\n\n```bash\n./workerctl durable dump --queue default --ready --limit 100 \u003e dump.jsonl\n```\n\nExport/import queue snapshots (JSONL):\n\n```bash\n./workerctl durable snapshot export --out snapshot.jsonl --ready --processing --dlq\n./workerctl durable snapshot import --in snapshot.jsonl --apply\n```\n\nReplay DLQ items (dry-run by default):\n\n```bash\n./workerctl durable dlq replay --batch 100 --apply\n```\n\nUse `--tls` (and `--tls-insecure` if needed) for secure Redis connections.\n\nGenerate shell completion:\n\n```bash\n./workerctl completion zsh \u003e \"${fpath[1]}/_workerctl\"\n```\n\n### Multi-node coordination (durable Redis)\n\nDurable processing is **at-least-once**. When multiple nodes consume from the same Redis backend:\n\n- **Lease sizing**: set `WithDurableLease` longer than your worst-case task duration (plus buffer). 
If a task exceeds its lease, it can be requeued and run again on another node.\n- **Lease renewal (optional)**: set `WithDurableLeaseRenewalInterval` (less than the lease duration) to extend leases while tasks run.\n- **Idempotency**: enforce idempotency at the task level (idempotency key + handler-side dedupe) because duplicates are possible on retries and lease expiry.\n- **Throughput control**: worker count and polling interval are per node. If you need a **global** rate limit across nodes, use `WithRedisDurableGlobalRateLimit` or enforce it in the handler.\n- **Clock skew**: Redis uses server time for scores; keep node clocks in sync to avoid uneven dequeue/lease timing.\n- **Isolation**: use distinct prefixes per environment/region/tenant to avoid cross-talk.\n\nChecklist:\n\n- Set `WithDurableLease` above p99 task duration (plus buffer).\n- Enable `WithDurableLeaseRenewalInterval` for tasks that can exceed the lease duration.\n- Keep task handlers idempotent; always use idempotency keys for external side effects.\n- Tune `WithDurablePollInterval` based on desired responsiveness vs. 
Redis load.\n- Scale `WithMaxWorkers` per node based on CPU and downstream throughput.\n\nExample:\n\n```bash\ngo run __examples/durable_queue_inspect/main.go -redis-addr=localhost:6380 -redis-password=supersecret -redis-prefix=go-worker\ngo run __examples/durable_queue_inspect/main.go -redis-addr=localhost:6380 -redis-password=supersecret -redis-prefix=go-worker -queue=default -show-ids -peek=5\n```\n\nSample output:\n\n```shell\nqueue=default ready=3 processing=1\nready=3 processing=1 dead=0\nready IDs: 8c0f8b2d-0a4d-4a3b-9ad7-2d2a5b7f5d12, 9b18d5f2-3b7f-4d7a-9dd1-1bb1a3a56c55\n```\n\nDLQ replay example (dry-run by default):\n\n```bash\ngo run __examples/durable_dlq_replay/main.go -redis-addr=localhost:6380 -redis-password=supersecret -redis-prefix=go-worker -batch=100\ngo run __examples/durable_dlq_replay/main.go -redis-addr=localhost:6380 -redis-password=supersecret -redis-prefix=go-worker -batch=100 -apply\n```\n\nOptional retention can be configured to prevent unbounded task registry growth:\n\n```go\ntm.SetRetentionPolicy(worker.RetentionPolicy{\n    TTL:        24 * time.Hour,\n    MaxEntries: 100000,\n})\n```\n\nRetention applies only to terminal tasks (completed/failed/cancelled/etc). 
Running or queued tasks are never evicted.\nCleanup is best-effort: it runs on task completion and periodically when `TTL \u003e 0`.\nIf `CleanupInterval` is unset, the default interval is `clamp(TTL/2, 1s, 1m)`.\nIf `MaxEntries` is lower than the number of active tasks, the registry may exceed the limit until tasks finish.\n\nTask lifecycle hooks can be configured for structured logging or tracing:\n\n```go\ntm.SetHooks(worker.TaskHooks{\n    OnQueued: func(task *worker.Task) {\n        // log enqueue\n    },\n    OnStart: func(task *worker.Task) {\n        // log start\n    },\n    OnFinish: func(task *worker.Task, status worker.TaskStatus, _ any, err error) {\n        // log completion\n        _ = err\n        _ = status\n    },\n})\n```\n\nTracing hooks can be configured with a tracer implementation:\n\n```go\ntm.SetTracer(myTracer)\n```\n\nSee `__examples/tracing` for a minimal logger-based tracer.\nSee `__examples/otel_tracing` for OpenTelemetry tracing with a stdout exporter.\n\n### OpenTelemetry metrics\n\nTo export metrics with OpenTelemetry, configure a meter provider and pass it to the task manager:\n\n```go\nexporter, err := stdoutmetric.New(stdoutmetric.WithPrettyPrint())\nif err != nil {\n    log.Fatal(err)\n}\n\nreader := sdkmetric.NewPeriodicReader(exporter)\nmp := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))\ndefer func() {\n    _ = mp.Shutdown(context.Background())\n}()\n\nif err := tm.SetMeterProvider(mp); err != nil {\n    log.Fatal(err)\n}\n```\n\nSee `__examples/otel_metrics` for a complete runnable example.\nSee `__examples/otel_metrics_otlp` for an OTLP/HTTP exporter example.\n\nEmitted metrics:\n\n- `tasks_scheduled_total`\n- `tasks_running`\n- `tasks_completed_total`\n- `tasks_failed_total`\n- `tasks_cancelled_total`\n- `tasks_retried_total`\n- `results_dropped_total`\n- `queue_depth`\n- `task_latency_seconds`\n\n### Registering Tasks\n\nRegister new tasks by calling the `RegisterTasks()` method of the `TaskManager` struct and 
passing in a variadic number of tasks.\n\n```go\nid := uuid.New()\n\ntask := \u0026worker.Task{\n    ID:          id,\n    Name:        \"Some task\",\n    Description: \"Here goes the description of the task\",\n    Priority:    10,\n    Queue:       \"critical\",\n    Weight:      2,\n    Ctx:         context.Background(),\n    Execute: func(ctx context.Context, _ ...any) (any, error) {\n        time.Sleep(time.Second)\n        return fmt.Sprintf(\"task %s executed\", id), nil\n    },\n    Retries:    3,\n    RetryDelay: 2 * time.Second,\n}\n\ntask2 := \u0026worker.Task{\n    ID:       uuid.New(),\n    Priority: 10,\n    Queue:    \"default\",\n    Weight:   1,\n    Ctx:      context.Background(),\n    Execute:  func(ctx context.Context, _ ...any) (any, error) { return \"Hello, World!\", nil },\n}\n\nif err := tm.RegisterTasks(context.Background(), task, task2); err != nil {\n    log.Fatal(err)\n}\n```\n\nQueues and weights:\n\n- `Queue` groups tasks for scheduling. Empty means `default`.\n- `Weight` is a per-task scheduling hint within a queue (higher weight runs earlier among equal priorities).\n- Queue weights control inter-queue share via `WithQueueWeights`; change the default queue via `WithDefaultQueue`.\n\nFor gRPC, set `metadata[\"queue\"]` and `metadata[\"weight\"]` (string) on `Task`/`DurableTask`.\n\n### Scheduling Tasks\n\nSchedule tasks for later execution with `RunAt`, `RegisterTaskAt`, or `RegisterTaskAfter`.\n\n```go\ntask, _ := worker.NewTask(context.Background(), func(ctx context.Context, _ ...any) (any, error) {\n    return \"delayed\", nil\n})\n\n_ = tm.RegisterTaskAt(context.Background(), task, time.Now().Add(30*time.Second))\n// or\n_ = tm.RegisterTaskAfter(context.Background(), task, 30*time.Second)\n```\n\nDurable tasks can also be delayed by setting `RunAt` before `RegisterDurableTask`.\n\n### Stopping the Task Manager\n\nUse `StopGraceful` to stop accepting new tasks and wait for completion, or `StopNow` to cancel tasks 
immediately.\n\n```go\nctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)\ndefer cancel()\n\n_ = tm.StopGraceful(ctx)\n// or\n// tm.StopNow()\n```\n\n### Results\n\nSubscribe to results with a dedicated channel per subscriber.\n\n```go\nresults, cancel := tm.SubscribeResults(10)\n\nctx, cancelWait := context.WithTimeout(context.Background(), 5*time.Second)\ndefer cancelWait()\n\n_ = tm.Wait(ctx)\ncancel()\n\nfor res := range results {\n    fmt.Println(res)\n}\n```\n\n### Cancellation\n\nYou can cancel a `Task` by calling the `CancelTask()` method of the `TaskManager` struct and passing in the task ID as a parameter.\n\n```go\n_ = tm.CancelTask(task.ID)\n```\n\nYou can cancel all tasks by calling the `CancelAll()` method of the `TaskManager` struct.\n\n```go\ntm.CancelAll()\n```\n\n### Middleware\n\nYou can apply middleware to the `TaskManager` by calling the `RegisterMiddleware()` function and passing in the `TaskManager` and the middleware functions.\n\n```go\nsrv := worker.RegisterMiddleware[worker.Service](tm,\n    func(next worker.Service) worker.Service {\n        return middleware.NewLoggerMiddleware(next, logger)\n    },\n)\n```\n\n### Example\n\n```go\npackage main\n\nimport (\n    \"context\"\n    \"fmt\"\n    \"time\"\n\n    \"github.com/google/uuid\"\n    worker \"github.com/hyp3rd/go-worker\"\n    \"github.com/hyp3rd/go-worker/middleware\"\n)\n\nfunc main() {\n    tm := worker.NewTaskManager(context.Background(), 4, 10, 5, 30*time.Second, 1*time.Second, 3)\n\n    var srv worker.Service = worker.RegisterMiddleware[worker.Service](tm,\n        func(next worker.Service) worker.Service {\n            return middleware.NewLoggerMiddleware(next, middleware.DefaultLogger())\n        },\n    )\n\n    task := \u0026worker.Task{\n        ID:       uuid.New(),\n        Priority: 1,\n        Ctx:      context.Background(),\n        Execute: func(ctx context.Context, _ ...any) (any, error) {\n            return 2 + 5, nil\n        },\n    }\n\n  
  _ = srv.RegisterTasks(context.Background(), task)\n\n    results, cancel := srv.SubscribeResults(10)\n    defer cancel()\n\n    ctx, cancelWait := context.WithTimeout(context.Background(), 5*time.Second)\n    defer cancelWait()\n    _ = srv.Wait(ctx)\n\n    for res := range results {\n        fmt.Println(res)\n    }\n}\n```\n\n## Versioning\n\nThis project follows [Semantic Versioning](https://semver.org/).\n\n## Contribution Guidelines\n\nWe welcome contributions! Fork the repository, create a feature branch, run the linters and tests, then open a pull request.\n\n### Feature Requests\n\nTo propose new ideas, open an issue using the *Feature request* template.\n\n### Newcomer-Friendly Issues\n\nIssues labeled `good first issue` or `help wanted` are ideal starting points for new contributors.\n\n## Release Notes\n\nSee [CHANGELOG](CHANGELOG.md) for the history of released versions.\n\n## License\n\nThis project is licensed under the Mozilla Public License 2.0 - see the [LICENSE](LICENSE) file for details.\n","funding_links":["https://github.com/sponsors/hyp3rd"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhyp3rd%2Fgo-worker","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhyp3rd%2Fgo-worker","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhyp3rd%2Fgo-worker/lists"}