{"id":31377510,"url":"https://github.com/goptics/varmq","last_synced_at":"2026-01-16T20:35:11.225Z","repository":{"id":281111180,"uuid":"937699990","full_name":"goptics/varmq","owner":"goptics","description":"A Simplest Storage-Agnostic and Zero-dep Message Queue for Your Concurrent Go Program","archived":false,"fork":false,"pushed_at":"2026-01-01T13:36:24.000Z","size":78077,"stargazers_count":175,"open_issues_count":2,"forks_count":12,"subscribers_count":2,"default_branch":"main","last_synced_at":"2026-01-14T23:27:39.602Z","etag":null,"topics":["concurrency","distrubted-systems","go","goroutine","goroutine-pool","hacktoberfest","message-queue","persistence","pool","priority-queue","queue","varmq","worker","worker-pool"],"latest_commit_sha":null,"homepage":"https://deepwiki.com/goptics/varmq","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/goptics.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-02-23T17:39:04.000Z","updated_at":"2026-01-01T13:36:28.000Z","dependencies_parsed_at":null,"dependency_job_id":"02e26c09-3876-4a17-abd3-ca2722d39563","html_url":"https://github.com/goptics/varmq","commit_stats":null,"previous_names":["fahimfaisaal/gocq","goptics/varmq"],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/goptics/varmq","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/goptics%2Fvarmq","tags_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/goptics%2Fvarmq/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/goptics%2Fvarmq/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/goptics%2Fvarmq/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/goptics","download_url":"https://codeload.github.com/goptics/varmq/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/goptics%2Fvarmq/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28482267,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-16T11:59:17.896Z","status":"ssl_error","status_checked_at":"2026-01-16T11:55:55.838Z","response_time":107,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["concurrency","distrubted-systems","go","goroutine","goroutine-pool","hacktoberfest","message-queue","persistence","pool","priority-queue","queue","varmq","worker","worker-pool"],"created_at":"2025-09-28T05:00:48.004Z","updated_at":"2026-01-16T20:35:11.217Z","avatar_url":"https://github.com/goptics.png","language":"Go","readme":"# VarMQ\n\n[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge-flat.svg)](https://github.com/avelino/awesome-go?tab=readme-ov-file#messaging)\n[![Go 
Reference](https://img.shields.io/badge/go-pkg-00ADD8.svg?logo=go)](https://pkg.go.dev/github.com/goptics/varmq)\n[![Go Report Card](https://goreportcard.com/badge/github.com/goptics/varmq)](https://goreportcard.com/report/github.com/goptics/varmq)\n[![CI](https://github.com/goptics/varmq/actions/workflows/ci.yml/badge.svg)](https://github.com/goptics/varmq/actions/workflows/ci.yml)\n[![Codecov](https://codecov.io/gh/goptics/varmq/branch/main/graph/badge.svg)](https://codecov.io/gh/goptics/varmq)\n[![Go Version](https://img.shields.io/badge/Go-1.24+-00ADD8?style=flat-square\u0026logo=go)](https://golang.org/doc/devel/release.html)\n[![License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](LICENSE)\n\nA high-performance message queue and pool system for Go that simplifies concurrent task processing using a [worker pool](#the-concurrency-architecture). Through Go generics, it provides type safety without sacrificing performance.\n\nWith `VarMQ`, you can process messages asynchronously, handle errors properly, store data persistently, and scale across systems using [adapters](#built-in-adapters), all through a clean, intuitive API that feels natural to Go developers.\n\n## ✨ Features\n\n- **⚡ High performance**: Optimized for high throughput with minimal overhead, even under heavy load. 
[see benchmarks](#benchmarks)\n- **🛠️ Multiple queue types**:\n  - Standard queues for in-memory processing\n  - Priority queues for importance-based ordering\n  - Persistent queues for durability across restarts\n  - Distributed queues for processing across multiple systems\n- **🧩 Worker abstractions**:\n  - `NewWorker` - Fire-and-forget operations (most performant)\n  - `NewErrWorker` - Returns only an error (when a result isn't needed)\n  - `NewResultWorker` - Returns a result and an error\n- **🚦 Concurrency control**: Fine-grained control over worker pool size, dynamic tuning, and idle worker management\n- **🧬 Multi Queue Binding**: Bind multiple queues to a single worker\n- **💾 Persistence**: Support for durable storage through adapter interfaces\n- **🌐 Distribution**: Scale processing across multiple instances via adapter interfaces\n- **🧩 Extensible**: Build your own storage adapters by implementing VarMQ's [internal queue interfaces](./assets/diagrams/interface.drawio.png)\n\n## Quick Start\n\n### Installation\n\n```bash\ngo get github.com/goptics/varmq\n```\n\n### Basic Usage\n\n```go\npackage main\n\nimport (\n    \"fmt\"\n    \"time\"\n\n    \"github.com/goptics/varmq\"\n)\n\nfunc main() {\n  worker := varmq.NewWorker(func(j varmq.Job[int]) {\n    fmt.Printf(\"Processing %d\\n\", j.Data())\n    time.Sleep(500 * time.Millisecond)\n  }, 10) // with concurrency 10\n  defer worker.WaitUntilFinished()\n  queue := worker.BindQueue()\n\n  for i := range 100 {\n    queue.Add(i)\n  }\n}\n```\n\n↗️ **[Run it on Playground](https://go.dev/play/p/XugpmYb9Dal)**\n\n### Priority Queue\n\nYou can use a priority queue to process jobs in order of importance. 
`Lower number = higher priority`\n\n```go\n// just bind priority queue\nqueue := worker.BindPriorityQueue()\n\n// add jobs to priority queue\nfor i := range 10 {\n    queue.Add(i, i%2) // prioritize even tasks\n}\n```\n\n↗️ **[Run it on Playground](https://go.dev/play/p/w_RuYKv-VxB)**\n\n## 💡 Highlighted Features\n\n### Persistent and Distributed Queues\n\nVarMQ supports both persistent and distributed queue processing through adapter interfaces:\n\n- **Persistent Queues**: Store jobs durably so they survive program restarts\n- **Distributed Queues**: Process jobs across multiple systems\n\nUsage is simple:\n\n```go\n// For persistent queues (with any IPersistentQueue adapter)\nqueue := worker.WithPersistentQueue(persistentQueueAdapter)\n\n// For distributed queues (with any IDistributedQueue adapter)\nqueue := worker.WithDistributedQueue(distributedQueueAdapter)\n```\n\nSee complete working examples in the [examples directory](./examples):\n\n- [Persistent Queue Example (SQLite)](./examples/sqlite-persistent)\n- [Persistent Queue Example (Redis)](./examples/redis-persistent)\n- [Distributed Queue Example (Redis)](./examples/redis-distributed)\n\nCreate your own adapters by implementing the `IPersistentQueue` or `IDistributedQueue` interfaces.\n\n\u003e [!Note]\n\u003e Before testing examples, make sure to start the Redis server using `docker compose up -d`.\n\n#### Built-in adapters\n\n- ⚡ Redis: [redisq](https://github.com/goptics/redisq)\n- 🗃️ SQLite: [sqliteq](https://github.com/goptics/sqliteq)\n- 🦆 DuckDB: [duckq](https://github.com/goptics/duckq)\n- 🐘 PostgreSQL: 🔄 Upcoming\n\n### Multi Queue Binds\n\nBind multiple queues to a single worker, enabling efficient processing of jobs from different sources with configurable strategies. The worker supports three strategies:\n\n1. **RoundRobin** (default - cycles through queues equally)\n2. **MaxLen** (prioritizes queues with more jobs)\n3. 
**MinLen** (prioritizes queues with fewer jobs)\n\n```go\nworker := varmq.NewWorker(func(j varmq.Job[string]) {\n  fmt.Println(\"Processing:\", j.Data())\n  time.Sleep(500 * time.Millisecond) // Simulate work\n}) // change the strategy using varmq.WithStrategy\ndefer worker.WaitUntilFinished()\n\n// Bind two standard queues and one priority queue\nq1 := worker.BindQueue()\nq2 := worker.BindQueue()\npq := worker.BindPriorityQueue()\n\nfor i := range 10 {\n  q1.Add(fmt.Sprintf(\"Task queue 1 %d\", i))\n}\n\nfor i := range 15 {\n  q2.Add(fmt.Sprintf(\"Task queue 2 %d\", i))\n}\n\nfor i := range 10 {\n  pq.Add(fmt.Sprintf(\"Task priority queue %d\", i), i%2) // prioritize even tasks\n}\n```\n\n↗️ **[Run it on Playground](https://go.dev/play/p/_j_ZDLZqvtX)**\n\nThe worker will process jobs from all queues in a `round-robin` fashion.\n\n### Result and Error Worker\n\nVarMQ provides `NewResultWorker`, which returns both a result and an error for each job processed. This is useful when you need to handle both success and failure cases.\n\n```go\nworker := varmq.NewResultWorker(func(j varmq.Job[string]) (int, error) {\n fmt.Println(\"Processing:\", j.Data())\n time.Sleep(500 * time.Millisecond) // Simulate work\n data := j.Data()\n\n if data == \"error\" {\n  return 0, errors.New(\"error occurred\")\n }\n\n return len(data), nil\n})\ndefer worker.WaitUntilFinished()\nqueue := worker.BindQueue()\n\n// Add jobs to the queue (non-blocking)\nif job, ok := queue.Add(\"The length of this string is 31\"); ok {\n fmt.Println(\"Job 1 added to queue.\")\n\n go func() {\n  result, _ := job.Result()\n  fmt.Println(\"Result:\", result)\n }()\n}\n\nif job, ok := queue.Add(\"error\"); ok {\n fmt.Println(\"Job 2 added to queue.\")\n\n go func() {\n  _, err := job.Result()\n  fmt.Println(\"Result:\", err)\n }()\n}\n```\n\n↗️ **[Run it on Playground](https://go.dev/play/p/W8Pi_QrzTHe)**\n\n`NewErrWorker` is similar to `NewResultWorker` but returns only an error.\n\n### Function Helpers\n\nVarMQ provides helper 
functions that enable direct function submission, similar to the `Submit()` pattern in other pool packages like [Pond](https://github.com/alitto/pond) or [Ants](https://github.com/panjf2000/ants):\n\n- **`Func()`**: For basic functions with no return values - use with `NewWorker`\n- **`ErrFunc()`**: For functions that return errors - use with `NewErrWorker`\n- **`ResultFunc[R]()`**: For functions that return a result and error - use with `NewResultWorker`\n\n```go\nworker := varmq.NewWorker(varmq.Func(), 10)\ndefer worker.WaitUntilFinished()\n\nqueue := worker.BindQueue()\n\nfor i := range 100 {\n    queue.Add(func() {\n        time.Sleep(500 * time.Millisecond)\n        fmt.Println(\"Processing\", i)\n    })\n}\n```\n\n↗️ **[Run it on Playground](https://go.dev/play/p/YO4vOu3sg9f)**\n\n\u003e [!Important]\n\u003e Function helpers don't support persistence or distribution since functions cannot be serialized.\n\n## Benchmarks\n\n```text\ngoos: linux\ngoarch: amd64\npkg: github.com/goptics/varmq\ncpu: 13th Gen Intel(R) Core(TM) i7-13700\n```\n\n### `Add` Operation\n\nCommand: `go test -run=^$ -benchmem -bench '^(BenchmarkAdd)$' -cpu=1`\n\n\u003e Why use `-cpu=1`? 
Since the benchmark doesn’t test with more than 1 concurrent worker, a single CPU is ideal to accurately measure performance.\n\n| Worker Type      | Queue Type     | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) |\n| ---------------- | -------------- | ------------ | ------------- | ----------------------- |\n| **Worker**       | Queue          | 918.6        | 128           | 3                       |\n|                  | Priority       | 952.7        | 144           | 4                       |\n| **ErrWorker**    | ErrQueue       | 1017         | 305           | 6                       |\n|                  | ErrPriority    | 1006         | 320           | 7                       |\n| **ResultWorker** | ResultQueue    | 1026         | 353           | 6                       |\n|                  | ResultPriority | 1039         | 368           | 7                       |\n\n### `AddAll` Operation\n\nCommand: `go test -run=^$ -benchmem -bench '^(BenchmarkAddAll)$' -cpu=1`\n\n| Worker Type      | Queue Type     | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) |\n| ---------------- | -------------- | ------------ | ------------- | ----------------------- |\n| **Worker**       | Queue          | 635,186      | 146,841       | 4,002                   |\n|                  | Priority       | 755,276      | 162,144       | 5,002                   |\n| **ErrWorker**    | ErrQueue       | 673,912      | 171,090       | 4,505                   |\n|                  | ErrPriority    | 766,043      | 186,663       | 5,505                   |\n| **ResultWorker** | ResultQueue    | 675,420      | 187,897       | 4,005                   |\n|                  | ResultPriority | 777,680      | 203,263       | 5,005                   |\n\n\u003e [!Note]\n\u003e\n\u003e `AddAll` benchmarks use a batch of **1000 items** per call. The reported numbers (`ns/op`, `B/op`, `allocs/op`) are totals for the whole batch. For per-item values, divide each by 1000.  
\n\u003e e.g. for default `Queue`, the average time per item is approximately **635ns**.\n\nWhy is `AddAll` faster than individual `Add` calls? Here's what makes the difference:\n\n1. **Batch Processing**: Uses a single group job to process multiple items, reducing per-item overhead\n2. **Shared Resources**: Utilizes a single result channel for all items in the batch\n\n### Charts\n\n\u003ctable\u003e\n\u003ctr\u003e\n  \u003cth\u003eMetric\u003c/th\u003e\n  \u003cth\u003e\u003ccode\u003eAdd\u003c/code\u003e Operation\u003c/th\u003e\n  \u003cth\u003e\u003ccode\u003eAddAll\u003c/code\u003e Operation\u003c/th\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n  \u003ctd\u003e\u003cstrong\u003eExecution Time\u003c/strong\u003e\u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eTime (ns/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/add_exe_bench_chart.png\" alt=\"VarMQ Add/Execute Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eTime (ms/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/addall_exe_bench_chart.png\" alt=\"VarMQ AddAll/Execute Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n  \u003ctd\u003e\u003cstrong\u003eMemory Usage\u003c/strong\u003e\u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eMemory (B/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/add_mem_bench_chart.png\" alt=\"VarMQ Add/Memory Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eMemory (KB/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/addall_mem_bench_chart.png\" alt=\"VarMQ 
AddAll/Memory Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n  \u003ctd\u003e\u003cstrong\u003eAllocations\u003c/strong\u003e\u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eAllocations (allocs/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/add_alloc_bench_chart.png\" alt=\"VarMQ Add/Allocations Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n  \u003ctd\u003e\n    \u003cdetails\u003e\n      \u003csummary\u003e\u003cstrong\u003eAllocations (allocs/op)\u003c/strong\u003e\u003c/summary\u003e\n      \u003cimg src=\"assets/bench-charts/addall_alloc_bench_chart.png\" alt=\"VarMQ AddAll/Allocations Benchmark Chart\"\u003e\n    \u003c/details\u003e\n  \u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\nChart images were generated using **[Vizb](https://github.com/goptics/vizb)**.\n\n### Comparison with Other Packages\n\nWe conducted comprehensive benchmarking between VarMQ and [Pond v2](https://github.com/alitto/pond), as both packages provide similar worker pool functionality. 
While VarMQ draws inspiration from some of Pond's design patterns, it offers unique advantages in queue management and persistence capabilities.\n\n**Key Differences:**\n\n- **Queue Types**: VarMQ provides multiple queue variants (standard, priority, persistent, distributed) vs Pond's single pool type\n- **Multi-Queue Management**: VarMQ supports binding multiple queues to a single worker with configurable strategies (RoundRobin, MaxLen, MinLen)\n\nFor detailed performance comparisons and benchmarking results, visit:\n\n- 📊 **[Benchmark Repository](https://github.com/goptics/varmq-benchmarks)** - Complete benchmark suite\n- 📈 **[Interactive Charts](https://varmq-benchmarks.netlify.app/)** - Visual performance comparisons\n\n## API Reference\n\nFor detailed API documentation, see the **[API Reference](./docs/API_REFERENCE.md)**.\n\n## The Concurrency Architecture\n\nVarMQ's concurrency model is built around a smart event loop that keeps everything running smoothly.\n\nThe event loop continuously monitors for pending jobs in queues and available workers in the pool. When both conditions are met, jobs get distributed to workers instantly. When there's no work to distribute, the system enters a low-power wait state.\n\nWorkers operate independently - they process jobs and immediately signal back when they're ready for more work. This triggers the event loop to check for new jobs and distribute them right away.\n\nThe system handles worker lifecycle automatically. Idle workers either stay in the pool or get cleaned up based on your configuration, so you never waste resources or run short on capacity.\n\n![varmq architecture](./assets/diagrams/varmq-architecture.png)\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=goptics/varmq\u0026type=Date)](https://www.star-history.com/#goptics/varmq\u0026Date)\n\n## Contributing\n\nContributions are welcome! 
Please feel free to submit a Pull Request or open an issue.\n\nPlease note that this project has a [Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project, you agree to abide by its terms.\n","funding_links":[],"categories":["Messaging","消息"],"sub_categories":["Search and Analytic Databases","检索及分析资料库"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoptics%2Fvarmq","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoptics%2Fvarmq","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoptics%2Fvarmq/lists"}