{"id":17219751,"url":"https://github.com/hdt3213/delayqueue","last_synced_at":"2025-04-09T10:05:06.518Z","repository":{"id":43440832,"uuid":"467144026","full_name":"HDT3213/delayqueue","owner":"HDT3213","description":"Simple, reliable, installation-free distributed delayed delivery message queue in Go. 简单、可靠、免安装的分布式延时投递消息队列","archived":false,"fork":false,"pushed_at":"2024-10-04T07:31:51.000Z","size":83,"stargazers_count":299,"open_issues_count":1,"forks_count":54,"subscribers_count":13,"default_branch":"master","last_synced_at":"2024-10-15T03:52:04.488Z","etag":null,"topics":["delay-queue","delayed-job","golang","message-queue","redis"],"latest_commit_sha":null,"homepage":"https://www.cnblogs.com/Finley/p/16400287.html","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/HDT3213.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-03-07T15:09:38.000Z","updated_at":"2024-10-04T07:28:48.000Z","dependencies_parsed_at":"2024-10-25T19:13:41.531Z","dependency_job_id":"cee5a7b8-4c29-402f-8880-2206041b3215","html_url":"https://github.com/HDT3213/delayqueue","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HDT3213%2Fdelayqueue","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HDT3213%2Fdelayqueue/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/HDT3213%2Fdelayqueue/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposito
ries/HDT3213%2Fdelayqueue/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/HDT3213","download_url":"https://codeload.github.com/HDT3213/delayqueue/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248018060,"owners_count":21034048,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["delay-queue","delayed-job","golang","message-queue","redis"],"created_at":"2024-10-15T03:50:40.184Z","updated_at":"2025-04-09T10:05:06.487Z","avatar_url":"https://github.com/HDT3213.png","language":"Go","funding_links":[],"categories":[],"sub_categories":[],"readme":"# DelayQueue\n\n![license](https://img.shields.io/github/license/HDT3213/delayqueue)\n![Build Status](https://github.com/hdt3213/delayqueue/actions/workflows/coverall.yml/badge.svg)\n[![Coverage Status](https://coveralls.io/repos/github/HDT3213/delayqueue/badge.svg?branch=master)](https://coveralls.io/github/HDT3213/delayqueue?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/HDT3213/delayqueue)](https://goreportcard.com/report/github.com/HDT3213/delayqueue)\n[![Go Reference](https://pkg.go.dev/badge/github.com/hdt3213/delayqueue.svg)](https://pkg.go.dev/github.com/hdt3213/delayqueue)\n\n[中文版](https://github.com/HDT3213/delayqueue/blob/master/README_CN.md)\n\nDelayQueue is a message queue supporting delayed/scheduled delivery based on redis. 
It is designed to be reliable, scalable, and easy to get started with.\n\nCore Advantages:\n\n- Guaranteed at-least-once consumption\n- Automatic retry of failed messages\n- Works out of the box: nothing to configure, nothing to deploy; a Redis instance is all you need\n- Natively adapted to distributed environments: messages are processed concurrently on multiple machines. Workers can be added, removed, or migrated at any time\n- Supports Redis Cluster and the Redis clusters of most cloud service providers; see the [Cluster](./README.md#Cluster) chapter\n- Easy-to-use monitoring data exporter; see [Monitoring](./README.md#Monitoring)\n\n## Install\n\nDelayQueue requires a Go version with module support. Run the following command in a project with a go.mod file:\n\n```bash\ngo get github.com/hdt3213/delayqueue\n```\n\n\u003e If you are using `github.com/go-redis/redis/v8`, please use `go get github.com/hdt3213/delayqueue@redisv8`\n\n## Get Started\n\n```go\npackage main\n\nimport (\n\t\"github.com/redis/go-redis/v9\"\n\t\"github.com/hdt3213/delayqueue\"\n\t\"strconv\"\n\t\"time\"\n)\n\nfunc main() {\n\tredisCli := redis.NewClient(\u0026redis.Options{\n\t\tAddr: \"127.0.0.1:6379\",\n\t})\n\tqueue := delayqueue.NewQueue(\"example\", redisCli, func(payload string) bool {\n\t\t// callback returns true to confirm successful consumption.\n\t\t// If callback returns false or does not return within maxConsumeDuration, DelayQueue will re-deliver this message\n\t\treturn true\n\t}).WithConcurrent(4) // set the number of concurrent consumers\n\t// send delay messages\n\tfor i := 0; i \u003c 10; i++ {\n\t\t_, err := queue.SendDelayMsgV2(strconv.Itoa(i), time.Hour, delayqueue.WithRetryCount(3))\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}\n\t// send scheduled messages\n\tfor i := 0; i \u003c 10; i++ {\n\t\t_, err := queue.SendScheduleMsgV2(strconv.Itoa(i), time.Now().Add(time.Hour))\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}\n\t// start consuming\n\tdone := queue.StartConsume()\n\t\u003c-done\n}\n```\n\n\u003e 
`SendScheduleMsgV2` (`SendDelayMsgV2`) is fully compatible with `SendScheduleMsg` (`SendDelayMsg`)\n\n\u003e Please note that redis/v8 is not compatible with Redis Cluster 7.x. [detail](https://github.com/redis/go-redis/issues/2085)\n\n\u003e If you are using a Redis client other than go-redis, you can wrap your client in the [RedisCli](https://pkg.go.dev/github.com/hdt3213/delayqueue#RedisCli) interface\n\n\u003e If you don't want to set the callback during initialization, you can use the func `WithCallback`.\n\n## Producer/Consumer Distributed Deployment\n\nBy default, delayqueue instances can be both producers and consumers.\n\nIf your program only needs to produce messages and the consumers are deployed elsewhere, `delayqueue.NewPublisher` is a good option for you.\n\n```go\nfunc consumer() {\n\tqueue := NewQueue(\"test\", redisCli, cb)\n\tqueue.StartConsume()\n}\n\nfunc producer() {\n\tpublisher := NewPublisher(\"test\", redisCli)\n\tpublisher.SendDelayMsg(\"hello\", time.Minute)\n}\n```\n\n## Intercept/Delete Messages\n\n```go\nmsg, err := queue.SendScheduleMsgV2(\"hello\", time.Now().Add(time.Second))\nif err != nil {\n\tpanic(err)\n}\nresult, err := queue.TryIntercept(msg)\nif err != nil {\n\tpanic(err)\n}\nif result.Intercepted {\n\tprintln(\"interception success!\")\n} else {\n\tprintln(\"interception failed, message has been consumed!\")\n}\n```\n\n`SendScheduleMsgV2` and `SendDelayMsgV2` return a structure containing message tracking information. Pass it to `TryIntercept` to attempt to intercept consumption of the message.\n\nIf the message is pending or waiting to be consumed, the interception will succeed. If the message has already been consumed or is awaiting retry, the interception will fail, but `TryIntercept` will prevent subsequent retries.\n\n`TryIntercept` returns an `InterceptResult`, whose `Intercepted` field indicates whether the interception succeeded.\n\n## Options\n\n### Consume Function\n\n```go\nfunc (q *DelayQueue)WithCallback(callback CallbackFunc) 
*DelayQueue\n```\n\nWithCallback sets the callback used by the queue to receive and consume messages.\nThe callback returns true to confirm successful consumption, or false to re-deliver the message.\n\nIf no callback is set, StartConsume will panic.\n\n```go\nqueue := NewQueue(\"test\", redisCli)\nqueue.WithCallback(func(payload string) bool {\n\treturn true\n})\n```\n\n### Logger\n\n```go\nfunc (q *DelayQueue)WithLogger(logger Logger) *DelayQueue\n```\n\nWithLogger customizes the logger for the queue. The logger should implement the following interface:\n\n```go\ntype Logger interface {\n\tPrintf(format string, v ...interface{})\n}\n```\n\n### Concurrent\n\n```go\nfunc (q *DelayQueue)WithConcurrent(c uint) *DelayQueue\n```\n\nWithConcurrent sets the number of concurrent consumers.\n\n### Polling Interval\n\n```go\nfunc (q *DelayQueue)WithFetchInterval(d time.Duration) *DelayQueue\n```\n\nWithFetchInterval customizes the interval at which consumers fetch messages from Redis.\n\n### Timeout\n\n```go\nfunc (q *DelayQueue)WithMaxConsumeDuration(d time.Duration) *DelayQueue\n```\n\nWithMaxConsumeDuration customizes the maximum consume duration.\n\nIf no acknowledgement is received within WithMaxConsumeDuration after message delivery, DelayQueue will try to deliver this message again.\n\n### Max Processing Limit\n\n```go\nfunc (q *DelayQueue)WithFetchLimit(limit uint) *DelayQueue\n```\n\nWithFetchLimit limits the maximum number of unacknowledged (processing) messages.\n\n### Hash Tag\n\n```go\nUseHashTagKey()\n```\n\nUseHashTagKey adds a hash tag to Redis keys to ensure all keys of this queue are allocated to the same hash slot.\n\nIf you are using Codis/AliyunRedisCluster/TencentCloudRedisCluster, you should add this option to NewQueue: `NewQueue(\"test\", redisCli, cb, UseHashTagKey())`. This option cannot be changed after the DelayQueue has been created.\n\nWARNING! 
Changing (adding or removing) this option will cause DelayQueue to fail to read existing data in Redis.\n\n\u003e See more: https://redis.io/docs/reference/cluster-spec/#hash-tags\n\n### Default Retry Count\n\n```go\nWithDefaultRetryCount(count uint) *DelayQueue\n```\n\nWithDefaultRetryCount customizes the maximum number of retries; it affects all messages in this queue.\n\nUse WithRetryCount with DelayQueue.SendScheduleMsg or DelayQueue.SendDelayMsg to specify the retry count for a particular message:\n\n```go\nqueue.SendDelayMsg(msg, time.Hour, delayqueue.WithRetryCount(3))\n```\n\n### Nack Redelivery Delay\n\n```go\nWithNackRedeliveryDelay(d time.Duration) *DelayQueue\n```\n\nWithNackRedeliveryDelay customizes the interval between a nack (the callback returning false) and redelivery.\nIf consumption exceeds the deadline, however, the message will be redelivered immediately.\n\n### Script Preload\n\n```go\n(q *DelayQueue) WithScriptPreload(flag bool) *DelayQueue\n```\n\nWithScriptPreload(true) makes DelayQueue preload scripts and invoke them with EvalSha to reduce communication costs. WithScriptPreload(false) makes DelayQueue run scripts with the Eval command. Preloading with EvalSha is the default.\n\n### Customize Prefix\n\n```go\nqueue := delayqueue.NewQueue(\"example\", redisCli, callback, UseCustomPrefix(\"MyPrefix\"))\n```\n\nAll keys of a delayqueue share the same prefix, `dp` by default. If you want to modify the prefix, you can use `UseCustomPrefix`. 
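\n\nSince the `Logger` interface shown under Options only requires a `Printf` method, the standard library's `*log.Logger` already satisfies it and can be passed to `WithLogger` directly. A minimal, self-contained sketch (no Redis required; the local `Logger` type merely mirrors the documented interface):\n\n```go\npackage main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"log\"\n)\n\n// Logger mirrors delayqueue's documented logger interface.\ntype Logger interface {\n\tPrintf(format string, v ...interface{})\n}\n\nfunc main() {\n\t// *log.Logger has a compatible Printf, so it satisfies Logger.\n\tvar buf bytes.Buffer\n\tvar l Logger = log.New(\u0026buf, \"[delayqueue] \", 0)\n\tl.Printf(\"consumed %d messages\", 3)\n\tfmt.Print(buf.String())\n}\n```\n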
\n\n\n## Monitoring\n\nWe provide a Monitor to observe the queue's running status.\n\n```go\nmonitor := delayqueue.NewMonitor(\"example\", redisCli)\n```\n\nMonitor.ListenEvent registers a listener that receives all internal events, so you can use it to implement custom data reporting and metrics.\n\nThe monitor can receive events from all workers, even if they are running on another server.\n\n```go\ntype EventListener interface {\n\tOnEvent(*Event)\n}\n\n// returns: close function, error\nfunc (m *Monitor) ListenEvent(listener EventListener) (func(), error)\n```\n\nThe definition of Event can be found in [events.go](./events.go).\n\nWe also provide a demo that uses EventListener to track the production and consumption amount per minute.\n\nThe complete demo code can be found in [example/monitor](./example/monitor/main.go).\n\n```go\ntype MyProfiler struct {\n\tList  []*Metrics\n\tStart int64\n}\n\nfunc (p *MyProfiler) OnEvent(event *delayqueue.Event) {\n\tsinceUptime := event.Timestamp - p.Start\n\tupMinutes := sinceUptime / 60\n\tif len(p.List) \u003c= int(upMinutes) {\n\t\tp.List = append(p.List, \u0026Metrics{})\n\t}\n\tcurrent := p.List[upMinutes]\n\tswitch event.Code {\n\tcase delayqueue.NewMessageEvent:\n\t\tcurrent.ProduceCount += event.MsgCount\n\tcase delayqueue.DeliveredEvent:\n\t\tcurrent.DeliverCount += event.MsgCount\n\tcase delayqueue.AckEvent:\n\t\tcurrent.ConsumeCount += event.MsgCount\n\tcase delayqueue.RetryEvent:\n\t\tcurrent.RetryCount += event.MsgCount\n\tcase delayqueue.FinalFailedEvent:\n\t\tcurrent.FailCount += event.MsgCount\n\t}\n}\n\nfunc main() {\n\tqueue := delayqueue.NewQueue(\"example\", redisCli, func(payload string) bool {\n\t\treturn true\n\t})\n\tstart := time.Now()\n\t// IMPORTANT: EnableReport must be called so the monitor can do its work\n\tqueue.EnableReport()\n\n\t// set up the monitor\n\tmonitor := delayqueue.NewMonitor(\"example\", redisCli)\n\tlistener := \u0026MyProfiler{\n\t\tStart: 
start.Unix(),\n\t}\n\tmonitor.ListenEvent(listener)\n\n\t// print metrics every minute\n\ttick := time.Tick(time.Minute)\n\tgo func() {\n\t\tfor range tick {\n\t\t\tminutes := len(listener.List) - 1\n\t\t\tfmt.Printf(\"%d: %#v\\n\", minutes, listener.List[minutes])\n\t\t}\n\t}()\n}\n```\n\nThe monitor uses Redis pub/sub to collect data, so it is important to call `DelayQueue.EnableReport` on all workers to enable event reporting for the monitor.\n\nIf you do not want to use Redis pub/sub, you can use `DelayQueue.ListenEvent` to collect data yourself.\n\nPlease be advised that `DelayQueue.ListenEvent` only receives events from the current instance, while the monitor receives events from all instances in the queue.\n\nOnce `DelayQueue.ListenEvent` is called, the monitor's listener will be overwritten unless EnableReport is called again to re-enable the monitor.\n\n### Get Status\n\nYou can get the pending, ready, and processing counts from the monitor:\n\n```go\nfunc (m *Monitor) GetPendingCount() (int64, error)\n```\n\nGetPendingCount returns the number of messages whose delivery time has not yet arrived.\n\n```go\nfunc (m *Monitor) GetReadyCount() (int64, error)\n```\n\nGetReadyCount returns the number of messages whose delivery time has arrived but which have not been delivered yet.\n\n```go\nfunc (m *Monitor) GetProcessingCount() (int64, error)\n```\n\nGetProcessingCount returns the number of messages which are being processed.\n\n\n## Cluster\n\nIf you are using Redis Cluster, please use `NewQueueOnCluster`:\n\n```go\nredisCli := redis.NewClusterClient(\u0026redis.ClusterOptions{\n    Addrs: []string{\n        \"127.0.0.1:7000\",\n        \"127.0.0.1:7001\",\n        \"127.0.0.1:7002\",\n    },\n})\ncallback := func(s string) bool {\n    return true\n}\nqueue := NewQueueOnCluster(\"test\", redisCli, callback)\n```\n\nIf you are using a transparent proxy cluster such as Codis or Twemproxy, or the cluster-architecture Redis services of Aliyun or Tencent Cloud,\njust use `NewQueue` and enable the hash 
tag:\n\n```go\nredisCli := redis.NewClient(\u0026redis.Options{\n    Addr: \"127.0.0.1:6379\",\n})\ncallback := func(s string) bool {\n    return true\n}\nqueue := delayqueue.NewQueue(\"example\", redisCli, callback, UseHashTagKey())\n```\n\n## More Details\n\nHere is the complete flowchart:\n\n![](https://s2.loli.net/2022/09/10/tziHmcAX4sFJPN6.png)\n\n- pending: A sorted set of messages pending delivery. `member` is the message id, `score` is the delivery unix timestamp.\n- ready: A list of messages ready for delivery. Workers fetch messages from here.\n- unack: A sorted set of messages waiting for ack (confirmation of successful consumption), meaning the messages here are being processed. `member` is the message id, `score` is the unix timestamp of the processing deadline.\n- retry: A list of messages whose processing exceeded the deadline and which await retry\n- garbage: A list of messages that reached the max retry count and await cleaning ","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhdt3213%2Fdelayqueue","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhdt3213%2Fdelayqueue","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhdt3213%2Fdelayqueue/lists"}