{"id":18821621,"url":"https://github.com/octu0/chanque","last_synced_at":"2025-10-12T10:12:04.707Z","repository":{"id":39869703,"uuid":"263362859","full_name":"octu0/chanque","owner":"octu0","description":"framework for asynchronous programming and goroutine management and safe use of channels","archived":false,"fork":false,"pushed_at":"2023-02-26T06:40:43.000Z","size":140,"stargazers_count":4,"open_issues_count":0,"forks_count":1,"subscribers_count":1,"default_branch":"master","last_synced_at":"2024-06-19T23:13:00.895Z","etag":null,"topics":["channel","concurrency","concurrent","golang","goroutine","goroutine-management","goroutine-pool","parallel","queue","queue-workers","worker-pool"],"latest_commit_sha":null,"homepage":"https://pkg.go.dev/github.com/octu0/chanque","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/octu0.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-05-12T14:36:44.000Z","updated_at":"2024-01-20T16:02:48.000Z","dependencies_parsed_at":"2024-06-19T22:50:30.367Z","dependency_job_id":"9b9e7ed5-ae91-477e-9268-e42bb9558205","html_url":"https://github.com/octu0/chanque","commit_stats":null,"previous_names":[],"tags_count":23,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octu0%2Fchanque","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octu0%2Fchanque/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/octu0%2Fchanque/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/octu0%2Fchanque/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/octu0","download_url":"https://codeload.github.com/octu0/chanque/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223612891,"owners_count":17173631,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["channel","concurrency","concurrent","golang","goroutine","goroutine-management","goroutine-pool","parallel","queue","queue-workers","worker-pool"],"created_at":"2024-11-08T00:44:54.974Z","updated_at":"2025-10-12T10:11:59.673Z","avatar_url":"https://github.com/octu0.png","language":"Go","readme":"# `chanque`\n\n[![MIT License](https://img.shields.io/github/license/octu0/chanque)](https://github.com/octu0/chanque/blob/master/LICENSE)\n[![GoDoc](https://godoc.org/github.com/octu0/chanque?status.svg)](https://godoc.org/github.com/octu0/chanque)\n[![Go Report Card](https://goreportcard.com/badge/github.com/octu0/chanque)](https://goreportcard.com/report/github.com/octu0/chanque)\n[![Releases](https://img.shields.io/github/v/release/octu0/chanque)](https://github.com/octu0/chanque/releases)\n\n`chanque` provides a simple framework for asynchronous programming, goroutine management, and safe use of channels.\n\n## Installation\n\n```\n$ go get github.com/octu0/chanque\n```\n\n## Usage\n\n### Queue\n\nQueue implementation.  
\nIt provides blocking and non-blocking methods, as well as panic handling for channel operations.\n\n```go\nfunc main() {\n\tque1 := chanque.NewQueue(10)\n\tdefer que1.Close()\n\n\tgo func() {\n\t\tfor {\n\t\t\tval := que1.Dequeue()\n\t\t\tfmt.Println(val.(string))\n\t\t}\n\t}()\n\tif ok := que1.Enqueue(\"hello\"); ok {\n\t\tfmt.Println(\"enqueue\")\n\t}\n\n\tque2 := chanque.NewQueue(10,\n\t\tchanque.QueuePanicHandler(func(pt chanque.PanicType, rcv interface{}) {\n\t\t\tfmt.Println(\"panic occurred\", rcv.(error))\n\t\t}),\n\t)\n\tdefer que2.Close()\n\tif ok := que2.EnqueueNB(\"world w/ non-blocking enqueue\"); ok {\n\t\tfmt.Println(\"enqueue\")\n\t}\n}\n```\n\n### Executor\n\nWorkerPool implementation  \nthat limits the number of concurrently executing goroutines, creating goroutines as needed;  \nit can also be used for goroutine resource management.\n\n```go\nfunc main() {\n\t// minWorker 1 maxWorker 2\n\texec := chanque.NewExecutor(1, 2)\n\tdefer exec.Release()\n\n\texec.Submit(func() {\n\t\tfmt.Println(\"job1\")\n\t\ttime.Sleep(1 * time.Second)\n\t})\n\texec.Submit(func() {\n\t\tfmt.Println(\"job2\")\n\t\ttime.Sleep(1 * time.Second)\n\t})\n\n\t// Blocks because the maximum number of workers has been reached;\n\t// executes once a running worker becomes free\n\texec.Submit(func() {\n\t\tfmt.Println(\"job3\")\n\t})\n\n\t// Goroutines are generated on demand up to the maximum number of workers.\n\t// Submit does not block until MaxCapacity is reached.\n\t// Idle workers are reduced back to minWorker at each reducer interval.\n\texec2 := chanque.NewExecutor(10, 50,\n\t\tchanque.ExecutorMaxCapacicy(1000),\n\t\tchanque.ExecutorReducderInterval(60*time.Second),\n\t)\n\tdefer exec2.Release()\n\n\tfor i := 0; i \u003c 100; i += 1 {\n\t\texec2.Submit(func(id int) func() {\n\t\t\treturn func() {\n\t\t\t\tfmt.Println(\"heavy process\", id)\n\t\t\t\ttime.Sleep(100 * time.Millisecond)\n\t\t\t\tfmt.Println(\"done process\", id)\n\t\t\t}\n\t\t}(i))\n\t}\n\n\t// On-demand tune 
min/max worker size\n\texec.TuneMaxWorker(10)\n\texec.TuneMinWorker(5)\n}\n```\n\n### Worker\n\nWorker implementation for asynchronous execution: register a WorkerHandler and invoke it with parameters passed to Enqueue.  \nEnqueue blocks while the WorkerHandler is running.  \nThere is also a BufferWorker implementation whose Enqueue does not block during asynchronous execution.\n\n```go\nfunc main() {\n\thandler := func(param interface{}) {\n\t\tif s, ok := param.(string); ok {\n\t\t\tfmt.Println(s)\n\t\t}\n\t\ttime.Sleep(1 * time.Second)\n\t}\n\n\t// DefaultWorker executes in order, waiting for the previous one\n\tw1 := chanque.NewDefaultWorker(handler)\n\tdefer w1.Shutdown()\n\n\tgo func() {\n\t\tw1.Enqueue(\"hello\")\n\t\tw1.Enqueue(\"world\") // blocks for 1 sec\n\t}()\n\n\tw2 := chanque.NewBufferWorker(handler)\n\tdefer w2.Shutdown()\n\n\tgo func() {\n\t\tw2.Enqueue(\"hello\")\n\t\tw2.Enqueue(\"world\") // non-blocking\n\t}()\n\n\t// BufferWorker provides helpers for sequential operations:\n\t// PreHook and PostHook allow the operations to be performed collectively.\n\tpre := func() {\n\t\tdb.Begin()\n\t}\n\tpost := func() {\n\t\tdb.Commit()\n\t}\n\thnd := func(param interface{}) {\n\t\tdb.Insert(param.(string))\n\t}\n\tw3 := chanque.NewBufferWorker(hnd,\n\t\tchanque.WorkerPreHook(pre),\n\t\tchanque.WorkerPostHook(post),\n\t)\n\tfor i := 0; i \u003c 100; i += 1 {\n\t\tw3.Enqueue(strconv.Itoa(i))\n\t}\n\tw3.ShutdownAndWait()\n}\n```\n\n### Parallel\n\nParallel provides parallel execution and collection of the results.  
\nAn extended implementation of Worker.\n\n```go\nfunc main() {\n\texecutor := chanque.NewExecutor(10, 100)\n\tdefer executor.Release()\n\n\tpara := chanque.NewParallel(\n\t\texecutor,\n\t\tchanque.Parallelism(2),\n\t)\n\tpara.Queue(func() (interface{}, error) {\n\t\treturn \"#1 result\", nil\n\t})\n\tpara.Queue(func() (interface{}, error) {\n\t\treturn \"#2 result\", nil\n\t})\n\tpara.Queue(func() (interface{}, error) {\n\t\treturn nil, errors.New(\"#3 error\")\n\t})\n\n\tfuture := para.Submit()\n\tfor _, r := range future.Result() {\n\t\tif r.Value() != nil {\n\t\t\tprintln(\"result:\", r.Value().(string))\n\t\t}\n\t\tif r.Err() != nil {\n\t\t\tprintln(\"error:\", r.Err().Error())\n\t\t}\n\t}\n}\n```\n\n### Retry\n\nRetry provides function retries based on the exponential backoff algorithm.\n\n```go\nfunc main() {\n\tretry := chanque.NewRetry(\n\t\tchanque.RetryMax(10),\n\t\tchanque.RetryBackoffIntervalMin(100*time.Millisecond),\n\t\tchanque.RetryBackoffIntervalMax(30*time.Second),\n\t)\n\tfuture := retry.Retry(func(ctx context.Context) (interface{}, error) {\n\t\treq, _ := http.NewRequest(\"GET\", url, nil)\n\t\tclient := \u0026http.Client{}\n\t\tresp, err := client.Do(req)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tdefer resp.Body.Close()\n\t\t...\n\t\treturn ioutil.ReadAll(resp.Body)\n\t})\n\tr := future.Result()\n\tif err := r.Err(); err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"GET resp = %s\", r.Value().([]byte))\n}\n```\n\n### Wait\n\nWait provides wait handling similar to sync.WaitGroup and context.Done.  
\nIt provides implementations for patterns that run concurrently, wait for multiple processes, wait for responses, and many other use cases.\n\n```go\nfunc one() {\n\tw := WaitOne()\n\tdefer w.Cancel()\n\n\tgo func(w *Wait) {\n\t\tdefer w.Done()\n\n\t\tfmt.Println(\"heavy process\")\n\t}(w)\n\n\tw.Wait()\n}\n\nfunc any() {\n\tw := WaitN(10)\n\tdefer w.Cancel()\n\n\tfor i := 0; i \u003c 10; i += 1 {\n\t\tgo func(w *Wait) {\n\t\t\tdefer w.Done()\n\n\t\t\tfmt.Println(\"N proc\")\n\t\t}(w)\n\t}\n\tw.Wait()\n}\n\nfunc sequential() {\n\tw1 := WaitOne()\n\tdefer w1.Cancel()\n\tgo Preprocess(w1)\n\n\tw2 := WaitOne()\n\tdefer w2.Cancel()\n\tgo Preprocess(w2)\n\n\tws := WaitSeq(w1, w2)\n\tdefer ws.Cancel()\n\n\t// Wait for A.Done() -\u003e B.Done() -\u003e ... N.Done() in order\n\tws.Wait()\n}\n\nfunc rendezvous() {\n\twr := WaitRendez(2)\n\tdefer wr.Cancel()\n\n\tgo func() {\n\t\tif err := wr.Wait(); err != nil {\n\t\t\tfmt.Println(\"timeout or cancel\")\n\t\t\treturn\n\t\t}\n\t\tfmt.Println(\"run sync\")\n\t}()\n\tgo func() {\n\t\tif err := wr.Wait(); err != nil {\n\t\t\tfmt.Println(\"timeout or cancel\")\n\t\t\treturn\n\t\t}\n\t\tfmt.Println(\"run sync\")\n\t}()\n}\n\nfunc req() {\n\twreq := WaitReq()\n\tdefer wreq.Cancel()\n\n\tgo func() {\n\t\tif err := wreq.Req(\"hello world\"); err != nil {\n\t\t\tfmt.Println(\"timeout or cancel\")\n\t\t}\n\t\tfmt.Println(\"send req\")\n\t}()\n\n\tv, err := wreq.Wait()\n\tif err != nil {\n\t\tfmt.Println(\"timeout or cancel\")\n\t}\n\tfmt.Println(v.(string)) // =\u003e \"hello world\"\n}\n\nfunc reqreply() {\n\twrr := WaitReqReply()\n\tgo func() {\n\t\tv, err := wrr.Req(\"hello\")\n\t\tif err != nil {\n\t\t\tfmt.Println(\"timeout or cancel\")\n\t\t}\n\t\tfmt.Println(v.(string)) // =\u003e \"hello world2\"\n\t}()\n\tgo func() {\n\t\terr := wrr.Reply(func(v interface{}) (interface{}, error) {\n\t\t\ts := v.(string)\n\t\t\treturn s + \" world2\", nil\n\t\t})\n\t\tif err != nil {\n\t\t\tfmt.Println(\"timeout or 
cancel\")\n\t\t}\n\t}()\n}\n```\n\n### Loop\n\nLoop provides safe termination of an infinite loop by goroutine.  \nYou can use callbacks with Queue and time.Ticker.  \n\n```go\nfunc newloop() {\n\te := NewExecutor(1, 10)\n\n\tqueue := NewQueue(0)\n\n\tloop := NewLoop(e)\n\tloop.SetDequeue(func(val interface{}, ok bool) chanque.LoopNext {\n\t\tif ok != true {\n\t\t\t// queue closed\n\t\t\treturn chanque.LoopNextBreak\n\t\t}\n\t\tprintln(\"queue=\", val.(string))\n\t\treturn chanque.LoopNextContinue\n\t}, queue)\n\n\tloop.ExecuteTimeout(10 * time.Second)\n\n\tgo func() {\n\t\tqueue.Enqueue(\"hello1\")\n\t\tqueue.Enqueue(\"hello2\")\n\t\ttime.Sleep(1 * time.Second)\n\t\tqueue.EnqueueNB(\"world\") // Enqueue / EnqueueNB / EnqueueRetry\n\t}()\n\tgo func() {\n\t\ttime.Sleep(1 * time.Second)\n\t\tloop.Stop() // done for loop\n\t}()\n}\n```\n\n### Pipeline\n\nPipeline provides sequential asynchronous input and output.\nExecute func combination asynchronously\n\n```go\nfunc main() {\n\tcalcFn := func(parameter interface{}) (interface{}, error) {\n\t\t// heavy process\n\t\ttime.Sleep(1 * time.Second)\n\n\t\tif val, ok := parameter.(int); ok {\n\t\t\treturn val * 2, nil\n\t\t}\n\t\treturn -1, fmt.Errorf(\"invalid parameter\")\n\t}\n\toutFn := func(result interface{}, err error) {\n\t\tif err != nil {\n\t\t\tfmt.Fatal(err)\n\t\t\treturn\n\t\t}\n\n\t\tfmt.Println(\"value =\", parameter.(int))\n\t}\n\n\tpipe := chanque.NewPipeline(calcFn, outFn)\n\tpipe.Enqueue(10)\n\tpipe.Enqueue(20)\n\tpipe.Enqueue(30)\n\tpipe.ShutdownAndWait()\n}\n```\n\n## Documentation\n\nhttps://godoc.org/github.com/octu0/chanque\n\n# Benchmark\n\n## `go func()` vs `Executor`\n\n```bash\n$ go test -v -run=BenchmarkExecutor -bench=BenchmarkExecutor -benchmem  ./\ngoos: darwin\ngoarch: amd64\npkg: github.com/octu0/chanque\nBenchmarkExecutor/goroutine-8         \t 1000000\t      2306 ns/op\t     544 B/op\t       2 allocs/op\nBenchmarkExecutor/executor/100-1000-8 \t  952410\t      1252 ns/op\t      16 B/op\t   
    1 allocs/op\nBenchmarkExecutor/executor/1000-5000-8    795402\t      1327 ns/op\t      18 B/op\t       1 allocs/op\n--- BENCH: BenchmarkExecutor\n    executor_test.go:19: goroutine           \tTotalAlloc=546437344\tStackInUse=1996357632\n    executor_test.go:19: executor/100-1000   \tTotalAlloc=25966144\tStackInUse=-1993277440\n    executor_test.go:19: executor/1000-5000  \tTotalAlloc=16092752\tStackInUse=7012352\nPASS\nok  \tgithub.com/octu0/chanque\t6.935s\n```\n\n## License\n\nMIT, see LICENSE file for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foctu0%2Fchanque","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foctu0%2Fchanque","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foctu0%2Fchanque/lists"}