https://github.com/karitham/wq
work queue
- Host: GitHub
- URL: https://github.com/karitham/wq
- Owner: karitham
- License: mit
- Created: 2022-08-18T16:56:23.000Z (almost 3 years ago)
- Default Branch: master
- Last Pushed: 2022-08-18T22:24:33.000Z (almost 3 years ago)
- Last Synced: 2025-01-22T05:29:47.050Z (4 months ago)
- Topics: atomics, go, golang, multiplexing, work-queue
- Language: Go
- Homepage:
- Size: 7.81 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# wq
Stupid lock-free work queue.
This was mostly an experiment and hasn't been thoroughly tested, but it should be as fast as or faster than a channel-based queue.
```benchmark
>> go test -benchmem -run=^$ -bench=. -benchtime=5s .
goos: linux
goarch: amd64
pkg: github.com/Karitham/wq
cpu: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
BenchmarkQueue-8 52272778 120.6 ns/op 0 B/op 0 allocs/op
BenchmarkChans-8 18827577 322.6 ns/op 0 B/op 0 allocs/op
PASS
ok github.com/Karitham/wq 13.076s
```
It has the downside of requiring you to know your queue size ahead of time, which in turn dictates the worker count.
It is also a busy queue, so it works best when your producers are faster than your consumers.
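The repository's actual implementation isn't reproduced here, but the general shape of a fixed-capacity, busy-waiting queue built on atomics can be sketched as follows. The `queue`, `newQueue`, `enq`, and `wait` names, the slot layout, and the worker-claims-an-index scheme are illustrative assumptions, not wq's API.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// queue is a toy fixed-capacity, busy-waiting work queue, not wq's
// actual implementation. Producers publish items by bumping tail and
// storing into a pre-allocated slot; each worker claims the next index
// with an atomic add on head and spins until that slot is filled.
type queue[T any] struct {
	slots []atomic.Pointer[T]
	head  atomic.Uint64 // next index a worker will consume
	tail  atomic.Uint64 // next index a producer will fill
	wg    sync.WaitGroup
}

func newQueue[T any](capacity, workers int, handle func(*T)) *queue[T] {
	q := &queue[T]{slots: make([]atomic.Pointer[T], capacity)}
	q.wg.Add(workers)
	for w := 0; w < workers; w++ {
		go func() {
			defer q.wg.Done()
			for {
				i := q.head.Add(1) - 1 // claim a slot index
				if i >= uint64(capacity) {
					return // capacity exhausted, worker is done
				}
				// Busy-wait until a producer has published slot i.
				v := q.slots[i].Load()
				for v == nil {
					runtime.Gosched()
					v = q.slots[i].Load()
				}
				handle(v)
			}
		}()
	}
	return q
}

func (q *queue[T]) enq(v *T) { q.slots[q.tail.Add(1)-1].Store(v) }
func (q *queue[T]) wait()    { q.wg.Wait() }

func main() {
	const n = 16
	q := newQueue[int](n, 4, func(v *int) { fmt.Println(*v) })
	for i := 0; i < n; i++ {
		i := i
		q.enq(&i) // the fixed capacity must cover everything enqueued
	}
	q.wait()
}
```

Because consumers spin until their slot is published, slow producers translate directly into wasted CPU, which is the trade-off described above.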
## Usage
```go
q := wq.New(func(v *int) { fmt.Println(*v) })

for i := 0; i < 1000; i++ {
	i := i // copy (else it would be a pointer to the same value)
	q.EnQ(&i) // enqueue i
}

q.Wait() // wait for all workers to be done
```
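For contrast with the benchmark, a channel-based version of the same usage (roughly what `BenchmarkChans` presumably measures against; the worker count of 8 and buffer size of 1000 here are assumptions, and the actual benchmark code isn't shown) might look like this:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Channel-based equivalent: a buffered channel feeding a small
	// pool of workers.
	const workers = 8
	ch := make(chan *int, 1000)

	var wg sync.WaitGroup
	wg.Add(workers)
	for w := 0; w < workers; w++ {
		go func() {
			defer wg.Done()
			for v := range ch {
				fmt.Println(*v)
			}
		}()
	}

	for i := 0; i < 1000; i++ {
		i := i // copy (else it would be a pointer to the same value)
		ch <- &i
	}
	close(ch) // no more work; workers drain the channel and exit
	wg.Wait() // wait for all workers to be done
}
```

The channel version blocks instead of busy-waiting, so it gives up some per-item throughput in exchange for not burning CPU when producers fall behind.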