https://github.com/go-faster/flightorder
- Host: GitHub
- URL: https://github.com/go-faster/flightorder
- Owner: go-faster
- License: apache-2.0
- Created: 2024-01-30T23:56:46.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-03-19T11:44:02.000Z (2 months ago)
- Last Synced: 2025-03-19T12:34:38.313Z (2 months ago)
- Language: Go
- Size: 79.1 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# flightorder [[godoc](https://godoc.org/github.com/go-faster/flightorder)]
This package lets you do _[ordered input] -> [parallel processing] -> [ordered output]_ in a streaming manner.
The name was inspired by the [golang.org/x/sync/singleflight](https://pkg.go.dev/golang.org/x/sync/singleflight) package.
## Installation
```
go get github.com/go-faster/flightorder@latest
```

## Example
```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"sync"
	"time"

	"github.com/go-faster/flightorder"
)

func main() {
	input := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}
	processingOrder, output := processInput(context.TODO(), input)

	fmt.Printf("input: %v\n", input)
	fmt.Printf("processed: %v\n", processingOrder)
	fmt.Printf("output: %v\n", output)
}

func processInput(ctx context.Context, input []int) (processing, output []int) {
	route := flightorder.NewRoute(flightorder.RouteParams{})

	var (
		mux sync.Mutex
		wg  sync.WaitGroup
	)

	wg.Add(len(input))
	for _, v := range input {
		// Tickets are taken in input order.
		ticket := route.Ticket()
		go func(t *flightorder.Ticket, v int) {
			defer wg.Done()
			// Simulate work that finishes in random order.
			time.Sleep(time.Millisecond * time.Duration(rand.Intn(100)))

			mux.Lock()
			processing = append(processing, v)
			mux.Unlock()

			// Completion callbacks run in ticket order, restoring input order.
			_ = route.CompleteTicket(ctx, t, func(context.Context) error {
				mux.Lock()
				output = append(output, v)
				mux.Unlock()
				return nil
			})
		}(ticket, v)
	}

	wg.Wait()
	return
}
```

Output:
```
input: [1 2 3 4 5 6 7 8 9]
processed: [3 1 9 7 6 2 5 4 8]
output: [1 2 3 4 5 6 7 8 9]
```

Note that the processing order is random while the output preserves the input order: `CompleteTicket` runs completion callbacks in the order the tickets were taken.

## Motivation
A typical use case is sending logs from a single file to multiple Kafka brokers concurrently while preserving at-least-once delivery guarantees:
* Logs are sent to multiple Kafka brokers in parallel to increase throughput.
* File offsets are committed in the exact order they were read, which preserves at-least-once delivery and prevents data loss if the shipper or a broker fails. A sketch of this pattern follows the list.
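The same `Route`/`Ticket`/`CompleteTicket` calls from the example above can be arranged into such a shipper. This is only a minimal sketch: the `Record` type and the `sendToKafka` / `commitOffset` callbacks are hypothetical placeholders, not part of flightorder or of any Kafka client.

```go
package shipper

import (
	"context"
	"sync"

	"github.com/go-faster/flightorder"
)

// Record pairs a log line with its file offset (hypothetical type for this sketch).
type Record struct {
	Offset int64
	Data   []byte
}

// shipRecords sends records to Kafka brokers concurrently while committing
// file offsets strictly in read order. sendToKafka and commitOffset are
// hypothetical callbacks standing in for a real producer and offset store.
func shipRecords(
	ctx context.Context,
	records []Record,
	sendToKafka func(context.Context, Record) error,
	commitOffset func(context.Context, int64) error,
) {
	route := flightorder.NewRoute(flightorder.RouteParams{})

	var wg sync.WaitGroup
	wg.Add(len(records))
	for _, rec := range records {
		// Tickets are taken in file read order.
		ticket := route.Ticket()
		go func(t *flightorder.Ticket, rec Record) {
			defer wg.Done()

			// Unordered, parallel part: produce to any broker
			// (assumed to retry internally; error handling is elided here).
			err := sendToKafka(ctx, rec)

			// Ordered part: offsets are committed in ticket (read) order.
			// The ticket is always completed so later tickets are not blocked.
			_ = route.CompleteTicket(ctx, t, func(ctx context.Context) error {
				if err != nil {
					return err // do not commit an offset for an undelivered record
				}
				return commitOffset(ctx, rec.Offset)
			})
		}(ticket, rec)
	}
	wg.Wait()
}
```

Sends finish in whatever order the brokers respond, while offset commits are serialized in ticket order, so a crash can only cause re-delivery of uncommitted records rather than a gap in the log.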
## License

Source code is available under the Apache License 2.0.