https://github.com/elgopher/batch
Simple Go package for handling incoming requests in batches.
- Host: GitHub
- URL: https://github.com/elgopher/batch
- Owner: elgopher
- License: mit
- Created: 2022-04-28T19:54:10.000Z (almost 4 years ago)
- Default Branch: master
- Last Pushed: 2023-08-30T20:56:40.000Z (over 2 years ago)
- Last Synced: 2025-12-14T10:17:51.536Z (2 months ago)
- Topics: batch, batch-processing, go, golang, high-performance, high-throughput
- Language: Go
- Homepage:
- Size: 46.9 KB
- Stars: 11
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
[Build](https://github.com/elgopher/batch/actions/workflows/build.yml)
[Go Reference](https://pkg.go.dev/github.com/elgopher/batch)
[Go Report Card](https://goreportcard.com/report/github.com/elgopher/batch)
[codecov](https://codecov.io/gh/elgopher/batch)
[Project Status: Active](https://www.repostatus.org/#active)

## What can it be used for?

To **increase** database-driven web application **throughput** without sacrificing *data consistency* and *data durability*, and without making the source code and architecture complex.

The **batch** package simplifies writing Go applications that process incoming requests (HTTP, gRPC etc.) in batches:
instead of processing each request separately, it groups incoming requests into a batch and runs the whole group at once.
This method of processing can significantly speed up the application and reduce the consumption of disk, network or CPU resources.
The **batch** package can be used to write any type of *server* that handles thousands of requests per second.
Thanks to this small library, you can keep the code relatively simple, without resorting to low-level data structures.
## Why does batch processing improve performance?

Normally, a web application uses the following pattern to modify data in the database:
1. **Load resource** from the database. A **resource** is some portion of data,
such as a set of records from a relational database, a document from a document-oriented database, or a value from a key-value store
(in Domain-Driven Design terms it is called an [aggregate](https://martinfowler.com/bliki/DDD_Aggregate.html)).
Lock the entire resource [optimistically](https://www.martinfowler.com/eaaCatalog/optimisticOfflineLock.html)
by reading its version number.
2. **Apply change** to the data in plain Go.
3. **Save resource** to the database. Release the lock by running an
atomic update with a version check.
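Steps 1–3 might be sketched in Go with the standard `database/sql` package as below. The `resources` table and its `id`/`data`/`version` columns are illustrative assumptions, not part of this package:

```go
package main

import (
	"context"
	"database/sql"
	"errors"
)

// ErrConflict is returned when another writer saved the resource first.
var ErrConflict = errors.New("optimistic lock conflict")

// updateResource loads a row, applies a change in plain Go, and saves it
// back with an atomic version check. Table and column names are made up.
func updateResource(ctx context.Context, db *sql.DB, key string, change func(data []byte) []byte) error {
	// 1. Load the resource and its current version (the optimistic "lock").
	var data []byte
	var version int64
	err := db.QueryRowContext(ctx,
		`SELECT data, version FROM resources WHERE id = $1`, key,
	).Scan(&data, &version)
	if err != nil {
		return err
	}

	// 2. Apply the change in plain Go.
	data = change(data)

	// 3. Save with an atomic version check, releasing the "lock".
	res, err := db.ExecContext(ctx,
		`UPDATE resources SET data = $1, version = version + 1
		 WHERE id = $2 AND version = $3`, data, key, version)
	if err != nil {
		return err
	}
	rows, err := res.RowsAffected()
	if err != nil {
		return err
	}
	if rows == 0 {
		return ErrConflict // someone else saved first; the caller may retry
	}
	return nil
}

func main() {}
```

Every request pays for one load, one save, and possibly a retry on conflict, which is exactly where the cost comes from.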
But such an architecture does not scale well if the number of requests
for a single resource is very high
(meaning hundreds or thousands of requests per second).
In such a case the lock contention is very high and the database is significantly
overloaded. Round-trips between the application server and the database add latency, too.
In practice, the number of concurrent requests is severely limited.
One solution to this problem is to reduce the number of costly operations.
Because a single resource is loaded and saved thousands of times per second,
we can instead:
1. Load the resource **once** (let's say once per second)
2. Execute all the requests from this period of time on an already loaded resource. Run them all sequentially to keep things simple and data consistent.
3. Save the resource and send responses to all clients if the data was stored successfully.

Such a solution could improve performance by a factor of 1000, and the resource is still stored in a consistent state.
The **batch** package does exactly that. You configure the duration of the batch window, provide functions
to load and save the resource, and each time a request comes in you run a function:
```go
// Set up the batch processor:
processor := batch.StartProcessor(
	batch.Options[*YourResource]{ // YourResource is your own Go struct
		MinDuration: 100 * time.Millisecond,
		LoadResource: func(ctx context.Context, resourceKey string) (*YourResource, error) {
			// resourceKey uniquely identifies the resource
			...
		},
		SaveResource: ...,
	},
)

// And use the processor inside an http/grpc handler or a technology-agnostic service.
// ctx is a standard context.Context and resourceKey can be taken from a request parameter.
err := processor.Run(ctx, resourceKey, func(r *YourResource) {
	// Here you put the code which will be executed sequentially inside the batch
})
```
**For real-life example see [example web application](https://github.com/elgopher/batch-example).**
## Installation
```sh
# Add batch to your Go module:
go get github.com/elgopher/batch
```
Please note that at least **Go 1.18** is required. The package uses generics, which were added in Go 1.18.
## Scaling out
A single Go HTTP server is able to handle up to tens of thousands of requests per second on commodity hardware.
This is a lot, but very often you also need:
* high availability (if one server goes down, you want others to handle the traffic)
* even higher throughput (hundreds of thousands or millions of requests per second)

In both cases you need to deploy **multiple servers** and put a **load balancer** in front of them.
Please note, though, that you have to carefully configure the load-balancing algorithm.
_Round-robin_ is not an option here, because sooner or later you will have problems with locking
(multiple server instances will run batches on the same resource).
The ideal solution is to route requests based on the URL path or query string parameters.
For example, some query string parameter could contain the resource key. You can instruct the load balancer
to calculate a hash of this parameter and always route requests with the same key
to the same backend. If a backend is no longer available, the load balancer should route its requests to a different
server.
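Many load balancers support this out of the box (nginx, for example, has a `hash` directive for `upstream` blocks). The underlying idea is just hashing the key to pick a backend, as in this illustrative Go sketch; the backend addresses are made up:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// backendFor picks a backend for the given resource key by hashing it.
// Requests with the same key always land on the same backend, so only
// one server instance ever batches a given resource. (Plain modulo
// hashing remaps most keys when a backend is added or removed;
// consistent hashing reduces that churn.)
func backendFor(key string, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return backends[int(h.Sum32())%len(backends)]
}

func main() {
	backends := []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}
	// The same key is always routed to the same backend:
	fmt.Println(backendFor("player-42", backends) == backendFor("player-42", backends)) // true
}
```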