https://github.com/nccapo/rate-limiter
Rate Limiter middleware in Golang, using Gin and backed by Redis
- Host: GitHub
- URL: https://github.com/nccapo/rate-limiter
- Owner: nccapo
- License: mit
- Created: 2024-06-08T15:39:28.000Z (almost 2 years ago)
- Default Branch: master
- Last Pushed: 2025-03-22T12:04:02.000Z (about 1 year ago)
- Last Synced: 2025-03-22T13:20:09.004Z (about 1 year ago)
- Topics: gin-gonic, go, go-rate-limiter, golang, middleware, rate-limiter, rate-limiting, redis, token-bu, token-bucket-algorithm
- Language: Go
- Homepage:
- Size: 60.5 KB
- Stars: 15
- Watchers: 2
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Rate Limiter
A robust, thread-safe, and distributed rate limiter for Go, designed for high-throughput applications. It implements the **Token Bucket** algorithm and supports both **Redis** (for distributed systems) and **In-Memory** (for single-instance apps) backends.


[Go Report Card](https://goreportcard.com/report/github.com/nccapo/rate-limiter)
[GoDoc](https://godoc.org/github.com/nccapo/rate-limiter)
[Build Status](https://github.com/nccapo/rate-limiter/actions)
[Code Coverage](https://codecov.io/gh/nccapo/rate-limiter)
## Features
* **Atomic Operations**: Leverages Redis Lua scripts to ensure strict rate limiting without race conditions in distributed environments.
* **Pluggable Storage**:
  * **Redis**: First-class support for `go-redis/v9`. Ideal for microservices and load-balanced APIs.
  * **In-Memory**: Fast, thread-safe local storage. Perfect for unit tests or standalone binaries.
* **Functional Options**: Clean, idiomatic Go API for configuration (`WithRate`, `WithStore`, etc.).
* **Blocking Support**: `Wait(ctx, key)` method for client-side throttling (like `uber-go/ratelimit`'s `Take`).
* **Middleware Ready**:
  * Standard `net/http` middleware included.
  * Specialized `Gin` middleware available in a sub-package.
* **Memory Safe**: Automatic TTL management for Redis keys prevents zombie data and memory leaks.
* **Thread-Safe**: Critical race conditions fixed in v0.7.4; the default `RateLimiter` is now fully atomic for concurrent use.
## What's New in v0.7.5
* **Critical Fix**: Resolved a data race in `RateLimiter` where header generation used shared state. It now uses the thread-safe `Allow(ctx, key)` API.
* **Circuit Breaker**: Fixed a concurrency flaw in the "Half-Open" state: it now correctly allows only *one* probe request at a time, preventing backend overload.
* **API Update**: `IsRequestAllowed(key)` is **deprecated**. Please use `Allow(ctx, key)`, which returns detailed metadata (`Remaining`, `RetryAfter`).
* **Performance**: An opportunity to optimize ID generation was identified and added to the roadmap.
## Installation
```bash
go get github.com/nccapo/rate-limiter
```
## Configuration & Usage
The library uses the **Functional Options** pattern for flexible, validated configuration.
### 1. Using Redis Storage (Recommended for Production)
Use this mode when running multiple instances of your application (e.g., behind a load balancer), so they share the same rate limit quotas.
```go
package main

import (
	"log"
	"time"

	"github.com/redis/go-redis/v9"

	rrl "github.com/nccapo/rate-limiter"
)

func main() {
	// 1. Initialize your Redis client
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Password: "", // no password set
		DB:       0,  // use default DB
	})

	// 2. Configure the Rate Limiter
	// NewRedisStore(client, hashKey)
	//   - client:  your redis connection (UniversalClient: supports Cluster/Ring)
	//   - hashKey: if true, keys are base64 encoded to avoid issues with special chars
	store := rrl.NewRedisStore(rdb, true)

	limiter, err := rrl.NewRateLimiter(
		rrl.WithRate(10),                    // Cost: 10 tokens per request (or use 1 for standard counting)
		rrl.WithMaxTokens(100),              // Capacity: bucket holds 100 tokens max
		rrl.WithRefillInterval(time.Second), // Refill: add the request cost back continuously
		rrl.WithStore(store),
	)
	if err != nil {
		log.Fatalf("Failed to create limiter: %v", err)
	}
	_ = limiter // pass the limiter to your middleware or handlers
}
```
### 2. Client-Side Throttling (Blocking)
If you are writing a worker or client that sends requests, you can use `Wait()` to automatically sleep until a token is available. This mimics `uber-go/ratelimit`'s `Take()` behavior.
```go
func worker(ctx context.Context, limiter *rrl.RateLimiter) {
	for {
		// Blocks until the request is allowed
		if err := limiter.Wait(ctx, "worker-id"); err != nil {
			return // context cancelled
		}
		// Do heavy work...
		performTask()
	}
}
```
### 3. Strict Pacing (Leaky Bucket Style)
To enforce strict spacing between requests (no bursts), use `WithStrictPacing()`.
```go
limiter, _ := rrl.NewRateLimiter(
	rrl.WithRate(1),
	rrl.WithRefillInterval(100*time.Millisecond), // 10 reqs/sec
	rrl.WithStrictPacing(),                       // MaxTokens = 1 (no bursts!)
	rrl.WithStore(store),
)
```
### 4. Multi-Level (Tiered) Rate Limiting
For high-traffic distributed applications, checking Redis for *every* request can be expensive. Use a **Tiered Store** to buffer requests in-memory first.
* **Logic**: Check local MemoryStore (Primary) -> If allowed, check Redis (Secondary).
* **Drift**: Local store might be slightly ahead of Redis, effectively providing a "circuit breaker" for your Redis instance.
* **Benefit**: If a specific service instance is flooded, it blocks locally, saving network trips to Redis for other services.
```go
// 1. Create stores
localStore := rrl.NewMemoryStore()
redisStore := rrl.NewRedisStore(rdb, true)

// 2. Chain them
tieredStore := rrl.NewTieredStore(localStore, redisStore)

// 3. Create the limiter
limiter, _ := rrl.NewRateLimiter(
	rrl.WithRate(100),
	rrl.WithStore(tieredStore), // uses the hybrid local-then-Redis logic
)
```
### 5. Sliding Window Algorithm (Strict)
If you need a strict limit (e.g., "Max 100 requests" in "Last 60 seconds") without the "bursts" allowed by the Token Bucket algorithm, use the **Sliding Window** store.
* **Logic**: Uses Redis Sorted Sets (`ZSET`) to track individual request timestamps.
* **Precision**: Extremely precise but uses more Redis memory (stores one entry per request).
* **Window Size**: Calculated as `MaxTokens * RefillInterval`.
* Example: `MaxTokens(100)` and `RefillInterval(1s)` -> Window = 100 seconds.
* Example: `MaxTokens(10)`, `RefillInterval(1m)` -> Window = 10 minutes.
```go
// 1. Create the sliding-window store
store := rrl.NewRedisSlidingWindowStore(rdb, true)

// 2. Create the limiter
// Limit: 5 requests per 5-second window
// (Window = MaxTokens * RefillInterval = 5 * 1s)
limiter, _ := rrl.NewRateLimiter(
	rrl.WithMaxTokens(5),
	rrl.WithRefillInterval(time.Second),
	rrl.WithStore(store),
)
```
## Configuration Options
| Option | Description | Default |
|--------|-------------|---------|
| `WithRate(int64)` | The number of tokens required for a single request (Cost). | `1` |
| `WithMaxTokens(int64)` | The maximum capacity of the bucket (Burst size). | `10` |
| `WithStrictPacing()` | Sets `MaxTokens` to 1. Disables bursts, ensuring strict spacing. | `false` |
| `WithRefillInterval(duration)` | The time it takes to refill **one** token. | `1s` |
| `WithStore(Store)` | The storage backend (`RedisStore` or `MemoryStore`). | **Required** |
| `WithLogger(*log.Logger)` | Custom logger for debug/error events. | `os.Stderr` |
---
## Middleware Usage
### Standard `net/http`
```go
import (
	"net/http"

	rrl "github.com/nccapo/rate-limiter"
)

func main() {
	// ... create limiter ...

	mux := http.NewServeMux()
	mux.HandleFunc("/", handler)

	// Wrap specific handlers or the entire mux
	mw := rrl.HTTPRateLimiter(rrl.HTTPRateLimiterConfig{
		Limiter: limiter,
		// Optional: custom key function (client IP is the default)
		KeyFunc: func(r *http.Request) string {
			return r.Header.Get("X-API-Key")
		},
		// Optional: custom rejection handler
		StatusHandler: func(w http.ResponseWriter, r *http.Request, limit, remaining int64) {
			w.WriteHeader(http.StatusTooManyRequests) // 429
			w.Write([]byte("Slow down!"))
		},
	})

	http.ListenAndServe(":8080", mw(mux))
}
```
### Gin Framework
The Gin middleware is decoupled into a separate package to keep the core library dependency-free.
```bash
go get github.com/nccapo/rate-limiter/gin
```
```go
import (
	"net/http"

	"github.com/gin-gonic/gin"

	rrl "github.com/nccapo/rate-limiter"
	ginratelimit "github.com/nccapo/rate-limiter/gin"
)

func main() {
	// ... create limiter ...

	r := gin.Default()
	r.Use(ginratelimit.RateLimiter(rrl.HTTPRateLimiterConfig{
		Limiter: limiter,
		// KeyFunc receives an *http.Request (ClientIP is a gin.Context
		// method, so use RemoteAddr here)
		KeyFunc: func(r *http.Request) string {
			return r.RemoteAddr
		},
	}))
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{"message": "pong"})
	})
	r.Run()
}
```
## Contributing
Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://choosealicense.com/licenses/mit/)
## Benchmarks
Hardware: Apple M1 Pro
```text
BenchmarkMemoryStore_Allow-10 13665328 85.44 ns/op 0 B/op 0 allocs/op
BenchmarkRedisStore_Allow-10 14238 85246 ns/op 208 B/op 6 allocs/op
BenchmarkMemoryStore_Wait-10 5834898 197.6 ns/op 48 B/op 1 allocs/op
```
* **MemoryStore**: Ultra-low latency (~85ns), zero allocations.
* **RedisStore**: Dominated by the network round trip (mocked here; the ~85µs reflects client and Lua-script parsing overhead).
## Comparison
| Feature | `nccapo/rate-limiter` | `uber-go/ratelimit` |
| :--- | :---: | :---: |
| **Algorithm** | Token Bucket (allows bursts) | Leaky Bucket (smooth) |
| **Distributed** | ✅ Yes (Redis) | ❌ No (local only) |
| **Atomic** | ✅ Yes (Lua scripts) | ✅ Yes (atomic CAS) |
| **Blocking Wait** | ✅ Yes (`Wait`) | ✅ Yes (`Take`) |
| **Strict Pacing** | ✅ Yes (`WithStrictPacing`) | ✅ Yes (`WithoutSlack`) |
| **Middleware** | ✅ Yes (HTTP & Gin) | ❌ No |