Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sportebois/nginx-rate-limit-sandbox
Docker image with various NGINX rate limit settings to play with burst and nodelay settings
- Host: GitHub
- URL: https://github.com/sportebois/nginx-rate-limit-sandbox
- Owner: sportebois
- Created: 2017-03-10T13:53:48.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2017-03-16T19:29:30.000Z (almost 8 years ago)
- Last Synced: 2024-08-10T11:02:54.440Z (5 months ago)
- Topics: nginx, nginx-rate, rate
- Language: HTML
- Size: 60.5 KB
- Stars: 102
- Watchers: 4
- Forks: 25
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
# NGINX Rate Limit, Burst and nodelay sandbox
NGINX rate limiting is more traffic shaping than pure rate limiting, and this is an important point for understanding how it works with the `burst` and `nodelay` settings.
Tools: Docker, and [Siege](https://www.joedog.org/siege-home/) (you can install it with Homebrew, or use any other CLI load-testing tool such as `ab` or `artillery`).
The NGINX config defines a few locations to test the various combinations of:
- `limit_req_zone` keyed by URI or by IP
- using the `burst` argument (set to 5 in this case) or not
- adding `nodelay` to control how requests that go over quota during bursts are handled

The rates defined are:
- 30 req/min
- burst locations allow a burst of 5

With the leaky bucket, that means a new request should be allowed every 2 seconds.
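The exact `nginx.conf` ships inside the image; as a rough sketch only, a configuration exercising these combinations might look like the following (zone names and sizes here are assumptions for illustration, not taken from the repo):

```nginx
http {
    # Two shared-memory zones at 30 req/min: one keyed by URI, one by client IP
    limit_req_zone $uri                zone=by_uri:10m rate=30r/m;
    limit_req_zone $binary_remote_addr zone=by_ip:10m  rate=30r/m;

    server {
        listen 80;

        location /by-uri/burst0 {
            limit_req zone=by_uri;                 # no burst: excess is rejected
        }
        location /by-uri/burst5 {
            limit_req zone=by_uri burst=5;         # queue up to 5, drained at the rate
        }
        location /by-uri/burst5_nodelay {
            limit_req zone=by_uri burst=5 nodelay; # serve up to 5 extra immediately
        }
        # ...the /by-ip/ locations are analogous, using zone=by_ip
    }
}
```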
The endpoints defined are:
- http://127.0.0.1:80/by-uri/burst0
- http://127.0.0.1:80/by-uri/burst0_nodelay
- http://127.0.0.1:80/by-uri/burst5
- http://127.0.0.1:80/by-uri/burst5_nodelay
- http://127.0.0.1:80/by-ip/burst0
- http://127.0.0.1:80/by-ip/burst0_nodelay
- http://127.0.0.1:80/by-ip/burst5
- http://127.0.0.1:80/by-ip/burst5_nodelay

## Test it!
Run NGINX in a Docker container, choosing one of:

```sh
# If you want to see the NGINX logs
docker run -it --rm -p 80:80 sportebois/nginx-rate-limit-sandbox

# If you want to run it in the background
NGINX_CONTAINER_ID=$(docker run -d --rm -p 80:80 sportebois/nginx-rate-limit-sandbox)

# Then, when you want to stop and clean it up:
docker stop $NGINX_CONTAINER_ID
```

Use Siege to send 10 concurrent requests at once to the various endpoints.
The most interesting ones are `burst5` and `burst5_nodelay`, which let you really visualize and remember how NGINX deals with burst settings!

```sh
siege -b -r 1 -c 10 http://127.0.0.1:80/by-uri/burst0
siege -b -r 1 -c 10 http://127.0.0.1:80/by-uri/burst0_nodelay
siege -b -r 1 -c 10 http://127.0.0.1:80/by-uri/burst5
siege -b -r 1 -c 10 http://127.0.0.1:80/by-uri/burst5_nodelay

siege -b -r 1 -c 10 http://127.0.0.1:80/by-ip/burst0
siege -b -r 1 -c 10 http://127.0.0.1:80/by-ip/burst0_nodelay
siege -b -r 1 -c 10 http://127.0.0.1:80/by-ip/burst5
siege -b -r 1 -c 10 http://127.0.0.1:80/by-ip/burst5_nodelay
```

When doing these tests, you will want to pay attention to:
- the success/status code (obviously)
- the response time each request took, both for the rate-limited requests _and_ the successful ones

You should see something like this, and then you'll be able to play with all the other locations/settings: ![burst5_output](burst5_demo.gif)
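To make the expected pattern concrete before you run Siege, here is a small leaky-bucket simulation in Python (not part of this repo; `simulate` and its parameters are illustrative names, and the model is simplified compared to NGINX's actual millisecond-level accounting) of 10 requests hitting a 30 req/min limit all at once:

```python
def simulate(n_requests, rate_per_sec, burst, nodelay):
    """Simplified model of NGINX limit_req for n requests arriving at t=0.

    Returns one (status, delay_in_seconds) tuple per request.
    """
    interval = 1.0 / rate_per_sec  # 30 req/min -> one slot every 2 s
    results = []
    queued = 0
    for i in range(n_requests):
        if i == 0:
            results.append((200, 0.0))  # the first request fits the rate
        elif queued < burst:
            queued += 1
            # Burst slots absorb the excess: served immediately with
            # nodelay, otherwise drained at one request per interval.
            results.append((200, 0.0 if nodelay else queued * interval))
        else:
            results.append((503, 0.0))  # bucket full: rejected immediately
    return results

rate = 30 / 60  # 30 req/min

for label, burst, nodelay in [("burst0", 0, False),
                              ("burst5", 5, False),
                              ("burst5_nodelay", 5, True)]:
    outcome = simulate(10, rate, burst, nodelay)
    delays = [d for s, d in outcome if s == 200]
    print(f"{label}: {len(delays)} served (delays {delays}), "
          f"{len(outcome) - len(delays)} rejected")
```

Under this model, `burst0` serves 1 request and rejects 9; `burst5` serves 6 but delays the queued five by 2 to 10 seconds; `burst5_nodelay` serves the same 6 immediately and rejects 4, which is the timing difference the gif above makes visible.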