{"id":13489995,"url":"https://github.com/denji/nginx-tuning","last_synced_at":"2025-05-15T06:03:57.073Z","repository":{"id":40650476,"uuid":"85368996","full_name":"denji/nginx-tuning","owner":"denji","description":"NGINX tuning for best performance","archived":false,"fork":false,"pushed_at":"2024-05-09T22:24:50.000Z","size":59,"stargazers_count":2577,"open_issues_count":2,"forks_count":395,"subscribers_count":118,"default_branch":"master","last_synced_at":"2025-05-15T06:03:20.531Z","etag":null,"topics":["best-practices","details","nginx","security","tuning"],"latest_commit_sha":null,"homepage":"https://git.io/vSvsq","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/denji.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-03-18T02:01:25.000Z","updated_at":"2025-05-14T04:10:14.000Z","dependencies_parsed_at":"2024-09-23T07:00:47.286Z","dependency_job_id":null,"html_url":"https://github.com/denji/nginx-tuning","commit_stats":{"total_commits":49,"total_committers":4,"mean_commits":12.25,"dds":"0.18367346938775508","last_synced_commit":"e9aad516aaf6b87600cfcb40c9cd72e87afdebe9"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/denji%2Fnginx-tuning","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/denji%2Fnginx-tuning/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/denji%2Fnginx-tuning/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/denji%2Fnginx-tuning/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/denji","download_url":"https://codeload.github.com/denji/nginx-tuning/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254283336,"owners_count":22045140,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["best-practices","details","nginx","security","tuning"],"created_at":"2024-07-31T19:00:39.091Z","updated_at":"2025-05-15T06:03:57.045Z","avatar_url":"https://github.com/denji.png","language":null,"readme":"NGINX Tuning For Best Performance\n=================================\n\nYou can use any web server you like for this configuration; I chose nginx because it is the server I work with most.\n\nGenerally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered). 
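\n\nAs a rough sanity check on numbers like these, you can estimate capacity from your core count (a sketch with made-up example values, not a measurement):\n\n```bash\n# assumed example values: 8 cores, ~10K req/s per core for small static files\ncores=8\nrps_per_core=10000\necho \"approx capacity: $(( cores * rps_per_core )) req/s\"  # prints: approx capacity: 80000 req/s\n```\n\n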
The most I have seen is 50K to 80K requests per second (non-clustered) at about 30% CPU load; that was on `2 x Intel Xeon` with HyperThreading enabled, but it should work without problems on slower machines.\n\n__You must understand that this config is used in a testing environment and not in production, so you will need to find a way to implement most of those features as best as possible for your servers.__\n\n* [Stable version NGINX (deb/rpm)](https://nginx.org/en/linux_packages.html#stable)\n* [Mainline version NGINX (deb/rpm)](https://nginx.org/en/linux_packages.html#mainline)\n\nFirst, you will need to install nginx:\n\n```bash\nyum install nginx # RHEL/CentOS\napt install nginx # Debian/Ubuntu\n```\n\nBack up your original configs, then you can start reconfiguring. You will need to open `nginx.conf`, usually at `/etc/nginx/nginx.conf`, with your favorite editor.\n\n```nginx\n# you should set worker processes based on your CPU cores; nginx does not benefit from setting more than that\nworker_processes auto; # recent versions calculate it automatically\n\n# number of file descriptors used for nginx\n# the limit for the maximum FDs on the server is usually set by the OS.\n# if you don't set FDs then the OS limit will be used (commonly 1024)\nworker_rlimit_nofile 100000;\n\n# only log critical errors\nerror_log /var/log/nginx/error.log crit;\n\n# provides the configuration file context in which the directives that affect connection processing are specified\nevents {\n    # determines how many clients will be served per worker\n    # max clients = worker_connections * worker_processes\n    # max clients is also limited by the number of socket connections available on the system (~64k)\n    worker_connections 4000;\n\n    # optimized to serve many clients with each thread, essential for Linux -- for testing environment\n    use epoll;\n\n    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment\n    multi_accept on;\n}\n\nhttp {\n   
 # cache information about FDs, frequently accessed files\n    # can boost performance, but you need to test those values\n    open_file_cache max=200000 inactive=20s;\n    open_file_cache_valid 30s;\n    open_file_cache_min_uses 2;\n    open_file_cache_errors on;\n\n    # to boost I/O on HDD we can disable access logs\n    access_log off;\n\n    # copies data between one FD and another from within the kernel\n    # faster than read() + write()\n    sendfile on;\n\n    # send headers in one piece, which is better than sending them one by one\n    tcp_nopush on;\n\n    # don't buffer data sent, good for small data bursts in real time\n    # https://brooker.co.za/blog/2024/05/09/nagle.html\n    # https://news.ycombinator.com/item?id=10608356\n    #tcp_nodelay on;\n\n    # reduce the data that needs to be sent over the network -- for testing environment\n    gzip on;\n    # gzip_static on;\n    gzip_min_length 10240;\n    gzip_comp_level 1;\n    gzip_vary on;\n    gzip_disable msie6;\n    gzip_proxied expired no-cache no-store private auth;\n    gzip_types\n        # text/html is always compressed by HttpGzipModule\n        text/css\n        text/javascript\n        text/xml\n        text/plain\n        text/x-component\n        application/javascript\n        application/x-javascript\n        application/json\n        application/xml\n        application/rss+xml\n        application/atom+xml\n        font/truetype\n        font/opentype\n        application/vnd.ms-fontobject\n        image/svg+xml;\n\n    # allow the server to close connections on non-responding clients; this will free up memory\n    reset_timedout_connection on;\n\n    # timeout for reading the client request body -- default 60\n    client_body_timeout 10;\n\n    # if the client stops responding, free up memory -- default 60\n    send_timeout 2;\n\n    # the server will close the connection after this time -- default 75\n    keepalive_timeout 30;\n\n    # number of requests a client can make over keep-alive -- for testing environment\n    
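# (a rough note: values this high mainly help load-testing tools that reuse one\n    # connection; typical browsers send far fewer requests per connection)\n    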
keepalive_requests 100000;\n}\n```\n\nNow you can save the configuration and run one of the [commands](https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/#stopping-or-restarting-nginx) below\n\n```\nnginx -s reload\n/etc/init.d/nginx start|restart\n```\n\nIf you wish to test the configuration first you can run\n\n```\nnginx -t\n/etc/init.d/nginx configtest\n```\n\nJust For Security Reasons\n------------------------\n\n```nginx\nserver_tokens off;\n```\n\nNGINX Simple DDoS Defense\n-------------------------\n\nThis is far from a complete DDoS defense, but it can slow down some small attacks. This configuration is for a testing environment and you should use your own values.\n\n```nginx\n# limit the number of connections per single IP\nlimit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;\n\n# limit the number of requests for a given session\nlimit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;\n\n# apply the zones defined above; here we limit the whole server\nserver {\n    limit_conn conn_limit_per_ip 10;\n    limit_req zone=req_limit_per_ip burst=10 nodelay;\n}\n\n# if the request body size is more than the buffer size, then the entire (or partial)\n# request body is written into a temporary file\nclient_body_buffer_size  128k;\n\n# buffer size for reading client request header -- for testing environment\nclient_header_buffer_size 3m;\n\n# maximum number and size of buffers for large headers to read from client request\nlarge_client_header_buffers 4 256k;\n\n# read timeout for the request body from client -- for testing environment\nclient_body_timeout   3m;\n\n# how long to wait for the client to send a request header -- for testing environment\nclient_header_timeout 3m;\n```\n\nNow you can test the configuration again\n\n```bash\nnginx -t # /etc/init.d/nginx configtest\n```\n\nAnd then [reload or restart your 
nginx](https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/#stopping-or-restarting-nginx)\n\n```\nnginx -s reload\n/etc/init.d/nginx reload|restart\n```\n\nYou can load-test this configuration with `tsung`; when you are satisfied with the result, hit `Ctrl+C`, because otherwise it can run for hours.\n\nIncrease The Maximum Number Of Open Files (`nofile` limit) – Linux\n-----------------------------------------------\n\nThere are two ways to raise the nofile/max open files/file descriptors/file handles limit for NGINX in RHEL/CentOS 7+.\nWith NGINX running, check the current limit on the master process:\n\n    $ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files\n    Max open files            1024                 4096                 files\n\n#### And worker processes\n\n    ps --ppid $(cat /var/run/nginx.pid) -o %p|sed '1d'|xargs -I{} cat /proc/{}/limits|grep open.files\n\n    Max open files            1024                 4096                 files\n    Max open files            1024                 4096                 files\n\nSetting the `worker_rlimit_nofile` directive in `{,/usr/local}/etc/nginx/nginx.conf` fails because the SELinux policy doesn't allow `setrlimit`. 
This is shown in `/var/log/nginx/error.log`\n\n    2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)\n\n#### And in `/var/log/audit/audit.log`\n\n    type=AVC msg=audit(1437731200.211:366): avc:  denied  { setrlimit } for  pid=12066 comm=\"nginx\" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process\n\n#### `nofile` limit without systemd\n\n    # /etc/security/limits.conf\n    # /etc/default/nginx (ULIMIT)\n    $ nano /etc/security/limits.d/nginx.conf\n    nginx   soft    nofile  65536\n    nginx   hard    nofile  65536\n    # limits.conf is applied via PAM at session start; restart nginx afterwards\n\n#### `nofile` limit with systemd\n\n    $ mkdir -p /etc/systemd/system/nginx.service.d\n    $ nano /etc/systemd/system/nginx.service.d/nginx.conf\n    [Service]\n    LimitNOFILE=30000\n    $ systemctl daemon-reload\n    $ systemctl restart nginx.service\n\n#### SELinux boolean `httpd_setrlimit` to true (1)\n\nThis will set fd limits for the worker processes. Leave the `worker_rlimit_nofile` directive in `{,/usr/local}/etc/nginx/nginx.conf` and run the following as root:\n\n    setsebool -P httpd_setrlimit 1\n\nDoS [HTTP/1.1 and above: Range Requests](https://tools.ietf.org/html/rfc7233#section-6.1)\n----------------------------------------\n\nBy default [`max_ranges`](https://nginx.org/r/max_ranges) is not limited.\nDoS attacks can issue many Range requests, impacting I/O stability; consider limiting it, e.g. `max_ranges 1;`.\n\nSocket Sharding in NGINX 1.9.1+ (DragonFly BSD and Linux 3.9+)\n-------------------------------------------------------------------\n\n| Socket type      | Latency (ms) | Latency stdev (ms) | CPU Load |\n|------------------|--------------|--------------------|----------|\n| Default          | 15.65        | 26.59              | 0.3      |\n| accept_mutex off | 15.59        | 26.48              | 10       |\n| reuseport        | 12.35        | 3.15               | 0.3      |\n\n[Thread Pools](https://nginx.org/r/thread_pool) in NGINX Boost Performance 9x! 
(Linux)\n--------------\n\n[Multi-threaded](https://nginx.org/r/aio) sending of files is currently supported only on Linux.\nWithout a [`sendfile_max_chunk`](https://nginx.org/r/sendfile_max_chunk) limit, one fast connection may seize the worker process entirely.\n\nSelecting an upstream based on SSL protocol version\n---------------------------------------------------\n\n```nginx\n# $ssl_preread_protocol and ssl_preread belong to the stream module,\n# so this example goes in the stream {} context\nstream {\n    map $ssl_preread_protocol $upstream {\n        \"\"        ssh.example.com:22;\n        \"TLSv1.2\" new.example.com:443;\n        default   tls.example.com:443;\n    }\n\n    # ssh and https on the same port\n    server {\n        listen      192.168.0.1:443;\n        proxy_pass  $upstream;\n        ssl_preread on;\n    }\n}\n```\n\nHappy Hacking!\n==============\n\nReference links\n---------------\n\n* __https://github.com/trimstray/nginx-admins-handbook__\n* __https://github.com/GrrrDog/weird_proxies__\n* __https://github.com/h5bp/server-configs-nginx__\n* __https://github.com/leandromoreira/linux-network-performance-parameters__\n* https://github.com/nginx-boilerplate/nginx-boilerplate\n* https://www.nginx.com/blog/thread-pools-boost-performance-9x/\n* https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/\n* https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/\n* https://www.nginx.com/blog/performing-a-b-testing-nginx-plus/\n* https://www.nginx.com/blog/10-tips-for-10x-application-performance/\n* https://www.nginx.com/blog/http-keepalives-and-web-performance/\n* https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/\n* https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/\n* https://www.nginx.com/blog/introducing-cicd-with-nginx-and-nginx-plus/\n* https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/\n* https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/\n* https://www.nginx.com/blog/nginx-high-performance-caching/\n* https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/\n* 
https://nginx.org/r/pcre_jit\n* https://nginx.org/r/ssl_engine (`openssl engine -t`)\n* https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/\n* https://www.nginx.com/blog/tuning-nginx/\n* https://github.com/intel/asynch_mode_nginx\n* https://openresty.org/download/agentzh-nginx-tutorials-en.html\n* https://www.maxcdn.com/blog/nginx-application-performance-optimization/\n* https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/\n* https://medium.freecodecamp.org/a8afdbfde64d\n* https://medium.freecodecamp.org/secure-your-web-application-with-these-http-headers-fd66e0367628\n* https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88\n* https://gist.github.com/denji/9130d1c95e350c58bc50e4b3a9e29bf4\n* https://8gwifi.org/docs/nginx-secure.jsp\n* http://www.codestance.com/tutorials-archive/nginx-tuning-for-best-performance-255\n* https://ospi.fi/blog/centos-7-raise-nofile-limit-for-nginx.html\n* https://www.linode.com/docs/websites/nginx/configure-nginx-for-optimized-performance\n* https://haydenjames.io/nginx-tuning-tips-tls-ssl-https-ttfb-latency/\n* https://gist.github.com/kekru/c09dbab5e78bf76402966b13fa72b9d2\n\nStatic analyzers\n----------------\n* https://github.com/yandex/gixy\n\nSyntax highlighting\n-------------------\n* https://github.com/chr4/sslsecure.vim\n* https://github.com/chr4/nginx.vim\n* https://github.com/nginx/nginx/tree/master/contrib/vim\n\nNGINX config formatter\n----------------------\n* https://github.com/rwx------/nginxConfigFormatterGo\n* https://github.com/1connect/nginx-config-formatter\n* https://github.com/lovette/nginx-tools/tree/master/nginx-minify-conf\n\nNGINX configuration tools\n-------------------------\n* https://github.com/nginxinc/crossplane\n* https://github.com/valentinxxx/nginxconfig.io\n\nBBR (Linux 4.9+)\n----------------\n* https://blog.cloudflare.com/http-2-prioritization-with-nginx/\n* Since Linux v4.13, the FQ qdisc (`fq`) is no longer required for BBR.\n* 
https://github.com/google/bbr/blob/master/Documentation/bbr-quick-start.md\n* https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=218af599fa635b107cfe10acf3249c4dfe5e4123\n* https://github.com/systemd/systemd/issues/9725#issuecomment-413369212\n* If your distribution's kernel does not load `tcp_bbr` by default:\n```sh\nmodprobe tcp_bbr \u0026\u0026 echo 'tcp_bbr' \u003e\u003e /etc/modules-load.d/bbr.conf\necho 'net.ipv4.tcp_congestion_control=bbr' \u003e\u003e /etc/sysctl.d/99-bbr.conf\n# recommended for production; since Linux v4.13-rc1, BBR no longer strictly requires the FQ qdisc\necho 'net.core.default_qdisc=fq' \u003e\u003e /etc/sysctl.d/99-bbr.conf\nsysctl --system\n```\n","funding_links":[],"categories":["Others","best-practices"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdenji%2Fnginx-tuning","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdenji%2Fnginx-tuning","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdenji%2Fnginx-tuning/lists"}