Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nginx-modules/ngx_http_tls_dyn_size
Optimizing TLS over TCP to reduce latency for NGINX
- Host: GitHub
- URL: https://github.com/nginx-modules/ngx_http_tls_dyn_size
- Owner: nginx-modules
- Created: 2016-12-01T17:14:58.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2024-05-04T19:16:00.000Z (6 months ago)
- Last Synced: 2024-08-01T19:33:05.705Z (3 months ago)
- Topics: cloudflare, dynamic, http2, https, nginx, optimization, segment, tcp, tls
- Homepage: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/
- Size: 19.5 KB
- Stars: 30
- Watchers: 6
- Forks: 7
- Open Issues: 1
Metadata Files:
- Readme: README.md
# Optimizing TLS over TCP to reduce latency for NGINX
* [`nginx__dynamic_tls_records`](https://github.com/cloudflare/sslconfig/blob/3e45b99/patches/)
* [`Optimizing HTTP/2 prioritization with BBR and tcp_notsent_lowat`](https://blog.cloudflare.com/http-2-prioritization-with-nginx/)

### What we do now
We use a static record size of 4K.
This gives a good balance of latency and throughput.

#### Configuration
*Example*
```nginx
http {
ssl_dyn_rec_enable on;
}
```

#### Optimize latency
By initially sending small (one TCP segment) records,
we are able to avoid HoL blocking of the first byte.
This means TTFB is sometimes lower by a whole RTT.

#### Optimizing throughput
By sending increasingly larger records later in the connection,
when HoL blocking is not a problem, we reduce the per-record TLS overhead
(29 bytes per record with GCM/CHACHA-POLY).

#### Logic
Start each connection with small records
(1369 bytes by default; change with `ssl_dyn_rec_size_lo`).

After a given number of records (40; change with `ssl_dyn_rec_threshold`),
start sending larger records (4229 bytes; `ssl_dyn_rec_size_hi`).

Eventually, after the same number of records again,
start sending the largest records (`ssl_buffer_size`).

In case the connection idles for a given amount of time
(1 s; `ssl_dyn_rec_timeout`), the process repeats itself
(i.e. we begin sending small records again).

### Configuration directives
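Taken together, the directives documented below might be combined like this. This is only a sketch using the default values stated in this README; the defaults already apply when the module is enabled, so explicit values are shown purely for illustration:

```nginx
http {
    # Enable dynamic TLS record sizing (off by default).
    ssl_dyn_rec_enable on;

    # Start with records that fit a single TCP segment (default 1369).
    ssl_dyn_rec_size_lo 1369;

    # Then grow to records spanning three segments (default 4229).
    ssl_dyn_rec_size_hi 4229;

    # Move to the next size after this many records (default 40).
    ssl_dyn_rec_threshold 40;

    # After this idle time, restart with small records (default 1000;
    # per this README, 0 disables dynamic records entirely).
    ssl_dyn_rec_timeout 1000;
}
```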
#### ssl_dyn_rec_enable
* **syntax**: `ssl_dyn_rec_enable bool`
* **default**: `off`
* **context**: `http`, `server`

#### ssl_dyn_rec_timeout
* **syntax**: `ssl_dyn_rec_timeout number`
* **default**: `1000`
* **context**: `http`, `server`

We want the initial records to fit into one TCP segment
so we don't get TCP HoL blocking due to TCP Slow Start.

A connection always starts with small records, but after
a given number of records sent, we make the records larger
to reduce header overhead.

After a connection has idled for a given timeout, begin
the process from the start. The actual parameters are
configurable. If `ssl_dyn_rec_timeout` is `0`, we assume `ssl_dyn_rec` is `off`.

#### ssl_dyn_rec_size_lo
* **syntax**: `ssl_dyn_rec_size_lo number`
* **default**: `1369`
* **context**: `http`, `server`

Default sizes for the dynamic records are chosen so that the maximal
TLS + IPv6 overhead fits into a single TCP segment for `lo` and three segments for `hi`:

1369 = 1500 - 40 (IP) - 20 (TCP) - 10 (Timestamps) - 61 (Max TLS overhead)

#### ssl_dyn_rec_size_hi
* **syntax**: `ssl_dyn_rec_size_hi number`
* **default**: `4229`
* **context**: `http`, `server`

4229 = (1500 - 40 - 20 - 10) * 3 - 61
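The arithmetic behind both defaults can be checked directly. A quick sanity check of the two identities above, using the per-segment overheads quoted in this README:

```python
# Sanity-check the default dynamic record sizes derived above.
MTU = 1500
IP6_HDR = 40           # IPv6 header
TCP_HDR = 20           # TCP header
TCP_TS = 10            # TCP timestamp option
TLS_MAX_OVERHEAD = 61  # max TLS record overhead per the README

payload_per_segment = MTU - IP6_HDR - TCP_HDR - TCP_TS   # 1430 bytes

size_lo = payload_per_segment - TLS_MAX_OVERHEAD         # fits 1 segment
size_hi = payload_per_segment * 3 - TLS_MAX_OVERHEAD     # fits 3 segments

print(size_lo)  # 1369
print(size_hi)  # 4229
```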
#### ssl_dyn_rec_threshold
* **syntax**: `ssl_dyn_rec_threshold number`
* **default**: `40`
* **context**: `http`, `server`

### License
* [Cloudflare](https://github.com/cloudflare), [Vlad Krasnov](https://github.com/vkrasnov)