https://github.com/smallnest/1m-go-tcp-server
benchmarks for implementation of servers which support 1 million connections
- Host: GitHub
- URL: https://github.com/smallnest/1m-go-tcp-server
- Owner: smallnest
- Created: 2019-02-15T09:07:15.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2021-04-15T14:32:27.000Z (over 3 years ago)
- Last Synced: 2024-10-01T13:43:21.638Z (about 1 month ago)
- Topics: benchmark, epoll, go, golang
- Language: Go
- Homepage:
- Size: 43.9 KB
- Stars: 1,886
- Watchers: 60
- Forks: 351
- Open Issues: 4
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-starts - smallnest/1m-go-tcp-server - benchmarks for implementation of servers which support 1 million connections (Go)
- awesome-list - 1m-go-tcp-server
- awesome-golang-repositories - 1m-go-tcp-server
README
# Benchmarks for server implementations that support 1M connections
Inspired by [Handling 1M websockets connections in Go](https://github.com/eranyanay/1m-go-websockets).
## Servers
1. **1_simple_tcp_server**: a 1M-connection server using one goroutine per connection
2. **2_epoll_server**: a 1M-connection server based on `epoll`
3. **3_epoll_server_throughputs**: 2_epoll_server with throughput and latency measurements added
4. **4_epoll_client**: a client implemented with `epoll`
5. **5_multiple_client**: a client that uses multiple `epoll` instances to manage connections
6. **6_multiple_server**: a server that uses multiple `epoll` instances to manage connections
7. **7_server_prefork**: a server using the Apache-style `prefork` model
8. **8_server_workerpool**: a server using the `Reactor` pattern with multiple event loops
9. **9_few_clients_high_throughputs**: a simple goroutine-per-connection server for throughput and latency tests
10. **10_io_intensive_epoll_server**: an I/O-bound multiple-`epoll` server
11. **11_io_intensive_goroutine**: an I/O-bound goroutine-per-connection server
12. **12_cpu_intensive_epoll_server**: a CPU-bound multiple-`epoll` server
13. **13_cpu_intensive_goroutine**: a CPU-bound goroutine-per-connection server
## Test Environment
- two `E5-2630 V4` CPUs, 20 physical cores, 40 logical cores
- 32 GB memory

Tune the Linux kernel and limits:
```sh
sysctl -w fs.file-max=2000500
sysctl -w fs.nr_open=2000500
sysctl -w net.nf_conntrack_max=2000500
ulimit -n 2000500
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1
```
The client sends the next request only after it has received the response to the previous one; requests are not pipelined.
## Benchmarks

### 1M connections
| | throughput (tps) | latency |
|--|--|--|
|goroutine-per-conn|202830|4.9s|
|single epoll(both server and client)| 42495 | 23s|
|single epoll server| 42402 | 0.8s|
|multiple epoll server| 197814 | 0.9s|
|prefork| 444415 | 1.5s|
|workerpool| 190022 | 0.3s|

**Articles in Chinese**:
1. [Thoughts on 1M Go TCP Connections: Reducing Resource Usage with epoll](https://colobu.com/2019/02/23/1m-go-tcp-connection/)
2. [Thoughts on 1M Go TCP Connections, Part 2: Performance of a Million-Connection Server](https://colobu.com/2019/02/27/1m-go-tcp-connection-2/)
3. [Thoughts on 1M Go TCP Connections, Part 3: Server Throughput and Latency with Few Connections](https://colobu.com/2019/02/28/1m-go-tcp-connection-3/)