# Grug Load Balancer (gruglb)

A simplistic L4/L7 load balancer, written in Rust, for [grug brained developers](https://grugbrain.dev/) (me).

# Why?

This is largely a toy project and not intended for production use. It also gives me a simple, no-frills load balancer to use for my own projects, and a way to learn about writing more complex systems in Rust.

## Install

Using `cargo`, you can install it via:

```bash
cargo install --git https://github.com/jdockerty/gruglb --bin gruglb
```

Once installed, pass a YAML config file using the `--config` flag, for example:

```bash
gruglb --config path/to/config.yml
```

## Features

- Round-robin load balancing of HTTP/HTTPS/TCP connections.
- Health checks for HTTP/HTTPS/TCP targets.
- Graceful termination.
- TLS termination; backends are still expected to be accessible over HTTP.

## How does it work?

Given a number of pre-defined targets, each containing various backend servers, `gruglb` routes traffic between them in round-robin fashion.

A backend server is deemed unhealthy when it fails a `GET` request to the specified `health_path` (for an HTTP target) or fails to establish a connection (for a TCP target). An unhealthy backend is removed from the routable backends for its target, so a target with two backends will have all traffic directed to the single healthy backend until the other server becomes healthy again.

Health checks begin when the application starts and are conducted at a fixed interval throughout its lifetime.

The configuration is defined in YAML. Using the `example-config.yaml` that is used for testing, it looks like this:

```yaml
# The interval, in seconds, to conduct HTTP/TCP health checks.
health_check_interval: 2

# Run a graceful shutdown period of 30 seconds to terminate separate worker threads.
# Defaults to true.
graceful_shutdown: true

# Log level information, defaults to 'info'.
logging: info

# Defined "targets": each key simply acts as a convenient label for the various
# backend servers which are to have traffic routed to them.
targets:

  # TCP target example
  tcpServersA:
    # Either TCP or HTTP, defaults to TCP when not set.
    protocol: 'tcp'

    # Port to bind to for this target.
    listener: 9090

    # Statically defined backend servers.
    backends:
      - host: "127.0.0.1"
        port: 8090
      - host: "127.0.0.1"
        port: 8091

  # HTTP target example
  webServersA:
    protocol: 'http'
    listener: 8080
    backends:
      - host: "127.0.0.1"
        port: 8092
        # A `health_path` is only required for HTTP backends.
        health_path: "/health"
      - host: "127.0.0.1"
        port: 8093
        health_path: "/health"
```

Using the HTTP listener bound to `8080` as our example, sending traffic to it should produce responses from the configured backends under `webServersA`.
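The round-robin selection described above can be sketched roughly as follows. This is a minimal illustration only, not `gruglb`'s actual implementation; the `Backend` and `RoundRobin` types and names here are hypothetical.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// A hypothetical backend address (gruglb's real types differ).
#[derive(Debug, Clone)]
struct Backend {
    host: String,
    port: u16,
}

/// Minimal round-robin selector over the currently-healthy backends.
struct RoundRobin {
    next: AtomicUsize,
}

impl RoundRobin {
    fn new() -> Self {
        Self { next: AtomicUsize::new(0) }
    }

    /// Pick the next healthy backend, cycling through the slice.
    /// Returns None when every backend has been marked unhealthy.
    fn pick<'a>(&self, healthy: &'a [Backend]) -> Option<&'a Backend> {
        if healthy.is_empty() {
            return None;
        }
        let idx = self.next.fetch_add(1, Ordering::Relaxed) % healthy.len();
        Some(&healthy[idx])
    }
}

fn main() {
    let backends = vec![
        Backend { host: "127.0.0.1".into(), port: 8092 },
        Backend { host: "127.0.0.1".into(), port: 8093 },
    ];
    let rr = RoundRobin::new();
    // Successive requests alternate between the healthy backends.
    for _ in 0..4 {
        let b = rr.pick(&backends).expect("at least one healthy backend");
        println!("routing to {}:{}", b.host, b.port);
    }
}
```

When a health check marks a backend unhealthy, it is simply absent from the `healthy` slice, so the remaining backends absorb all traffic.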
In this example, the `fake_backend` application provides the backend servers.

```bash
# In separate terminal windows (or as background jobs) run the fake backends
fake_backend --id fake-1 --protocol http --port 8092
fake_backend --id fake-2 --protocol http --port 8093

# In your main window, run the load balancer
gruglb --config tests/fixtures/example-config.yaml

# Send some traffic to the load balancer
for i in {1..5}; do curl localhost:8080; echo; done

# The requests are routed in round-robin fashion to the backends,
# so the output from the above command should look like this
Hello from fake-2
Hello from fake-1
Hello from fake-2
Hello from fake-1
Hello from fake-2
```

## Performance

_These tests are not very scientific and were simply run as small experiments to see comparative performance between my implementation and something that I know is very good._

Using [`bombardier`](https://github.com/codesenberg/bombardier/) as the tool of choice.

### gruglb

<details>

 <summary> Running on localhost </summary>

_CPU: Intel i7-8700 (12) @ 4.600GHz_

Using two [`simplebenchserver`](https://pkg.go.dev/github.com/codesenberg/bombardier@v1.2.6/cmd/utils/simplebenchserver) servers as backends for an HTTP target:

```
bombardier http://127.0.0.1:8080 --latencies --fasthttp -H "Connection: close"
Bombarding http://127.0.0.1:8080 for 10s using 125 connection(s)
[========================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     42558.30    3130.17   47446.16
  Latency        2.93ms   427.72us    29.29ms
  Latency Distribution
     50%     2.85ms
     75%     3.17ms
     90%     3.61ms
     95%     4.01ms
     99%     5.22ms
  HTTP codes:
    1xx - 0, 2xx - 425267, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
```
</details>

<details>

 <summary> Running on AWS with m5.xlarge nodes </summary>

This test was performed with 4 nodes: 1 for the load balancer, 2 backend servers running a slightly modified version of `simplebenchserver` which allows binding to `0.0.0.0`, and 1 node used to send traffic internally to the load balancer.

```
bombardier http://172.31.22.113:8080 --latencies --fasthttp -H "Connection: close"
Bombarding http://172.31.22.113:8080 for 10s using 125 connection(s)
[======================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     16949.53    9201.05   29354.53
  Latency        7.37ms     6.62ms   103.98ms
  Latency Distribution
     50%     4.99ms
     75%     6.14ms
     90%    14.03ms
     95%    22.23ms
     99%    42.41ms
  HTTP codes:
    1xx - 0, 2xx - 169571, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    20.14MB/s
```

</details>

### nginx

<details>

<summary> Configuration </summary>

```
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        location / {
            proxy_pass http://backend;
        }
    }
    upstream backend {
        server 172.31.21.226:8091;
        server 172.31.27.167:8092;
    }
}
```

</details>

<details>

 <summary> Running on localhost </summary>

Using the same two backend servers and a single worker process for `nginx`:

```
bombardier http://127.0.0.1:8080 --latencies --fasthttp -H "Connection: close"
Bombarding http://127.0.0.1:8080 for 10s using 125 connection(s)
[========================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     11996.59     784.99   14555.03
  Latency       10.42ms     2.91ms   226.42ms
  Latency Distribution
     50%    10.37ms
     75%    10.72ms
     90%    11.04ms
     95%    11.22ms
     99%    11.71ms
  HTTP codes:
    1xx - 0, 2xx - 119862, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    14.29MB/s
```

Something to note is that `gruglb` does not have the concept of `worker_processes` like `nginx` does.

This test was run with the default of a single worker process; `nginx` performs even better with multiple (~85k req/s).

</details>

<details>

 <summary> Running on AWS with m5.xlarge nodes </summary>

This test was performed with 4 nodes: 1 for the load balancer, 2 backend servers running a slightly modified version of `simplebenchserver` which allows binding to `0.0.0.0`, and 1 node used to send traffic internally to the load balancer.

Again, using the default of `worker_processes 1;`:

```
bombardier http://172.31.22.113:8080 --latencies --fasthttp -H "Connection: close"
Bombarding http://172.31.22.113:8080 for 10s using 125 connection(s)
[======================================================================================================================================================] 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec      8207.42    2301.56   11692.24
  Latency       15.22ms     6.57ms   100.47ms
  Latency Distribution
     50%    14.71ms
     75%    15.88ms
     90%    18.67ms
     95%    25.69ms
     99%    49.37ms
  HTTP codes:
    1xx - 0, 2xx - 82117, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:     9.93MB/s
```

</details>
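The TCP health check described under "How does it work?" amounts to attempting a connection at a fixed interval. A minimal sketch of that idea, with an illustrative function name and timeout that are not `gruglb`'s real API:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// A TCP backend is considered healthy if a connection can be established
/// within the timeout; the name and timeout value are illustrative only.
fn tcp_backend_is_healthy(addr: SocketAddr, timeout: Duration) -> bool {
    TcpStream::connect_timeout(&addr, timeout).is_ok()
}

fn main() {
    // Start a throwaway listener so the check has something to connect to.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").expect("bind");
    let addr = listener.local_addr().expect("local addr");

    let timeout = Duration::from_millis(250);
    println!("listener healthy: {}", tcp_backend_is_healthy(addr, timeout));
}
```

An HTTP health check works the same way, except the probe is a `GET` to the backend's `health_path` and a non-success response also counts as unhealthy.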