Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/the-benchmarker/graphql-benchmarks
GraphQL benchmarks using the-benchmarker framework.
- Host: GitHub
- URL: https://github.com/the-benchmarker/graphql-benchmarks
- Owner: the-benchmarker
- License: MIT
- Created: 2019-03-06T19:06:34.000Z (almost 6 years ago)
- Default Branch: develop
- Last Pushed: 2021-08-19T23:13:58.000Z (over 3 years ago)
- Last Synced: 2024-08-07T08:13:17.574Z (6 months ago)
- Topics: benchmark, graphql, http, measurement, performance, web
- Language: Ruby
- Size: 5.95 MB
- Stars: 56
- Watchers: 9
- Forks: 7
- Open Issues: 2
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
# Which is the fastest GraphQL?
It's all about GraphQL server benchmarking across many languages.
Benchmarks cover maximum throughput and normal-use latency. For a more
detailed description of the methodology used, the how, and the why, see the
bottom of this page.

## Results
### Top 5 Ranking
| | Rate | Latency | Verbosity |
|:---:| ---- | ------- | --------- |
| :one: | agoo-c (c) | agoo (ruby) | fastify-mercurius (javascript) |
| :two: | ggql-i (go) | agoo-c (c) | express-graphql (javascript) |
| :three: | ggql (go) | ggql-i (go) | koa-koa-graphql (javascript) |
| :four: | agoo (ruby) | ggql (go) | apollo-server-fastify (javascript) |
| :five: | fastify-mercurius (javascript) | koa-koa-graphql (javascript) | apollo-server-express (javascript) |

#### Parameters
- Last updated: 2021-08-19
- OS: Linux (version: 5.7.1-050701-generic, arch: x86_64)
- CPU Cores: 12
- Connections: 1000
- Duration: 20 seconds

| [Rate](rates.md) | [Latency](latency.md) | [Verbosity](verbosity.md) | [README](README.md) |
| ---------------- | --------------------- | ------------------------- | ------------------- |

## Requirements
+ [Ruby](https://www.ruby-lang.org) for tooling
+ [Docker](https://www.docker.com) as **frameworks** are `isolated` into _containers_
+ [perfer](https://github.com/ohler55/perfer) the benchmarking tool, `>= 1.5.3`
+ [Oj](https://github.com/ohler55/oj) is needed by the benchmarking Ruby script, `>= 3.7`
+ [RSpec](https://rubygems.org/gems/rspec) is needed for testing
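As a minimal sketch, the Ruby-side dependencies can be installed with RubyGems; `perfer` is built from its own repository (the `make` step below is an assumption, check the perfer README):

```sh
# Ruby tooling: Oj for JSON parsing, RSpec for the test suite
gem install oj rspec

# perfer is built from source; the build command here is assumed
git clone https://github.com/ohler55/perfer
cd perfer && make
```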
## Usage
+ Install all dependencies: Ruby, Docker, Perfer, Oj, and RSpec.
+ Build containers
> build all
```sh
build.rb
```

> build just named targets
```sh
build.rb [target] [target] ...
```

+ Run the tests (optional)
```sh
rspec spec.rb
```

+ Run the benchmarks
> `frameworks` is an optional list of frameworks or languages to run (example: ruby agoo-c)
```sh
benchmarker.rb [frameworks...]
```
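For example, a full pass over just two frameworks (names taken from the ranking table above) might look like the following sketch, assuming the scripts are run from the repository root:

```sh
build.rb agoo-c ggql        # build only these two framework containers
rspec spec.rb               # optional: run the test suite
benchmarker.rb agoo-c ggql  # benchmark only these two frameworks
```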
## Methodology
Performance of a framework includes latency and the maximum number of requests
that can be handled in a span of time. The assumption is that users of a
framework will choose to run at somewhat less than fully loaded. Running fully
loaded would leave no room for a spike in usage. With that in mind, the
maximum number of requests per second serves as the upper limit for a
framework.

Latency tends to vary significantly, not only randomly but according to the
load. A typical latency versus throughput curve starts at some low-load value
and stays fairly flat in the normal load region until some inflection
point. At the inflection point until the maximum throughput the latency
increases.

```
| *
| ****
| ****
| ****
|******************************************************
+---------------------------------------------------------------------
^ \ / ^ ^
low-load normal-load inflection max
```

These benchmarks show the normal-load latency, as that is what most users will
see when using a service. Most deployments do not run at near maximum
throughput but try to stay in the normal-load area while staying prepared for
spikes in usage. To accommodate slower frameworks, a rate of 1000 requests per
second is used for determining the median latency. The assumption is that a
rate of 1000 requests per second falls in the normal range for most if not all
frameworks tested.

The `perfer` benchmarking tool is used for these reasons:
- A rate can be specified for latency determination.
- JSON output makes parsing output easier.
- Fewer threads are needed by `perfer` leaving more for the application being benchmarked.
- `perfer` is faster than `wrk`, albeit only slightly.
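Since `perfer` emits JSON, post-processing is straightforward in Ruby with Oj. Below is a hypothetical sketch; the field names in the parsed report are illustrative assumptions, not perfer's documented output schema:

```ruby
require 'oj'

# Hypothetical: pull the median latency out of a saved perfer JSON report.
# The 'latency' / 'median' keys are assumptions for illustration only.
def median_latency(path)
  report = Oj.load(File.read(path))
  report.dig('latency', 'median')
end

puts median_latency('perfer-report.json')
```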
## How to Contribute
In any way you want ...
+ Provide a Pull Request for a framework addition
+ Report a bug (on any implementation)
+ Suggest an idea
+ [More details](CONTRIBUTING.md)

All ideas are welcome.
## Contributors
- [Peter Ohler](https://github.com/ohler55) - Author, maintainer
- [the-benchmarker/web-frameworks](https://github.com/the-benchmarker/web-frameworks) - the original source, cloned and modified for this repository