https://github.com/skippia/js-benchmark-builder
Comparison of different JS frameworks/runtimes across several use cases, scaffolding a web server dynamically from the CLI
- Host: GitHub
- URL: https://github.com/skippia/js-benchmark-builder
- Owner: Skippia
- Created: 2024-08-29T13:12:53.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-10-11T14:42:01.000Z (about 1 year ago)
- Last Synced: 2025-01-13T14:52:36.464Z (9 months ago)
- Topics: abstract-server, bun-benchmark, dynamic-transport, express-benchmark, fastify-benchmark, heavy-blocking, heavy-non-blocking, javascript-performance, nodejs-benchmark, performance, pg-pool-create-user, pg-pool-get-user, redis-create-user, redis-get-user, uws-benchmark
- Language: TypeScript
- Homepage:
- Size: 564 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Benchmark playground
## Description
This repository is devoted to benchmarking different JS frameworks/runtimes (a.k.a. "transports") across different use cases, building the web server dynamically at runtime based on CLI arguments.
These transports are:
- Node.js (v20)
- Bun (v1.1.26)
- Express.js (v4.19.2)
- Fastify (v4.28.1)
- uWebSockets (v20.48.0)

These use cases are (a rough sketch of some of the handlers follows the list):
- Empty request
- Heavy non-blocking request (setTimeout)
- Heavy blocking request (heavy CPU-bound)
- Pg-pool create user request
- Pg-pool get user request
- Redis create user request
- Redis get user request
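
The two "heavy" variants differ in where the work happens: the non-blocking case parks the request on a timer and keeps the event loop free, while the blocking case burns CPU on the main thread. Below is a minimal sketch of what such handlers could look like; the names and numbers are illustrative assumptions, not the repository's actual code.
```typescript
// Hypothetical handlers for the compute-oriented use cases (illustrative only).

// Empty request: respond immediately, measuring pure framework overhead.
export const empty = async (): Promise<string> => 'ok'

// Heavy non-blocking: the event loop stays free while the timer is pending.
export const heavyNonBlocking = (): Promise<string> =>
  new Promise<string>((resolve) => setTimeout(() => resolve('done'), 100))

// Heavy blocking: a CPU-bound loop that occupies the event loop.
export const heavyBlocking = async (): Promise<string> => {
  let acc = 0
  for (let i = 0; i < 1e7; i++) acc += Math.sqrt(i)
  return acc.toFixed(0)
}
```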
## Features

- dynamic scaffolding of the web server from the CLI (transport + use case); see the sketch below
- dynamic scaffolding of the benchmark test from the CLI
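
As a rough illustration of the idea (not the repository's actual implementation; all names below are hypothetical), the scaffolding boils down to parsing the `-t` and `-u` flags and handing the chosen use case handler to the chosen transport factory:
```typescript
import { parseArgs } from 'node:util'
import { createServer } from 'node:http'

type Handler = () => Promise<string>

// Hypothetical registries; the real project covers all listed transports and use cases.
const usecases: Record<string, Handler> = {
  empty: async () => 'ok',
}

const transports: Record<string, (handler: Handler, port: number) => void> = {
  node: (handler, port) =>
    createServer(async (_req, res) => res.end(await handler())).listen(port),
  // express, fastify, uws, bun ... would register here
}

// Read `-t` (transport) and `-u` (use case) from the command line.
const { values } = parseArgs({
  options: {
    transport: { type: 'string', short: 't', default: 'node' },
    usecase: { type: 'string', short: 'u', default: 'empty' },
  },
})

transports[values.transport!](usecases[values.usecase!], 3001)
```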
## How it works?

### Class diagram
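
The diagram itself is not reproduced here. As a hypothetical approximation of the structure it describes (class and method names are assumptions, not the repository's actual types), each transport implements a common abstract server contract:
```typescript
import { createServer, type Server } from 'node:http'

type Handler = () => Promise<string>

// Hypothetical contract shared by all transports (illustrative only).
abstract class AbstractServer {
  constructor(protected readonly port: number) {}
  abstract registerRoute(path: string, handler: Handler): void
  abstract listen(): void
  abstract close(): void
}

// One possible concrete transport: plain node:http.
class NodeHttpServer extends AbstractServer {
  private readonly routes = new Map<string, Handler>()
  private server?: Server

  registerRoute(path: string, handler: Handler): void {
    this.routes.set(path, handler)
  }

  listen(): void {
    this.server = createServer(async (req, res) => {
      const handler = this.routes.get(req.url ?? '')
      res.end(handler ? await handler() : 'not found')
    }).listen(this.port)
  }

  close(): void {
    this.server?.close()
  }
}
```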
### Sequence diagram
## Usage
### Manual mode
In this mode you run the web server and the benchmark use case separately. For example:
1. Run web server:
```bash
npm run server -- -t node -u empty
```
Supported flags for running the server:
- u — use case (*)
- t — transport (*)

2. Run benchmark:
```bash
npm run benchmark:manual -- -u empty -c 100 -p 1 -w 3 -d 60
```

Supported flags for manual benchmark running:
- u — use case (*)
- c — connections
- p — pipelining factor
- w — workers
- d — duration

or
```bash
autocannon http://localhost:3001/empty -d 30 -c 100 -w 3
```

This mode prints the benchmark result for the specific use case to the terminal only.
### Automatic mode
In this mode you run a single script which, under the hood, tests all use cases on every transport (the configuration lives in `src/benchmark/automate-config.ts`; a rough sketch of its shape is given after the command below).
Run automate script:
```bash
npm run benchmark:automate
```

This mode writes the benchmark result to a new file `/benchmarks-data/benchmark-${last-snapshot}.json` and creates / updates the `benchmark-summary.md` file, which contains a comparison table based on the last snapshot JSON file.
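
The config shape below is purely an assumption for illustration (field names are hypothetical); the real `src/benchmark/automate-config.ts` in the repository is authoritative. The idea is a cross product of transports and use cases plus shared load settings:
```typescript
// Hypothetical shape of the automation config (illustrative only).
type Transport = 'node' | 'bun' | 'express' | 'fastify' | 'uws'
type Usecase =
  | 'empty'
  | 'heavy-blocking'
  | 'heavy-non-blocking'
  | 'pg-pool-create-user'
  | 'pg-pool-get-user'
  | 'redis-create-user'
  | 'redis-get-user'

export interface AutomateConfig {
  transports: Transport[]
  usecases: Usecase[]
  load: {
    connections: number
    pipelining: number
    workers: number
    durationSec: number
  }
}

export const automateConfig: AutomateConfig = {
  transports: ['node', 'fastify', 'uws'],
  usecases: ['empty', 'heavy-blocking'],
  load: { connections: 100, pipelining: 1, workers: 3, durationSec: 60 },
}
```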
## Benchmark results
- To inspect the raw data, check `/benchmarks-data/benchmark-${last-snapshot}.json` (each new benchmark run generates a new JSON file with the results).
- To see the summary of the last benchmark (comparison table), check the `benchmark-summary.md` file.