https://github.com/googlecontainertools/minikube-image-benchmark
- Host: GitHub
- URL: https://github.com/googlecontainertools/minikube-image-benchmark
- Owner: GoogleContainerTools
- Created: 2021-03-04T20:14:03.000Z (almost 5 years ago)
- Default Branch: main
- Last Pushed: 2023-11-06T19:10:14.000Z (over 2 years ago)
- Last Synced: 2025-01-11T14:21:53.393Z (about 1 year ago)
- Language: Go
- Size: 90.8 KB
- Stars: 1
- Watchers: 4
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: README.md
# minikube-image-benchmark
## Purpose
The purpose of this project is to provide a simple-to-run application that benchmarks different methods of building and pushing an image to minikube.
Each benchmark is run multiple times; the average run time across the runs is calculated and written to a CSV file for review.
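The averaging step can be sketched in shell (a minimal illustration with hypothetical timings and hypothetical CSV column names, not the tool's exact output format):

```
# Hypothetical per-run timings in seconds for one method (not real data).
runs="9.5 10.2 10.3"

# Average the runs, then print a CSV header plus one row for the method.
avg=$(echo $runs | awk '{ sum = 0; for (i = 1; i <= NF; i++) sum += $i; printf "%.2f", sum / NF }')
echo "method,average_seconds"
echo "image load,$avg"
```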
## Warning!
This benchmarking tool makes changes to your Docker and minikube instances, so don't run it if you don't want those disturbed.
For example, `/etc/docker/daemon.json` is modified and Docker is restarted, and the following commands are run:
```
minikube delete --all
docker system prune -a -f
```
## Requirements
* Docker needs to be installed
* Currently only supported on Linux (only tested on Debian)
## Methods
The three methods currently benchmarked are `minikube docker-env`, `minikube image load`, and the minikube registry addon, with more to be added in the future.
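The commands behind each method look roughly like the following (a sketch using the standard minikube CLI; the image name `benchmark-image` is hypothetical, and the registry addon additionally needs its port exposed to the host):

```
# minikube docker-env: build directly inside minikube's Docker daemon
eval $(minikube docker-env)
docker build -t benchmark-image .

# minikube image load: build locally, then copy the image into minikube
docker build -t benchmark-image .
minikube image load benchmark-image

# registry addon: push the image through a local registry
minikube addons enable registry
docker build -t localhost:5000/benchmark-image .
docker push localhost:5000/benchmark-image
```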
## How to Run Benchmarks
```
make
```
```
./out/benchmark # defaults to 100 runs per method
```
or
```
./out/benchmark --runs 20 # will run 20 runs per method
```
```
cat ./out/results.csv # where the output is stored
```
## Non-Iterative vs Iterative Flow
In the non-iterative flow, the images/cache are cleared after every image build, so each build starts on a fresh Docker.
In the iterative flow, the images/cache are cleared only at the end of a set of benchmarks. With 20 runs per benchmark, for example, no cache is cleared until all 20 runs have completed; only the last layer of the image is changed between runs.
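The "only the last layer changes" idea can be pictured with a Dockerfile like this hypothetical sketch (not the project's actual Dockerfile):

```
FROM alpine
# In the iterative flow, these earlier layers stay in Docker's build
# cache across runs within a benchmark set.
RUN echo base > /base.txt
# Only this final layer's content changes between runs, so Docker
# rebuilds just this layer and reuses the cached ones above it.
RUN echo "run-specific change" > /final.txt
```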