# numpy-benchmarks-openblas

Scripts for automatically benchmarking several CPU architectures supported by OpenBLAS

https://github.com/czgdp1807/numpy-benchmarks-openblas

## About

This repository contains scripts for automatically benchmarking several CPU architectures supported by OpenBLAS simultaneously.

It will also contain scripts for analysing the resulting benchmarking data.

## Steps to use

Use the [size_config](https://github.com/czgdp1807/numpy/tree/size_config) branch from my fork of numpy.

Inside the `numpy` project root, execute the following steps:

1. `git clean -fdx`
2. `pip install -e . --no-build-isolation`
3. `export NUMPY_BENCHMARK_SIZE_FILE=full/path/to/bench_sizes.json`
4. `python ../numpy-benchmarks-openblas/script.py --set-commit-hash=2da02ea321f557c0cfe0ad6d0e7d8a4354c51103 --benchmark-name=bench_linalg.Eindot.time_matmul_a_b --hardware=x86_64 --result-dir=/path/to/dir/where/results/will/be/stored/by/this/script`

In the fourth step, note the `../numpy-benchmarks-openblas/script.py` path. It assumes that `numpy-benchmarks-openblas` is checked out in the same directory as `numpy`. For example, if `numpy` is present in `~/Quansight/`, then `numpy-benchmarks-openblas` should also be present in `~/Quansight/`.
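If you want to verify that layout before running anything, here is a minimal sketch (not part of this repository, and assuming the commands are run from the `numpy` project root):

```python
import os

# script.py is invoked as ../numpy-benchmarks-openblas/script.py from inside the
# numpy project root, so the two checkouts are expected to be siblings, e.g.
#   ~/Quansight/numpy
#   ~/Quansight/numpy-benchmarks-openblas
numpy_root = os.getcwd()  # assumes you run the benchmark commands from here
bench_repo = os.path.join(os.path.dirname(numpy_root), "numpy-benchmarks-openblas")
assert os.path.isdir(bench_repo), "clone numpy-benchmarks-openblas next to numpy"
```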

For graphical representation, pass `--presentation=graph`. The graphs will be saved in `simplified_results/graphs` under the path passed to `--result-dir`. For example, if you pass `--result-dir=~/x/y/z`, the graphs will be saved inside `~/x/y/z/simplified_results/graphs`.

After running `script.py` once, if you only want to visualise the benchmarking data, run `CompareSimplifiedBenchmarkResults.py` with all the command options unchanged. In other words, just replace `script.py` with `CompareSimplifiedBenchmarkResults.py` in the command above; it will only plot the graphs or output a table.

To tune the sizes for the different benchmarks, change the values in `bench_sizes.json`. The keys are the names of the benchmarks (defined as Python classes in `bench_linalg.py`). The value for each benchmark key is a mapping from a variable (defined inside the corresponding Python class) to its sizes/dimensions; change these to control what gets benchmarked.
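The exact keys and value shapes depend on the classes in `bench_linalg.py` and on what the `size_config` branch expects; the snippet below is only a hypothetical illustration of the nesting described above (benchmark name → variable → sizes), written in Python so the structure can be annotated:

```python
import json

# Hypothetical illustration only: the real benchmark names, variable names and
# size shapes must match the classes in bench_linalg.py and the size_config branch.
bench_sizes = {
    "Eindot": {              # hypothetical key: a benchmark class from bench_linalg.py
        "a": [1000, 1000],   # hypothetical variable -> its dimensions
        "b": [1000, 1000],
    },
}

with open("bench_sizes.json", "w") as f:
    json.dump(bench_sizes, f, indent=4)
```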

`CompareAndParseMarkdownResults.py` can be used to parse and compare two tabular benchmarking results generated by this repository. Note that the set of kernels in both tabular results should be the same.
For each benchmark it lists all results that differ by at least `10%` (by default). The threshold can be changed via the `--threshold` command line argument; for example, to require a `20%` difference, pass `--threshold=0.2`. `--file1` and `--file2` are compulsory command line arguments; both are paths to `.txt` files containing the tabular results. `--file1` is treated as the baseline, i.e., when calculating the percentage change, the absolute change is divided by the timings from `--file1`. Sample usage is in [this gist](https://gist.github.com/czgdp1807/73d431e940c8073f0926a6bb03be3c78).
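The comparison rule itself is simple. The sketch below is not the code from `CompareAndParseMarkdownResults.py`; it is just a minimal Python illustration of the percentage-change calculation described above, with the `--file1` timing as the denominator:

```python
def percent_change(t_file1: float, t_file2: float) -> float:
    """Relative change of a timing, using the --file1 value as the baseline."""
    return abs(t_file2 - t_file1) / t_file1

def exceeds_threshold(t_file1: float, t_file2: float, threshold: float = 0.1) -> bool:
    """True if the two timings differ by at least `threshold` (10% by default)."""
    return percent_change(t_file1, t_file2) >= threshold

# Example: 1.00 s vs 1.25 s is a 25% change, so the result would be listed
# both with the default threshold and with --threshold=0.2.
assert exceeds_threshold(1.00, 1.25)                 # 25% >= 10%
assert exceeds_threshold(1.00, 1.25, threshold=0.2)  # 25% >= 20%
```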