# Material Modules Benchmark

[**See the charts here**](https://shader-slang.com/slang-material-modules-benchmark/dev/bench)

## How it works
1. Commits to the master branch of [Slang](https://github.com/shader-slang/slang) will trigger a CI workflow that [runs the benchmark](https://github.com/shader-slang/slang/actions/workflows/push-benchmark-results.yml) and uploads the JSON file to **this** repository (specifically to `benchmarks.json`).
2. Changes to `benchmarks.json` will trigger a CI workflow in this repository.
3. This workflow updates the `gh-pages` branch using [`github-action-benchmark`](https://github.com/benchmark-action/github-action-benchmark). It reads the new results from `benchmarks.json` and updates a database in `gh-pages`.
4. Once the `gh-pages` branch receives the update, it triggers the final CI workflow that builds the GitHub Pages site.

The diagram below summarizes these steps.

![smmb graphs](media/smmb-graphs.png)
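The payload uploaded in step (1) must be in a format that `github-action-benchmark` understands. Below is a minimal sketch of such a payload in the tool's documented `customSmallerIsBetter` shape (a JSON array of `{name, unit, value}` objects); the benchmark names shown are hypothetical, modeled on the chart titles described later in this README.

```python
import json

# Hypothetical benchmark entries in github-action-benchmark's
# "customSmallerIsBetter" format: a JSON array of {name, unit, value}.
entries = [
    {"name": "closesthit : mono : dxil", "unit": "ms", "value": 1234.5},
    {"name": "closesthit : module : dxil", "unit": "ms", "value": 456.7},
]

# This string is what would be written to benchmarks.json.
payload = json.dumps(entries, indent=2)
print(payload)
```

Smaller values rank better in this format, which matches the semantics here: the values are compile times in milliseconds.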

*Notes:*
- In (1), the workflow from the Slang repository overwrites `benchmarks.json` when it pushes to this repository; this is expected behaviour.
- This repository contains another file, `current`, which holds the latest commit's message and hash for debugging purposes.
- Each time `benchmarks.json` is updated (2), the `github-action-benchmark` workflow reads its contents and appends them to a database embedded in a [JavaScript file](https://github.com/shader-slang/slang-material-modules-benchmark/blob/gh-pages/dev/bench/data.js).
- There is currently no mechanism to limit the number of entries in the database, so it can grow to a couple of megabytes. The data can be trimmed manually by editing the JavaScript file and removing entries.
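Trimming can also be scripted. The sketch below assumes `data.js` follows `github-action-benchmark`'s usual layout (a `window.BENCHMARK_DATA = {...}` assignment whose `entries` map holds per-suite lists of runs, newest last); the suite name and run fields in the example are placeholders.

```python
import json
import re

def trim_database(data_js: str, keep: int) -> str:
    """Keep only the newest `keep` runs per suite in a data.js file
    that follows github-action-benchmark's assumed layout."""
    # Strip the JS assignment wrapper to get at the embedded JSON.
    match = re.search(r"window\.BENCHMARK_DATA\s*=\s*(\{.*\})", data_js, re.S)
    data = json.loads(match.group(1))
    for suite, runs in data.get("entries", {}).items():
        data["entries"][suite] = runs[-keep:]  # newest runs are appended last
    return "window.BENCHMARK_DATA = " + json.dumps(data, indent=2)

# Example with a tiny fabricated database of three runs.
sample = "window.BENCHMARK_DATA = " + json.dumps(
    {"entries": {"Benchmark": [{"commit": c} for c in ("a", "b", "c")]}}
)
trimmed = trim_database(sample, keep=2)
```

Note that this rewrites the whole file, so it is worth keeping a backup of `data.js` before committing the trimmed version to `gh-pages`.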

## Running the benchmark locally

It is possible to run the benchmark locally to see immediate results or to customize them. This requires cloning both the [Slang repository](https://github.com/shader-slang/slang) and the [MDL-SDK fork](https://github.com/shader-slang/MDL-SDK). Then, starting from the root of the Slang repository:
1. Build the `Release` version of slangc; it should be located in `build/Release/bin/slangc`.
2. Change directories to `tool/benchmark` within the Slang repository.
3. Copy the Slang shaders:
   - from `/examples/mdl_sdk/dxr/content/slangified` (the `*.slang` files specifically)
   - to `tool/benchmark`
4. Run the `compile.py` script using Python (3.12+ recommended):
   - Requires a few lightweight packages, which can be installed with `pip`: `pip install prettytable argparse`.
   - Linux users may have to tweak the script so that the `slangc` variable ends with `slangc` instead of `slangc.exe`.
   - Script options:
     - `--target` to select the target mode:
       - `spirv` generates SPIR-V directly.
       - `spirv-glsl` generates SPIR-V via GLSL.
       - `dxil` generates DXIL through DXC.
       - `dxil-embedded` generates DXIL through DXC, but precompiles Slang modules to DXIL.
     - `--samples` to set the number of times to repeat and average the measurements.
     - `--output` to set the path of the JSON file where results are stored.
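The options above could plausibly be wired up with `argparse` (which the install step pulls in) along these lines; this is a hedged reconstruction of the CLI surface, not `compile.py`'s actual argument definitions, and the defaults are illustrative assumptions.

```python
import argparse

# Hypothetical reconstruction of compile.py's CLI, based on the options
# documented above; defaults are assumptions for illustration.
parser = argparse.ArgumentParser(description="Material modules benchmark")
parser.add_argument(
    "--target",
    choices=["spirv", "spirv-glsl", "dxil", "dxil-embedded"],
    default="dxil",
    help="code generation mode",
)
parser.add_argument("--samples", type=int, default=1,
                    help="number of runs to average measurements over")
parser.add_argument("--output", default="benchmarks.json",
                    help="path of the JSON file where results are stored")

args = parser.parse_args(["--target", "spirv", "--samples", "4"])
```

A typical local run would then look like `python compile.py --target dxil --samples 8 --output results.json`.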

The script outputs timings in a Markdown-friendly table, as shown below:

![script](media/script.png)

## How to read the charts
- Each chart is titled with the format `<stage> : <mode> : <target>`:
  - `<stage>` is one of `closesthit`, `anyhit`, and `shadow`.
  - `<mode>` is either `mono` for monolithic compilation or `module` for modular compilation.
  - `<target>` is currently fixed to DXIL.
    - Other targets can be generated by running the benchmarking script with a different target (DXIL or SPIR-V; with or without precompiled modules).
- The $x$-axis tracks the commit hash. Unfortunately there is currently no way to display concrete dates.
- The $y$-axis shows the time, in milliseconds, taken to compile the specific shader stage under the particular compilation mode and target.

## Interacting with the charts

If a commit results in alarming measurements, there is a convenient way to reach the originating commit/PR. Each data point of each graph can be highlighted like so:

![nodes](media/nodes.png)

Clicking the node redirects to the associated commit in **this** repository.

![nodes](media/one.png)

Clicking the highlighted link then redirects the user to the original commit/PR in the Slang repository.

![nodes](media/two.png)