https://github.com/ai-hypercomputer/accelerator-microbenchmarks
- Host: GitHub
- URL: https://github.com/ai-hypercomputer/accelerator-microbenchmarks
- Owner: AI-Hypercomputer
- License: apache-2.0
- Created: 2024-10-24T20:09:53.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-05-31T01:37:52.000Z (9 months ago)
- Last Synced: 2025-05-31T12:28:21.866Z (9 months ago)
- Language: Python
- Size: 106 KB
- Stars: 1
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: docs/contributing.md
- License: LICENSE
- Code of conduct: docs/code-of-conduct.md
README
# Microbenchmarks
Microbenchmarks that assess the performance of individual operations and components on accelerators with JAX.
## Setup
Set up the Cloud TPU environment. For more information, refer to one of the following references:
* GCE: [Manage TPU resources | Google Cloud](https://cloud.google.com/tpu/docs/managing-tpus)
* GKE: [Deploy TPU workloads in GKE Standard | Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/how-to/tpus)
### Quick Start
The following command creates a v4-8 TPU VM, where `$TPU_NAME` and `$ZONE` are set to your chosen VM name and zone:
```bash
gcloud compute tpus tpu-vm create $TPU_NAME \
--zone=$ZONE \
--accelerator-type=v4-8 \
--version=tpu-ubuntu2204-base
```
You can then SSH into the VM for the subsequent steps:
```bash
gcloud compute ssh $TPU_NAME --zone=$ZONE
```
## Running the microbenchmarks on the VM
Now that the VM is set up, clone accelerator-microbenchmarks onto it and install the dependencies:
```bash
git clone https://github.com/ai-hypercomputer/accelerator-microbenchmarks.git
cd accelerator-microbenchmarks
pip install -r requirements.txt
```
You can run the benchmarks with a config file:
```bash
python src/run_benchmark.py --config=configs/sample_benchmark_matmul.yaml
```
Create your own YAML config file to customize which benchmarks run and with which parameters; a sketch is shown below. Refer to `src/benchmark_*.py` for the available benchmarks and their tunable parameters, or to the sample YAML files in the `configs/` directory.
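For illustration only, here is a minimal sketch of what such a config might look like. The `csv_path` key is documented below in "Examine the outputs"; every other field name here (`benchmarks`, `name`, `num_runs`, `matrix_dim`) is a hypothetical placeholder modeled on the sample configs, not the authoritative schema. Verify field names against `configs/sample_benchmark_matmul.yaml` before use.
```yaml
# my_benchmark.yaml: a hypothetical config sketch. The field names below are
# assumptions modeled on the samples in configs/, not the real schema.
benchmarks:
  - name: matmul        # hypothetical: selects a benchmark from src/benchmark_*.py
    num_runs: 10        # hypothetical tunable parameter
    matrix_dim: 4096    # hypothetical tunable parameter
csv_path: /tmp/matmul_metrics.csv  # documented parameter: write metrics to a CSV
```
You would then run it with `python src/run_benchmark.py --config=my_benchmark.yaml`.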
## Examine the outputs
The benchmarks print metrics to the terminal. To dump formatted metrics to a file, set this parameter in your YAML file:
* `csv_path`: Dumps the benchmark metrics to a CSV file.
Examples can be found in the YAML files under the `configs/` directory.
To generate an xprof profile, set this parameter in the YAML file:
* `trace_dir`: Dumps the xprof profile to either a local path or a GCS bucket.
Examples can be found in the YAML files under the `configs/` directory.
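As a sketch, the two documented output parameters might appear in a config like this. Only `csv_path` and `trace_dir` are named by this README; the values, including the `gs://my-bucket` bucket, are illustrative placeholders:
```yaml
# Output options: csv_path and trace_dir are the parameters documented above;
# both values here are illustrative placeholders.
csv_path: /tmp/benchmark_metrics.csv     # formatted metrics as a CSV file
trace_dir: gs://my-bucket/xprof-traces/  # xprof profile; a local path also works
```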