https://github.com/ai-hypercomputer/tpu-recipes
- Host: GitHub
- URL: https://github.com/ai-hypercomputer/tpu-recipes
- Owner: AI-Hypercomputer
- License: apache-2.0
- Created: 2024-09-17T19:20:51.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-26T23:10:11.000Z (about 1 year ago)
- Last Synced: 2024-12-30T04:16:14.206Z (about 1 year ago)
- Language: Shell
- Size: 87.9 KB
- Stars: 5
- Watchers: 10
- Forks: 5
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING
- License: LICENSE
README
# Cloud TPU performance recipes
This repository provides instructions for reproducing specific workloads on
Google Cloud TPUs. The focus is on reliably achieving a performance metric
(e.g., throughput) that demonstrates the performance of the combined hardware
and software stack on TPUs.
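For context, training throughput is typically reported in tokens per second
and is often normalized to model FLOPs utilization (MFU). The following is a
minimal sketch of that arithmetic; every number in it is an illustrative
assumption, not a result from this repository:

```python
# Illustrative throughput/MFU arithmetic; all values are assumptions
# for the example, not measurements from this repository.
params = 7e9                # model parameter count (assumed)
batch, seq_len = 256, 2048  # global batch size and sequence length (assumed)
step_time = 4.0             # measured seconds per training step (assumed)

tokens_per_sec = batch * seq_len / step_time
# Common rule of thumb: ~6 FLOPs per parameter per token for training
# (forward plus backward pass).
achieved_flops = 6 * params * tokens_per_sec
# Peak compute of the slice, e.g., 64 chips at 275 TFLOP/s each (assumed).
peak_flops = 64 * 275e12
mfu = achieved_flops / peak_flops
print(f"{tokens_per_sec:,.0f} tokens/s, MFU = {mfu:.1%}")
```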
## Organization
- `./training`: instructions to reproduce the training performance of
popular LLMs, diffusion models, and other architectures with PyTorch and JAX.
- `./inference`: instructions to reproduce inference performance.
- `./microbenchmarks`: instructions for low-level TPU benchmarks such as
matrix multiplication performance and memory bandwidth; a minimal sketch of
such a benchmark follows this list.
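As an illustration of what a matrix-multiplication microbenchmark involves,
here is a minimal JAX sketch. The function name, shapes, and dtype are
assumptions for illustration, not code from `./microbenchmarks`:

```python
# Hypothetical matmul microbenchmark sketch; names, shapes, and dtype
# are illustrative and not taken from ./microbenchmarks.
import time
import jax
import jax.numpy as jnp

def bench_matmul(n=8192, dtype=jnp.bfloat16, iters=10):
    key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
    a = jax.random.normal(key_a, (n, n), dtype)
    b = jax.random.normal(key_b, (n, n), dtype)
    matmul = jax.jit(jnp.matmul)
    matmul(a, b).block_until_ready()  # compile and warm up once
    start = time.perf_counter()
    for _ in range(iters):
        out = matmul(a, b)
    out.block_until_ready()  # JAX dispatch is async; wait for the device
    secs = (time.perf_counter() - start) / iters
    tflops = 2 * n**3 / secs / 1e12  # a square matmul does ~2*n^3 FLOPs
    print(f"{secs * 1e3:.2f} ms/iter, {tflops:.1f} TFLOP/s")

bench_matmul()
```

Because JAX dispatches work asynchronously, the `block_until_ready()` calls
are what make the wall-clock timing meaningful.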
## Contributor notes
Note: This is not an officially supported Google product. This project is not
eligible for the [Google Open Source Software Vulnerability Rewards
Program](https://bughunters.google.com/open-source-security).