{"id":29202572,"url":"https://github.com/ai-hypercomputer/accelerator-microbenchmarks","last_synced_at":"2025-07-02T13:32:42.052Z","repository":{"id":279334868,"uuid":"878117689","full_name":"AI-Hypercomputer/accelerator-microbenchmarks","owner":"AI-Hypercomputer","description":null,"archived":false,"fork":false,"pushed_at":"2025-05-31T01:37:52.000Z","size":109,"stargazers_count":1,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-05-31T12:28:21.866Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AI-Hypercomputer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"docs/contributing.md","funding":null,"license":"LICENSE","code_of_conduct":"docs/code-of-conduct.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-10-24T20:09:53.000Z","updated_at":"2025-05-31T01:37:56.000Z","dependencies_parsed_at":null,"dependency_job_id":"d6b3bcc8-bfa8-4356-9a56-44d47a5f6c5e","html_url":"https://github.com/AI-Hypercomputer/accelerator-microbenchmarks","commit_stats":null,"previous_names":["ai-hypercomputer/accelerator-microbenchmarks"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/AI-Hypercomputer/accelerator-microbenchmarks","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Faccelerator-microbenchmarks","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Faccelerator-microbenchmarks/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Fac
celerator-microbenchmarks/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Faccelerator-microbenchmarks/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AI-Hypercomputer","download_url":"https://codeload.github.com/AI-Hypercomputer/accelerator-microbenchmarks/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Faccelerator-microbenchmarks/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":263148124,"owners_count":23421116,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-07-02T13:30:29.934Z","updated_at":"2025-07-02T13:32:42.044Z","avatar_url":"https://github.com/AI-Hypercomputer.png","language":"Python","readme":"# Microbenchmarks\nMicrobenchmarks that assess the performance of individual operations and components on accelerators with JAX.\n\n## Setup\n\nSet up the Cloud TPU environment. 
For more information about how to set up a TPU environment, refer to one of the following references:\n\n* GCE: [Manage TPU resources | Google Cloud](https://cloud.google.com/kubernetes-engine/docs/how-to/tpus)\n* GKE: [Deploy TPU workloads in GKE Standard | Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/how-to/tpus)\n\n### Quick Start\n\nThe following command creates a TPU v4 VM:\n\n```bash\ngcloud compute tpus tpu-vm create $TPU_NAME \\\n        --zone=$ZONE \\\n        --accelerator-type=v4-8 \\\n        --version=tpu-ubuntu2204-base\n```\n\nYou can then SSH into the VM for subsequent testing:\n```bash\ngcloud compute ssh $TPU_NAME --zone=$ZONE\n```\n\n## Running the microbenchmarks on the VM\n\nOnce the VM environment is set up, clone the accelerator-microbenchmarks repository on the VM and install its dependencies:\n```bash\ngit clone https://github.com/AI-Hypercomputer/accelerator-microbenchmarks.git\ncd accelerator-microbenchmarks\npip install -r requirements.txt\n```\n\nYou can then run the benchmarks with a config file:\n\n```bash\npython src/run_benchmark.py --config=configs/sample_benchmark_matmul.yaml\n```\n\nCreate your own YAML config file to customize the benchmarks and parameters you want to run. Refer to the `src/benchmark_*.py` files for the available benchmarks and their tunable parameters, or to the sample YAML files in the configs/ directory.\n\n## Examine the outputs\n\nThe benchmarks will print metrics to the terminal. 
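\n\nFor a quick illustration, a config that enables both of the output options described below might include a fragment like the following sketch (the two paths are placeholders, not values from this repository; see the sample YAML files under the configs/ directory for complete, working configs):\n\n```yaml\n# Illustrative fragment only -- both paths are placeholders\ncsv_path: /tmp/benchmark_metrics.csv   # write formatted metrics to this CSV file\ntrace_dir: gs://my-bucket/xprof-traces # write xprof profiles to this location\n```\n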
To dump formatted metrics to a file, set this parameter in your YAML file:\n* `csv_path`: Dumps the benchmark metrics to a CSV file.\n\nTo generate an xprof profile, set this parameter in the YAML file:\n* `trace_dir`: Dumps the xprof profile to either a local path or a GCS bucket.\n\nExamples of both parameters can be found in the YAML files under the configs/ directory.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-hypercomputer%2Faccelerator-microbenchmarks","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fai-hypercomputer%2Faccelerator-microbenchmarks","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-hypercomputer%2Faccelerator-microbenchmarks/lists"}