{"id":18140692,"url":"https://github.com/AI-Hypercomputer/xpk","last_synced_at":"2025-03-31T00:31:03.456Z","repository":{"id":204051071,"uuid":"652846631","full_name":"AI-Hypercomputer/xpk","owner":"AI-Hypercomputer","description":"xpk (Accelerated Processing Kit, pronounced x-p-k,) is a software tool to help Cloud developers to orchestrate training jobs on accelerators such as TPUs and GPUs on GKE.","archived":false,"fork":false,"pushed_at":"2025-02-26T19:51:01.000Z","size":2207,"stargazers_count":105,"open_issues_count":37,"forks_count":31,"subscribers_count":22,"default_branch":"main","last_synced_at":"2025-02-26T20:37:11.010Z","etag":null,"topics":["gcloud","gke","tpu"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AI-Hypercomputer.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"docs/contributing.md","funding":null,"license":"LICENSE","code_of_conduct":"docs/code-of-conduct.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-06-12T23:14:49.000Z","updated_at":"2025-02-25T20:13:00.000Z","dependencies_parsed_at":"2023-12-12T20:33:41.651Z","dependency_job_id":"add00a00-8552-4bbe-98b0-96b5b2a582a8","html_url":"https://github.com/AI-Hypercomputer/xpk","commit_stats":null,"previous_names":["google/xpk","ai-hypercomputer/xpk"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Fxpk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Fxpk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Fxpk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-Hypercomputer%2Fxpk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AI-Hypercomputer","download_url":"https://codeload.github.com/AI-Hypercomputer/xpk/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246399816,"owners_count":20770907,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gcloud","gke","tpu"],"created_at":"2024-11-01T16:02:28.319Z","updated_at":"2025-03-31T00:31:03.434Z","avatar_url":"https://github.com/AI-Hypercomputer.png","language":"Python","readme":"\u003c!--\n Copyright 2023 Google LLC\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n      https://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either 
 See the License for the specific language governing permissions and
 limitations under the License.
 -->

[![Build Tests](https://github.com/google/xpk/actions/workflows/build_tests.yaml/badge.svg)](https://github.com/google/xpk/actions/workflows/build_tests.yaml)
[![Nightly Tests](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml/badge.svg)](https://github.com/google/xpk/actions/workflows/nightly_tests.yaml)
[![Develop Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml/badge.svg?branch=develop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/build_tests.yaml)
[![Develop Nightly Tests](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml/badge.svg?branch=develop)](https://github.com/AI-Hypercomputer/xpk/actions/workflows/nightly_tests.yaml)

# Overview

xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps
Cloud developers orchestrate training jobs on accelerators such as TPUs and
GPUs on GKE. xpk handles the "multihost pods" of TPUs, GPUs (HGX H100) and CPUs
(n2-standard-32) as first-class citizens.

xpk decouples provisioning capacity from running jobs. There are two structures:
clusters (provisioned VMs) and workloads (training jobs). Clusters represent the
physical resources you have available. Workloads represent training jobs -- at
any time some of these will be completed, others will be running and some will
be queued, waiting for cluster resources to become available.

The ideal workflow starts by provisioning the clusters for all of the ML
hardware you have reserved. Then, without re-provisioning, submit jobs as
needed. By eliminating the need for re-provisioning between jobs, using Docker
containers with pre-installed dependencies and ahead-of-time compilation,
these queued jobs run with minimal start times. Further, because workloads
return the hardware back to the shared pool when they complete, developers can
achieve better use of finite hardware resources.
And automated tests can run
overnight while resources tend to be underutilized.

xpk supports the following TPU types:
* v4
* v5e
* v5p
* Trillium (v6e)

and the following GPU types:
* A100
* A3-Highgpu (h100)
* A3-Mega (h100-mega) - [Create cluster](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines)
* A3-Ultra (h200) - [Create cluster](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines), [Create workloads](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines)

and the following CPU types:
* n2-standard-32

xpk also supports Google Cloud Storage solutions:
* [Cloud Storage FUSE](#fuse)
* [Filestore](#filestore)

# Permissions needed on Cloud Console:

* Artifact Registry Writer
* Compute Admin
* Kubernetes Engine Admin
* Logging Admin
* Monitoring Admin
* Service Account User
* Storage Admin
* Vertex AI Administrator
* Filestore Editor (This role is necessary if you want to run the `storage create` command with `--type=gcpfilestore`)

# Prerequisites

The following tools must be installed:

- python >= 3.10 (download from [here](https://www.python.org/downloads/))
- pip ([installation instruction](https://pip.pypa.io/en/stable/installation/))
- python venv ([installation instruction](https://virtualenv.pypa.io/en/latest/installation.html))
(all three of the above can be installed at once from [here](https://packaging.python.org/en/latest/guides/installing-using-linux-tools/#installing-pip-setuptools-wheel-with-linux-package-managers))
- gcloud (install from [here](https://cloud.google.com/sdk/gcloud#download_and_install_the))
  - Run `gcloud init`
  - [Authenticate](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) to Google Cloud
- kubectl (install from [here](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_kubectl))
  - Install `gke-gcloud-auth-plugin` from [here](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
- docker ([installation instruction](https://docs.docker.com/engine/install/))
  - Run `gcloud auth configure-docker` to ensure images can be uploaded to the registry
- make - run the command below:
```shell
# sudo may be required
apt-get -y install make
```
In addition, the dependencies below can be installed either using the provided links or, if xpk was downloaded via `git clone`, using the `make install` command:
- kueuectl (install from [here](https://kueue.sigs.k8s.io/docs/reference/kubectl-kueue/installation/))
- kjob (installation instructions [here](https://github.com/kubernetes-sigs/kjob/blob/main/docs/installation.md))

# Installation
To install xpk, install the required tools mentioned in [prerequisites](#prerequisites). The [Makefile](https://github.com/AI-Hypercomputer/xpk/blob/main/Makefile) provides a way to install all necessary tools.
XPK can be installed via pip:

```shell
pip install xpk
```

If you see an error saying `This environment is externally managed`, please use a virtual environment.

```shell
  ## One time step of creating the venv
  VENV_DIR=~/venvp3
  python3 -m venv $VENV_DIR
  ## Enter your venv.
  source $VENV_DIR/bin/activate
  ## Install xpk inside the venv.
  pip install xpk
```

If you are running XPK by cloning the GitHub repository, first run the
following commands to begin using XPK commands:

```shell
git clone https://github.com/google/xpk.git
cd xpk
# Install required dependencies with make
make install && export PATH=$PATH:$PWD/bin
```

If you want the installed dependencies to persist in your PATH, run
`echo $PWD/bin` and add its value to `PATH` in .bashrc or .zshrc.

If you see an error saying `This environment is externally managed`, please use a virtual environment.

Example:

```shell
  ## One time step of creating the venv
  VENV_DIR=~/venvp3
  python3 -m venv $VENV_DIR
  ## Enter your venv.
  source $VENV_DIR/bin/activate
  ## Clone the repository and install dependencies.
  git clone https://github.com/google/xpk.git
  cd xpk
  # Install required dependencies with make
  make install && export PATH=$PATH:$PWD/bin
```

# XPK for Large Scale (>1k VMs)

Follow the user instructions in [xpk-large-scale-guide.sh](xpk-large-scale-guide.sh)
to use xpk for a GKE cluster greater than 1000 VMs. Run these steps to set up a
GKE cluster with large-scale training and high-throughput support with XPK, and
run jobs with XPK. We recommend you manually copy commands per step and verify
the outputs of each step.

# Example usages:

To get started, be sure to set your GCP Project and Zone as usual via `gcloud
config set`.

Below are reference commands. A typical journey starts with a `Cluster Create`
followed by many `Workload Create`s. To understand the state of the system you
might want to use `Cluster List` or `Workload List` commands. Finally, you can
clean up with a `Cluster Delete`.

If you have failures with workloads not running, use `xpk inspector` to investigate
further.

If you need your Workloads to have persistent storage, see the `xpk storage` commands.

## Cluster Create

First set the project and zone through gcloud config or xpk arguments.

```shell
PROJECT_ID=my-project-id
ZONE=us-east5-b
# gcloud config:
gcloud config set project $PROJECT_ID
gcloud config set compute/zone $ZONE
# xpk arguments
xpk .. --zone $ZONE --project $PROJECT_ID
```
The cluster created is a regional cluster to enable the GKE control plane across
all zones.

*   Cluster Create (provision reserved capacity):

    ```shell
    # Find your reservations
    gcloud compute reservations list --project=$PROJECT_ID
    # Run cluster create with reservation.
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-256 \
    --num-slices=2 \
    --reservation=$RESERVATION_ID
    ```

*   Cluster Create (provision on-demand capacity):

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=4 --on-demand
    ```

*   Cluster Create (provision spot / preemptible capacity):

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=4 --spot
    ```

* Cluster Create for Pathways:
    A Pathways-compatible cluster can be created using `cluster create-pathways`.
    ```shell
    python3 xpk.py cluster create-pathways \
    --cluster xpk-pw-test \
    --num-slices=4 --on-demand \
    --tpu-type=v5litepod-16
    ```

*   Cluster Create for Ray:
    A cluster with KubeRay enabled and a RayCluster can be created using `cluster create-ray`.
    ```shell
    python3 xpk.py cluster create-ray \
    --cluster xpk-rc-test \
    --ray-version=2.39.0 \
    --num-slices=4 --on-demand \
    --tpu-type=v5litepod-8
    ```

*   Cluster Create can be called again with the same `--cluster` name to modify
    the number of slices or retry failed steps.

    For example, if a user creates a cluster with 4 slices:

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=4  --reservation=$RESERVATION_ID
    ```

    and then recreates the cluster with 8 slices, the command will rerun to create 4
    new slices:

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=8  --reservation=$RESERVATION_ID
    ```

    If the user then recreates the cluster with 6 slices, the command will rerun to delete 2
    slices. The command will warn the user when deleting slices.
    Use `--force` to skip prompts.

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=6  --reservation=$RESERVATION_ID

    # Skip delete prompts using --force.

    python3 xpk.py cluster create --force \
    --cluster xpk-test --tpu-type=v5litepod-16 \
    --num-slices=6  --reservation=$RESERVATION_ID
    ```

    If the user then recreates the cluster with 4 slices of v4-8, the command will rerun to delete
    6 slices of v5litepod-16 and create 4 slices of v4-8. The command will warn the
    user when deleting slices. Use `--force` to skip prompts.

    ```shell
    python3 xpk.py cluster create \
    --cluster xpk-test --tpu-type=v4-8 \
    --num-slices=4  --reservation=$RESERVATION_ID

    # Skip delete prompts using --force.

    python3 xpk.py cluster create --force \
    --cluster xpk-test --tpu-type=v4-8 \
    --num-slices=4  --reservation=$RESERVATION_ID
    ```

### Create Private Cluster

XPK allows you to create a private GKE cluster for enhanced security.
In a private cluster, nodes and pods are isolated from the public internet, providing an additional layer of protection for your workloads.

To create a private cluster, use the following arguments:

**`--private`**

This flag enables the creation of a private GKE cluster. When this flag is set:

*  Nodes and pods are isolated from direct internet access.
*  `master_authorized_networks` is automatically enabled.
*  Access to the cluster's control plane is restricted to your current machine's IP address by default.

**`--authorized-networks`**

This argument allows you to specify additional IP ranges (in CIDR notation) that are authorized to access the private cluster's control plane and perform `kubectl` commands.

*  Even if this argument is not set when you use `--private`, your current machine's IP address will always be given access to the control plane.
*  If this argument is used with an existing private cluster, it will replace the existing authorized networks.

**Example Usage:**

* To create a private cluster and allow access to the Control Plane only from your current machine:

  ```shell
  python3 xpk.py cluster create \
    --cluster=xpk-private-cluster \
    --tpu-type=v4-8 --num-slices=2 \
    --private
  ```

* To create a private cluster and allow access to the Control Plane only from your current machine and the IP ranges `1.2.3.0/24` and `1.2.4.5/32`:

  ```shell
  python3 xpk.py cluster create \
    --cluster=xpk-private-cluster \
    --tpu-type=v4-8 --num-slices=2 \
    --authorized-networks 1.2.3.0/24 1.2.4.5/32

    # --private is optional when you set --authorized-networks
  ```

> **Important Notes:**
> * The argument `--private` is only applicable when creating new clusters. You cannot convert an existing public cluster to a private cluster using these flags.
> * The argument `--authorized-networks` is applicable when creating new clusters or using an existing *private* cluster. You cannot convert an existing public cluster to a private cluster using these flags.
> * You need to [set up Cloud NAT for your VPC network](https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#creating_nat) so that the Nodes and Pods have outbound access to the internet. This is required because XPK installs and configures components such as kueue that need access to external sources like `registry.k8s.io`.


### Create Vertex AI Tensorboard
*Note: This feature is available in XPK >= 0.4.0. Enable the [Vertex AI API](https://cloud.google.com/vertex-ai/docs/start/cloud-environment#enable_vertexai_apis) in your Google Cloud console to use this feature. Make sure you have the
[Vertex AI Administrator](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.admin) role
assigned to your user account.*

Vertex AI Tensorboard is a fully managed version of open-source Tensorboard. To learn more about Vertex AI Tensorboard, visit [this page](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-introduction). Note that Vertex AI Tensorboard is only available in [these](https://cloud.google.com/vertex-ai/docs/general/locations#available-regions) regions.

You can create a Vertex AI Tensorboard for your cluster with the `Cluster Create` command.
XPK will create a single Vertex AI Tensorboard instance per cluster.

* Create Vertex AI Tensorboard in default region with default Tensorboard name:

```shell
python3 xpk.py cluster create \
--cluster xpk-test --num-slices=1 --tpu-type=v4-8 \
--create-vertex-tensorboard
```

will create a Vertex AI Tensorboard with the name `xpk-test-tb-instance` (*<args.cluster>-tb-instance*) in `us-central1` (*default region*).

* Create Vertex AI Tensorboard in user-specified region with default Tensorboard name:

```shell
python3 xpk.py cluster create \
--cluster xpk-test --num-slices=1 --tpu-type=v4-8 \
--create-vertex-tensorboard --tensorboard-region=us-west1
```

will create a Vertex AI Tensorboard with the name `xpk-test-tb-instance` (*<args.cluster>-tb-instance*) in `us-west1`.

* Create Vertex AI Tensorboard in default region with user-specified Tensorboard name:

```shell
python3 xpk.py cluster create \
--cluster xpk-test --num-slices=1 --tpu-type=v4-8 \
--create-vertex-tensorboard --tensorboard-name=tb-testing
```

will create a Vertex AI Tensorboard with the name `tb-testing` in `us-central1`.

* Create Vertex AI Tensorboard in user-specified region with user-specified Tensorboard name:

```shell
python3 xpk.py cluster create \
--cluster xpk-test --num-slices=1 --tpu-type=v4-8 \
--create-vertex-tensorboard --tensorboard-region=us-west1 --tensorboard-name=tb-testing
```

will create a Vertex AI Tensorboard instance with the name `tb-testing` in `us-west1`.

* Create Vertex AI Tensorboard in an unsupported region:

```shell
python3 xpk.py cluster create \
--cluster xpk-test --num-slices=1 --tpu-type=v4-8 \
--create-vertex-tensorboard --tensorboard-region=us-central2
```

will fail the cluster creation process because Vertex AI Tensorboard is not supported in `us-central2`.

## Cluster Delete
*   Cluster Delete (deprovision capacity):

    ```shell
    python3 xpk.py cluster delete \
    --cluster xpk-test
    ```
## Cluster List
*   Cluster List (see provisioned capacity):

    ```shell
    python3 xpk.py cluster list
    ```
## Cluster Describe
*   Cluster Describe (see capacity):

    ```shell
    python3 xpk.py cluster describe \
    --cluster xpk-test
    ```

## Cluster Cacheimage
*   Cluster Cacheimage (enables faster start times):

    ```shell
    python3 xpk.py cluster cacheimage \
    --cluster xpk-test --docker-image gcr.io/your_docker_image \
    --tpu-type=v5litepod-16
    ```

## Provisioning A3-Ultra and A3-Mega clusters (GPU machines)
To create a cluster with A3 machines, run the below command.
To create workloads on these clusters see [here](#workloads-for-a3-ultra-and-a3-mega-clusters-gpu-machines).
  * For A3-Ultra: --device-type=h200-141gb-8
  * For A3-Mega: --device-type=h100-mega-80gb-8

  ```shell
  python3 xpk.py cluster create \
  --cluster CLUSTER_NAME --device-type=h200-141gb-8 \
  --zone=$COMPUTE_ZONE  --project=$PROJECT_ID \
  --num-nodes=4 --reservation=$RESERVATION_ID
  ```
Currently, the below flags/arguments are supported for A3-Mega and A3-Ultra machines:
  * --num-nodes
  * --default-pool-cpu-machine-type
  * --default-pool-cpu-num-nodes
  * --reservation
  * --spot
  * --on-demand (only A3-Mega)


## Storage
Currently XPK supports two types of storage: Cloud Storage FUSE and Google Cloud Filestore.

### FUSE
A FUSE adapter lets you mount and access Cloud Storage buckets as local file systems, so applications can read and write objects in your bucket using standard file system semantics.

To use Cloud Storage FUSE with XPK you need to create a [Storage Bucket](https://console.cloud.google.com/storage/).

Once it is ready, you can use the `xpk storage attach` command with `--type=gcsfuse` to attach a FUSE storage instance to your cluster:

```shell
python3 xpk.py storage attach test-fuse-storage --type=gcsfuse \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE \
  --mount-point='/test-mount-point' --readonly=false \
  --bucket=test-bucket --size=1 --auto-mount=false
```

Parameters:

- `--type` - type of the storage; currently xpk supports `gcsfuse` and `gcpfilestore` only.
- `--auto-mount` - if set to true, all workloads will have this storage mounted by default.
- `--mount-point` - the path on which this storage should be mounted for a workload.
- `--readonly` - if set to true, the workload can only read from storage.
- `--size` - size of the storage in GB.
- `--bucket` - name of the storage bucket. If not set, the name of the storage is used as the bucket name.
- `--manifest` - path to the manifest file containing PersistentVolume and PersistentVolumeClaim definitions. If set, values from the manifest override the following parameters: `--size` and `--bucket`.
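For instance, if you already keep PersistentVolume and PersistentVolumeClaim definitions in a file, you can attach via `--manifest`; a minimal sketch, where `gcsfuse-pv-pvc.yaml` is a hypothetical file containing those definitions:

```shell
# Attach using an existing PV/PVC manifest; --size and --bucket
# are then taken from the manifest instead of the command line.
python3 xpk.py storage attach test-fuse-storage --type=gcsfuse \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE \
  --mount-point='/test-mount-point' --readonly=false \
  --auto-mount=false --manifest=gcsfuse-pv-pvc.yaml
```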
### Filestore

A Filestore adapter lets you mount and access [Filestore instances](https://cloud.google.com/filestore/) as local file systems, so applications can read and write objects in your volumes using standard file system semantics.

To create and attach a GCP Filestore instance to your cluster, use the `xpk storage create` command with `--type=gcpfilestore`:

```shell
python3 xpk.py storage create test-fs-storage --type=gcpfilestore \
  --auto-mount=false --mount-point=/data-fs --readonly=false \
  --size=1024 --tier=BASIC_HDD --access-mode=ReadWriteMany --vol=default \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE
```

You can also attach an existing Filestore instance to your cluster using the `xpk storage attach` command:

```shell
python3 xpk.py storage attach test-fs-storage --type=gcpfilestore \
  --auto-mount=false --mount-point=/data-fs --readonly=false \
  --size=1024 --tier=BASIC_HDD --access-mode=ReadWriteMany --vol=default \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE
```

The command above is also useful when attaching multiple volumes from the same Filestore instance.

The commands `xpk storage create` and `xpk storage attach` with `--type=gcpfilestore` accept the following arguments:
- `--type` - type of the storage.
- `--auto-mount` - if set to true, all workloads will have this storage mounted by default.
- `--mount-point` - the path on which this storage should be mounted for a workload.
- `--readonly` - if set to true, the workload can only read from storage.
- `--size` - size of the Filestore instance that will be created, in GB.
- `--tier` - tier of the Filestore instance that will be created. Possible options are: `[BASIC_HDD, BASIC_SSD, ZONAL, REGIONAL, ENTERPRISE]`
- `--access-mode` - access mode of the Filestore instance that will be created. Possible values are: `[ReadWriteOnce, ReadOnlyMany, ReadWriteMany]`
- `--vol` - file share name of the Filestore instance that will be created.
- `--instance` - the name of the Filestore instance. If not set, the name parameter is used as the instance name. Useful when connecting multiple volumes from the same Filestore instance.
- `--manifest` - path to the manifest file containing PersistentVolume, PersistentVolumeClaim and StorageClass definitions. If set, values from the manifest override the following parameters: `--access-mode`, `--size` and `--vol`.

### List attached storage

```shell
python3 xpk.py storage list \
  --project=$PROJECT --cluster $CLUSTER --zone=$ZONE
```

### Running workloads with storage

If you specified `--auto-mount=true` when creating or attaching a storage, then all workloads deployed on the cluster will have the volume attached by default. Otherwise, in order to have the storage attached, you have to add the `--storage` parameter to the `workload create` command:

```shell
python3 xpk.py workload create \
  --workload xpk-test-workload --command "echo goodbye" \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE \
  --tpu-type=v5litepod-16 --storage=test-storage
```

### Detaching storage

```shell
python3 xpk.py storage detach $STORAGE_NAME \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE
```

### Deleting storage

XPK allows you to remove Filestore instances easily with the `xpk storage delete` command.
**Warning:** this deletes all data contained in the Filestore!

```shell
python3 xpk.py storage delete test-fs-instance \
  --project=$PROJECT --cluster=$CLUSTER --zone=$ZONE
```

## Workload Create
*   Workload Create (submit training job):

    ```shell
    python3 xpk.py workload create \
    --workload xpk-test-workload --command "echo goodbye" \
    --cluster xpk-test \
    --tpu-type=v5litepod-16 --project=$PROJECT
    ```

*   Workload Create for Pathways:
    A Pathways workload can be submitted using `workload create-pathways` on a Pathways-enabled cluster (created with `cluster create-pathways`).

    Pathways workload example:
    ```shell
    python3 xpk.py workload create-pathways \
    --workload xpk-pw-test \
    --num-slices=1 \
    --tpu-type=v5litepod-16 \
    --cluster xpk-pw-test \
    --docker-name='user-workload' \
    --docker-image=<maxtext docker image> \
    --command='python3 MaxText/train.py MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1'
    ```

    A regular workload can also be submitted on a Pathways-enabled cluster (created with `cluster create-pathways`).

    Regular workload example:
    ```shell
    python3 xpk.py workload create-pathways \
    --workload xpk-regular-test \
    --num-slices=1 \
    --tpu-type=v5litepod-16 \
    --cluster xpk-pw-test \
    --docker-name='user-workload' \
    --docker-image=<maxtext docker image> \
    --command='python3 MaxText/train.py MaxText/configs/base.yml base_output_directory=<output directory> dataset_path=<dataset path> per_device_batch_size=1 enable_checkpointing=false enable_profiler=false remat_policy=full global_parameter_scale=4 steps=300 max_target_length=2048 use_iota_embed=true reuse_example_batch=1 dataset_type=synthetic attention=flash gcs_metrics=True run_name=$(USER)-pw-xpk-test-1'
    ```

    Pathways in headless mode - Pathways now offers the capability to run JAX workloads in Vertex AI notebooks or in GCE VMs!
    Specify `--headless` with `workload create-pathways` when the user workload is not provided in a docker container.
    ```shell
    python3 xpk.py workload create-pathways --headless \
    --workload xpk-pw-headless \
    --num-slices=1 \
    --tpu-type=v5litepod-16 \
    --cluster xpk-pw-test
    ```
    Executing the command above provides the address of the proxy that the user job should connect to.
    ```shell
    kubectl get pods
    kubectl port-forward pod/<proxy-pod-name> 29000:29000
    ```
    ```shell
    JAX_PLATFORMS=proxy JAX_BACKEND_TARGET=grpc://127.0.0.1:29000 python -c 'import pathwaysutils; import jax; print(jax.devices())'
    ```
    Specify `JAX_PLATFORMS=proxy` and `JAX_BACKEND_TARGET=<proxy address from above>` and `import pathwaysutils` to establish this connection between the user's JAX code and the Pathways proxy. Execute Pathways workloads interactively on Vertex AI notebooks!

### Set `max-restarts` for production jobs

* `--max-restarts <value>`: By default, this is 0. This will restart the job `<value>` times when the job terminates. For production jobs, it is recommended to increase this to a large number, say 50. Real jobs can be interrupted due to hardware failures and software updates. We assume your job has implemented checkpointing so the job restarts near where it was interrupted.
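  A minimal sketch of a production submission with restarts enabled (the workload name and `train.sh` script are illustrative):

  ```shell
  # Allow up to 50 automatic restarts after hardware failures or software updates.
  python3 xpk.py workload create \
  --workload xpk-prod-workload --command "bash train.sh" \
  --cluster xpk-test --tpu-type=v5litepod-16 \
  --max-restarts 50
  ```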
### Workloads for A3-Ultra and A3-Mega clusters (GPU machines)
To submit jobs on a cluster with A3 machines, run the below command. To create a cluster with A3 machines see [here](#provisioning-a3-ultra-and-a3-mega-clusters-gpu-machines).
  * For A3-Ultra: --device-type=h200-141gb-8
  * For A3-Mega: --device-type=h100-mega-80gb-8

  ```shell
  python3 xpk.py workload create \
  --workload=$WORKLOAD_NAME --command="echo goodbye" \
  --cluster=$CLUSTER_NAME --device-type=h200-141gb-8 \
  --zone=$COMPUTE_ZONE  --project=$PROJECT_ID \
  --num-nodes=$WORKLOAD_NUM_NODES
  ```
> The docker image flags/arguments introduced in the [workloads section](#workload-create) can be used with A3 machines as well.

To run an NCCL test on A3-Ultra machines, check out [this guide](/examples/nccl/nccl.md).

### Workload Priority and Preemption
* Set the priority level of your workload with `--priority=LEVEL`

  We have five priorities defined: [`very-low`, `low`, `medium`, `high`, `very-high`].
  The default priority is `medium`.

  Priority determines:

  1. Order of queued jobs.

      Queued jobs are ordered by
      `very-low` < `low` < `medium` < `high` < `very-high`

  2. Preemption of lower priority workloads.

      A higher priority job will `evict` lower priority jobs.
      Evicted jobs are brought back to the queue and will re-hydrate appropriately.

  #### General Example:
  ```shell
  python3 xpk.py workload create \
  --workload xpk-test-medium-workload --command "echo goodbye" --cluster \
  xpk-test --tpu-type=v5litepod-16 --priority=medium
  ```

### Create Vertex AI Experiment to upload data to Vertex AI Tensorboard
*Note: This feature is available in XPK >= 0.4.0. Enable the [Vertex AI API](https://cloud.google.com/vertex-ai/docs/start/cloud-environment#enable_vertexai_apis) in your Google Cloud console to use this feature. Make sure you have the
[Vertex AI Administrator](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.admin) role
assigned to your user account and to the [Compute Engine Service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) attached to the node pools in the cluster.*

Vertex AI Experiment is a tool that helps to track and analyze an experiment run on Vertex AI Tensorboard. To learn more about Vertex AI Experiments, visit [this page](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments).

XPK will create a Vertex AI Experiment during the `workload create` command and attach the Vertex AI Tensorboard created for the cluster during `cluster create`. If the cluster was created before this feature was released, it will have no Vertex AI Tensorboard and `workload create` will fail.
Re-run `cluster create` to create a Vertex AI Tensorboard and then run `workload create` again to schedule your workload.

* Create Vertex AI Experiment with the default Experiment name:

```shell
python3 xpk.py workload create \
--cluster xpk-test --workload xpk-workload \
--use-vertex-tensorboard
```

will create a Vertex AI Experiment with the name `xpk-test-xpk-workload` (*<args.cluster>-<args.workload>*).

* Create Vertex AI Experiment with a user-specified Experiment name:

```shell
python3 xpk.py workload create \
--cluster xpk-test --workload xpk-workload \
--use-vertex-tensorboard --experiment-name=test-experiment
```

will create a Vertex AI Experiment with the name `test-experiment`.

Check out the [MaxText example](https://github.com/google/maxtext/pull/570) on how to update your workload to automatically upload logs collected in your Tensorboard directory to the Vertex AI Experiment created by `workload create`.

## Workload Delete
*   Workload Delete (delete training job):

    ```shell
    python3 xpk.py workload delete \
    --workload xpk-test-workload --cluster xpk-test
    ```

    This will only delete the `xpk-test-workload` workload in the `xpk-test` cluster.

*   Workload Delete (delete all training jobs in the cluster):

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test
    ```

    This will delete all the workloads in the `xpk-test` cluster. Deletion will only begin if you type `y` or `yes` at the prompt. Multiple workload deletions are processed in batches for optimized processing.

*   Workload Delete supports filtering. Delete a portion of jobs that match user criteria. Multiple workload deletions are processed in batches for optimized processing.
    * Filter by Job: `filter-by-job`

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test --filter-by-job=$USER
    ```

    This will delete all the workloads in the `xpk-test` cluster whose names start with `$USER`. Deletion will only begin if you type `y` or `yes` at the prompt.

    * Filter by Status: `filter-by-status`

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test --filter-by-status=QUEUED
    ```

    This will delete all the workloads in the `xpk-test` cluster whose status is Admitted or Evicted and whose number of running VMs is 0. Deletion will only begin if you type `y` or `yes` at the prompt.
Status can be: `EVERYTHING`, `FINISHED`, `RUNNING`, `QUEUED`, `FAILED`, `SUCCESSFUL`.

## Workload List
*   Workload List (see training jobs):

    ```shell
    python3 xpk.py workload list \
    --cluster xpk-test
    ```

* Example Workload List Output:

  The below example shows five jobs of different statuses:

  * `user-first-job-failed`: **filter-status** is `FINISHED` and `FAILED`.
  * `user-second-job-success`: **filter-status** is `FINISHED` and `SUCCESSFUL`.
  * `user-third-job-running`: **filter-status** is `RUNNING`.
  * `user-fourth-job-in-queue`: **filter-status** is `QUEUED`.
  * `user-fifth-job-preempted`: **filter-status** is `QUEUED`.

  ```
  Jobset Name                     Created Time           Priority   TPU VMs Needed   TPU VMs Running/Ran   TPU VMs Done      Status     Status Message                                                  Status Time
  user-first-job-failed           2023-1-1T1:00:00Z      medium     4                4                     <none>            Finished   JobSet failed                                                   2023-1-1T1:05:00Z
  user-second-job-success         2023-1-1T1:10:00Z      medium     4                4                     4                 Finished   JobSet finished successfully                                    2023-1-1T1:14:00Z
  user-third-job-running          2023-1-1T1:15:00Z      medium     4                4                     <none>            Admitted   Admitted by ClusterQueue cluster-queue                          2023-1-1T1:16:00Z
  user-fourth-job-in-queue        2023-1-1T1:16:05Z      medium     4                <none>                <none>            Admitted   couldn't assign flavors to pod set slice-job: insufficient unused quota for google.com/tpu in flavor 2xv4-8, 4 more need   2023-1-1T1:16:10Z
  user-fifth-job-preempted        2023-1-1T1:10:05Z      low        4                <none>                <none>            Evicted    Preempted to accommodate a higher priority Workload             2023-1-1T1:10:00Z
  ```

* Workload List supports filtering. Observe a portion of jobs that match user criteria.

  * Filter by Status: `filter-by-status`

  Filter the workload list by the status of respective jobs.
  Status can be: `EVERYTHING`, `FINISHED`, `RUNNING`, `QUEUED`, `FAILED`, `SUCCESSFUL`

  * Filter by Job: `filter-by-job`

  Filter the workload list by the name of a job.

    ```shell
    python3 xpk.py workload list \
    --cluster xpk-test --filter-by-job=$USER
    ```

* Workload List supports waiting for the completion of a specific job. XPK will follow an existing job until it has finished or the `timeout`, if provided, has been reached, and then list the job. If no `timeout` is specified, the default value is set to the max value, 1 week. You may also set `timeout=0` to poll the job once.
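  For scripting, the exit code distinguishes how the wait ended (the return codes are documented after the examples below); a minimal sketch:

    ```shell
    # Wait for the workload, then branch on xpk's documented exit codes.
    python3 xpk.py workload list \
    --cluster xpk-test --wait-for-job-completion=xpk-test-workload \
    --timeout=300
    case $? in
      0)   echo "Workload completed successfully." ;;
      124) echo "Timed out waiting for the workload." ;;
      125) echo "Workload finished but did not complete successfully." ;;
      *)   echo "Other failure." ;;
    esac
    ```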
  Wait for a job to complete.

    ```shell
    python3 xpk.py workload list \
    --cluster xpk-test --wait-for-job-completion=xpk-test-workload
    ```

  Wait for a job to complete with a timeout of 300 seconds.

    ```shell
    python3 xpk.py workload list \
    --cluster xpk-test --wait-for-job-completion=xpk-test-workload \
    --timeout=300
    ```

  Return codes:
    * `0`: Workload finished and completed successfully.
    * `124`: Timeout was reached before workload finished.
    * `125`: Workload finished but did not complete successfully.
    * `1`: Other failure.

## Job List

*   Job List (see jobs submitted via batch command):

    ```shell
    python3 xpk.py job ls --cluster xpk-test
    ```

* Example Job List Output:

  ```
    NAME                              PROFILE               LOCAL QUEUE   COMPLETIONS   DURATION   AGE
    xpk-def-app-profile-slurm-74kbv   xpk-def-app-profile                 1/1           15s        17h
    xpk-def-app-profile-slurm-brcsg   xpk-def-app-profile                 1/1           9s         3h56m
    xpk-def-app-profile-slurm-kw99l   xpk-def-app-profile                 1/1           5s         3h54m
    xpk-def-app-profile-slurm-x99nx   xpk-def-app-profile                 3/3           29s        17h
  ```

## Job Cancel

*   Job Cancel (delete job submitted via batch command):

    ```shell
    python3 xpk.py job cancel xpk-def-app-profile-slurm-74kbv --cluster xpk-test
    ```

## Inspector
* Inspector provides debug info to understand cluster health and why workloads are not running.
Inspector output is saved to a file.

    ```shell
    python3 xpk.py inspector \
      --cluster $CLUSTER_NAME \
      --project $PROJECT_ID \
      --zone $ZONE
    ```

* Optional Arguments
  * `--print-to-terminal`:
    Print command output to terminal as well as a file.
  * `--workload $WORKLOAD_NAME`:
    Inspector will write debug info related to the workload `$WORKLOAD_NAME`.

* Example Output:

  The output of xpk inspector is in `/tmp/tmp0pd6_k1o` in this example.
  ```shell
  [XPK] Starting xpk
  [XPK] Task: `Set Cluster` succeeded.
  [XPK] Task: `Local Setup: gcloud version` is implemented by `gcloud version`, hiding output unless there is an error.
  [XPK] Task: `Local Setup: Project / Zone / Region` is implemented by `gcloud config get project; gcloud config get compute/zone; gcloud config get compute/region`, hiding output unless there is an error.
  [XPK] Task: `GKE: Cluster Details` is implemented by `gcloud beta container clusters list --project $PROJECT --region $REGION | grep -e NAME -e $CLUSTER_NAME`, hiding output unless there is an error.
  [XPK] Task: `GKE: Node pool Details` is implemented by `gcloud beta container node-pools list --cluster $CLUSTER_NAME  --project=$PROJECT --region=$REGION`, hiding output unless there is an error.
  [XPK] Task: `Kubectl: All Nodes` is implemented by `kubectl get node -o custom-columns='NODE_NAME:metadata.name, READY_STATUS:.status.conditions[?(@.type=="Ready")].status, NODEPOOL:metadata.labels.cloud\.google\.com/gke-nodepool'`, hiding output unless there is an error.
  [XPK] Task: `Kubectl: Number of Nodes per Node Pool` is implemented by `kubectl get node -o custom-columns=':metadata.labels.cloud\.google\.com/gke-nodepool' | sort | uniq -c`, hiding output unless there is an error.
  [XPK] Task: `Kubectl: Healthy Node Count Per Node Pool` is implemented by `kubectl get node -o custom-columns='NODE_NAME:metadata.name, READY_STATUS:.status.conditions[?(@.type=="Ready")].status, NODEPOOL:metadata.labels.cloud\.google\.com/gke-nodepool' | grep -w True | awk {'print $3'} | sort | uniq -c`, hiding output unless there is an error.
  [XPK] Task: `Kueue: ClusterQueue Details` is implemented by `kubectl describe ClusterQueue cluster-queue`, hiding output unless there is an error.
  [XPK] Task: `Kueue: LocalQueue Details` is implemented by `kubectl describe LocalQueue multislice-queue`, hiding output unless there is an error.
  [XPK] Task: `Kueue: Kueue Deployment Details` is implemented by `kubectl describe Deployment kueue-controller-manager -n kueue-system`, hiding output unless there is an error.
  [XPK] Task: `Jobset: Deployment Details` is implemented by `kubectl describe Deployment jobset-controller-manager -n jobset-system`, hiding output unless there is an error.
  [XPK] Task: `Kueue Manager Logs` is implemented by `kubectl logs deployment/kueue-controller-manager -n kueue-system --tail=100 --prefix=True`, hiding output unless there is an error.
  [XPK] Task: `Jobset Manager Logs` is implemented by `kubectl logs deployment/jobset-controller-manager -n jobset-system --tail=100 --prefix=True`, hiding output unless there is an error.
  [XPK] Task: `List Jobs with filter-by-status=EVERYTHING with filter-by-jobs=None` is implemented by `kubectl get workloads -o=custom-columns="Jobset Name:.metadata.ownerReferences[0].name,Created Time:.metadata.creationTimestamp,Priority:.spec.priorityClassName,TPU VMs Needed:.spec.podSets[0].count,TPU VMs Running/Ran:.status.admission.podSetAssignments[-1].count,TPU VMs Done:.status.reclaimablePods[0].count,Status:.status.conditions[-1].type,Status Message:.status.conditions[-1].message,Status Time:.status.conditions[-1].lastTransitionTime"  `, hiding output unless there is an error.
  [XPK] Task: `List Jobs with filter-by-status=QUEUED with filter-by-jobs=None` is implemented by `kubectl get workloads -o=custom-columns="Jobset Name:.metadata.ownerReferences[0].name,Created Time:.metadata.creationTimestamp,Priority:.spec.priorityClassName,TPU VMs Needed:.spec.podSets[0].count,TPU VMs Running/Ran:.status.admission.podSetAssignments[-1].count,TPU VMs Done:.status.reclaimablePods[0].count,Status:.status.conditions[-1].type,Status Message:.status.conditions[-1].message,Status Time:.status.conditions[-1].lastTransitionTime"  | awk -e 'NR == 1 || ($7 ~ "Admitted|Evicted|QuotaReserved" && ($5 ~ "<none>" || $5 == 0)) {print $0}' `, hiding output unless there is an error.
  [XPK] Task: `List Jobs with filter-by-status=RUNNING with filter-by-jobs=None` is implemented by `kubectl get workloads -o=custom-columns="Jobset Name:.metadata.ownerReferences[0].name,Created Time:.metadata.creationTimestamp,Priority:.spec.priorityClassName,TPU VMs Needed:.spec.podSets[0].count,TPU VMs Running/Ran:.status.admission.podSetAssignments[-1].count,TPU VMs Done:.status.reclaimablePods[0].count,Status:.status.conditions[-1].type,Status Message:.status.conditions[-1].message,Status Time:.status.conditions[-1].lastTransitionTime"  | awk -e 'NR == 1 || ($7 ~ "Admitted|Evicted" && $5 ~ /^[0-9]+$/ && $5 > 0) {print $0}' `, hiding output unless there is an error.
  [XPK] Find xpk inspector output file: /tmp/tmp0pd6_k1o
  [XPK] Exiting XPK cleanly
  ```

## Run
* `xpk run` lets you execute scripts on a cluster with ease.
It automates task execution, handles interruptions, and streams job output to your console.

  ```shell
  python xpk.py run --kind-cluster -n 2 -t 0-2 examples/job.sh
  ```

* Example Output:

  ```shell
  [XPK] Starting xpk
  [XPK] Task: `get current-context` is implemented by `kubectl config current-context`, hiding output unless there is an error.
  [XPK] No local cluster name specified. Using current-context `kind-kind`
  [XPK] Task: `run task` is implemented by `kubectl kjob create slurm --profile xpk-def-app-profile --localqueue multislice-queue --wait --rm -- examples/job.sh --partition multislice-queue --ntasks 2 --time 0-2`. Streaming output and input live.
  job.batch/xpk-def-app-profile-slurm-g4vr6 created
  configmap/xpk-def-app-profile-slurm-g4vr6 created
  service/xpk-def-app-profile-slurm-g4vr6 created
  Starting log streaming for pod xpk-def-app-profile-slurm-g4vr6-1-4rmgk...
  Now processing task ID: 3
  Starting log streaming for pod xpk-def-app-profile-slurm-g4vr6-0-bg6dm...
  Now processing task ID: 1
  exit
  exit
  Now processing task ID: 2
  exit
  Job logs streaming finished.[XPK] Task: `run task` terminated with code `0`
  [XPK] XPK Done.
  ```

## GPU usage

To use XPK for GPUs, use the `--device-type` flag.

*   Cluster Create (provision reserved capacity):

    ```shell
    # Find your reservations
    gcloud compute reservations list --project=$PROJECT_ID

    # Run cluster create with reservation.
    python3 xpk.py cluster create \
    --cluster xpk-test --device-type=h100-80gb-8 \
    --num-nodes=2 \
    --reservation=$RESERVATION_ID
    ```

*   Cluster Delete (deprovision capacity):

    ```shell
    python3 xpk.py cluster delete \
    --cluster xpk-test
    ```

*   Cluster List (see provisioned capacity):

    ```shell
    python3 xpk.py cluster list
    ```

*   Cluster Describe (see capacity):

    ```shell
    python3 xpk.py cluster describe \
    --cluster xpk-test
    ```


*   Cluster Cacheimage (enables faster start times):

    ```shell
    python3 xpk.py cluster cacheimage \
    --cluster xpk-test --docker-image gcr.io/your_docker_image \
    --device-type=h100-80gb-8
    ```


*   [Install NVIDIA GPU device drivers](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus#install)
    ```shell
    # List available driver versions
    gcloud compute ssh $NODE_NAME --command "sudo cos-extensions list"

    # Install the default driver
    gcloud compute ssh $NODE_NAME --command "sudo cos-extensions install gpu"
    # OR install a specific version of the driver
    gcloud compute ssh $NODE_NAME --command "sudo cos-extensions install gpu -- -version=DRIVER_VERSION"
    ```

*   Run a workload:

    ```shell
    # Submit a workload
    python3 xpk.py workload create \
    --cluster xpk-test --device-type h100-80gb-8 \
    --workload xpk-test-workload \
    --command="echo hello world"
    ```

*   Workload Delete (delete training job):

    ```shell
    python3 xpk.py workload delete \
    --workload xpk-test-workload --cluster xpk-test
    ```

    This will only delete the `xpk-test-workload` workload in the `xpk-test` cluster.

*   Workload Delete (delete all training jobs in the cluster):

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test
    ```

    This will delete all the workloads in the `xpk-test` cluster.
Deletion will only begin if you type `y` or `yes` at the prompt.

*   Workload Delete supports filtering. Delete a portion of jobs that match user criteria.
    * Filter by Job: `filter-by-job`

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test --filter-by-job=$USER
    ```

    This will delete all the workloads in the `xpk-test` cluster whose names start with `$USER`. Deletion will only begin if you type `y` or `yes` at the prompt.

    * Filter by Status: `filter-by-status`

    ```shell
    python3 xpk.py workload delete \
    --cluster xpk-test --filter-by-status=QUEUED
    ```

    This will delete all the workloads in the `xpk-test` cluster whose status is Admitted or Evicted and whose number of running VMs is 0. Deletion will only begin if you type `y` or `yes` at the prompt. Status can be: `EVERYTHING`, `FINISHED`, `RUNNING`, `QUEUED`, `FAILED`, `SUCCESSFUL`.

## CPU usage

To use XPK for CPUs, use the `--device-type` flag.

*   Cluster Create (provision on-demand capacity):

    ```shell
    # Run cluster create with on demand capacity.
    python3 xpk.py cluster create \
    --cluster xpk-test \
    --device-type=n2-standard-32-256 \
    --num-slices=1 \
    --default-pool-cpu-machine-type=n2-standard-32 \
    --on-demand
    ```
    Note that `device-type` for CPUs is of the format <cpu-machine-type>-<number of VMs>; thus in the above example, the user requests 256 VMs of type n2-standard-32.
    Currently workloads using < 1000 VMs are supported.

*   Run a workload:

    ```shell
    # Submit a workload
    python3 xpk.py workload create \
    --cluster xpk-test \
    --num-slices=1 \
    --device-type=n2-standard-32-256 \
    --workload xpk-test-workload \
    --command="echo hello world"
    ```

# Autoprovisioning with XPK
XPK can dynamically allocate cluster capacity using [Node Auto Provisioning (NAP)](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#use_accelerators_for_new_auto-provisioned_node_pools) support.

This allows several topology sizes to be supported from one XPK cluster, provisioned dynamically based on incoming workload requests. XPK users will not need to re-provision the cluster manually.

Enabling autoprovisioning will initially take up to **30 minutes** to upgrade the cluster.

## Create a cluster with autoprovisioning:

Autoprovisioning will be enabled on the below cluster, scaling between [0, 8] chips of v4 TPU (up to 1x v4-16).

XPK doesn't currently support different generations of accelerators in the same cluster (like v4 and v5p TPUs).

```shell
CLUSTER_NAME=my_cluster
NUM_SLICES=2
DEVICE_TYPE=v4-8
RESERVATION=reservation_id
PROJECT=my_project
ZONE=us-east5-b

python3 xpk.py cluster create \
  --cluster $CLUSTER_NAME \
  --num-slices=$NUM_SLICES \
  --device-type=$DEVICE_TYPE \
  --zone=$ZONE \
  --project=$PROJECT \
  --reservation=$RESERVATION \
  --enable-autoprovisioning
```

1. Define the starting accelerator configuration and capacity type.

    ```shell
    --device-type=$DEVICE_TYPE \
    --num-slices=$NUM_SLICES
    ```
2. Optionally set custom `minimum` / `maximum` chips. NAP will rescale the cluster between `minimum` and `maximum` chips. By default, `maximum` is set to the current cluster configuration size, and `minimum` is set to 0.
This allows NAP to rescale with all the resources.

    ```shell
    --autoprovisioning-min-chips=$MIN_CHIPS \
    --autoprovisioning-max-chips=$MAX_CHIPS
    ```

3. `FEATURE TO COME SOON:` Set the timeout period for when node pools will automatically be deleted if no incoming workloads are run. This is currently 10 minutes.

4. `FEATURE TO COME:` Set the timeout period to infinity. This will keep the idle node pool configuration always running until updated by new workloads.

### Update a cluster with autoprovisioning:
```shell
CLUSTER_NAME=my_cluster
NUM_SLICES=2
DEVICE_TYPE=v4-8
RESERVATION=reservation_id
PROJECT=my_project
ZONE=us-east5-b

python3 xpk.py cluster create \
  --cluster $CLUSTER_NAME \
  --num-slices=$NUM_SLICES \
  --device-type=$DEVICE_TYPE \
  --zone=$ZONE \
  --project=$PROJECT \
  --reservation=$RESERVATION \
  --enable-autoprovisioning
```

### Update a previously autoprovisioned cluster with a different number of chips:

* Option 1: By creating a new cluster nodepool configuration.

```shell
CLUSTER_NAME=my_cluster
NUM_SLICES=2
DEVICE_TYPE=v4-16
RESERVATION=reservation_id
PROJECT=my_project
ZONE=us-east5-b

# This will create 2x v4-16 node pools and set the max autoprovisioned chips to 16.
python3 xpk.py cluster create \
  --cluster $CLUSTER_NAME \
  --num-slices=$NUM_SLICES \
  --device-type=$DEVICE_TYPE \
  --zone=$ZONE \
  --project=$PROJECT \
  --reservation=$RESERVATION \
  --enable-autoprovisioning
```

* Option 2: By increasing the `--autoprovisioning-max-chips`.
```shell
CLUSTER_NAME=my_cluster
NUM_SLICES=0
DEVICE_TYPE=v4-16
RESERVATION=reservation_id
PROJECT=my_project
ZONE=us-east5-b

# This will clear the node pools if they exist in the cluster and set the max autoprovisioned chips to 16.
python3 xpk.py cluster create \
  --cluster $CLUSTER_NAME \
  --num-slices=$NUM_SLICES \
  --device-type=$DEVICE_TYPE \
  --zone=$ZONE \
  --project=$PROJECT \
  --reservation=$RESERVATION \
  --enable-autoprovisioning \
  --autoprovisioning-max-chips 16
```

## Run workloads on the cluster with autoprovisioning:
Reconfigure the `--device-type` and `--num-slices`:
  ```shell
  CLUSTER_NAME=my_cluster
  NUM_SLICES=2
  DEVICE_TYPE=v4-8
  NEW_RESERVATION=new_reservation_id
  PROJECT=my_project
  ZONE=us-east5-b
  # Create a 2x v4-8 TPU workload.
  python3 xpk.py workload create \
    --cluster $CLUSTER_NAME \
    --workload ${USER}-nap-${NUM_SLICES}x${DEVICE_TYPE}_$(date +%H-%M-%S) \
    --command "echo hello world from $NUM_SLICES $DEVICE_TYPE" \
    --device-type=$DEVICE_TYPE \
    --num-slices=$NUM_SLICES \
    --zone=$ZONE \
    --project=$PROJECT

  NUM_SLICES=1
  DEVICE_TYPE=v4-16

  # Create a 1x v4-16 TPU workload.
  python3 xpk.py workload create \
    --cluster $CLUSTER_NAME \
    --workload ${USER}-nap-${NUM_SLICES}x${DEVICE_TYPE}_$(date +%H-%M-%S) \
    --command "echo hello world from $NUM_SLICES $DEVICE_TYPE" \
    --device-type=$DEVICE_TYPE \
    --num-slices=$NUM_SLICES \
    --zone=$ZONE \
    --project=$PROJECT

  # Use a different reservation from what the cluster was created with.
  python3 xpk.py workload create \
    --cluster $CLUSTER_NAME \
    --workload ${USER}-nap-${NUM_SLICES}x${DEVICE_TYPE}_$(date +%H-%M-%S) \
    --command "echo hello world from $NUM_SLICES $DEVICE_TYPE" \
    --device-type=$DEVICE_TYPE \
    --num-slices=$NUM_SLICES \
    --zone=$ZONE \
    --project=$PROJECT \
    --reservation=$NEW_RESERVATION
  ```

1. (Optional) Define the capacity type. By default, the capacity type will
match what the cluster was created with.

    ```shell
    --reservation=my-reservation-id | --on-demand | --spot
    ```

2. Set the topology of your workload using --device-type.

    ```shell
    NUM_SLICES=1
    DEVICE_TYPE=v4-8
    --device-type=$DEVICE_TYPE \
    --num-slices=$NUM_SLICES \
    ```


# How to add docker images to an xpk workload

By default, `xpk workload create` layers the local directory (`--script-dir`) into
the base docker image (`--base-docker-image`) and runs the workload command.
If you don't want this layering behavior, you can directly use `--docker-image`. Do not mix arguments from the two flows in the same command.

## Recommended / Default Docker Flow: `--base-docker-image` and `--script-dir`
This flow pulls the `--script-dir` into the `--base-docker-image` and runs the new docker image.

* The below arguments are optional by default. xpk will pull the local
  directory with a generic base docker image.

  - `--base-docker-image` sets the base image that xpk will start with.

  - `--script-dir` sets which directory to pull into the image. This defaults to the current working directory.

  See `python3 xpk.py workload create --help` for more info.

* Example with defaults which pulls the local directory into the base image:
  ```shell
  echo -e '#!/bin/bash \n echo "Hello world from a test script!"' > test.sh
  python3 xpk.py workload create --cluster xpk-test \
  --workload xpk-test-workload-base-image --command "bash test.sh" \
  --tpu-type=v5litepod-16 --num-slices=1
  ```

* Recommended Flow For Normal Sized Jobs (fewer than 10k accelerators):
  ```shell
  python3 xpk.py workload create --cluster xpk-test \
  --workload xpk-test-workload-base-image --command "bash custom_script.sh" \
  --base-docker-image=gcr.io/your_dependencies_docker_image \
  --tpu-type=v5litepod-16 --num-slices=1
  ```

## Optional Direct Docker Image Configuration: `--docker-image`
If a user wants to directly set the docker image used and not layer in the
current working directory, set `--docker-image` to the image to be used in the
workload.

* Running with `--docker-image`:
  ```shell
  python3 xpk.py workload create --cluster xpk-test \
  --workload xpk-test-workload-base-image --command "bash test.sh" \
  --tpu-type=v5litepod-16 --num-slices=1 --docker-image=gcr.io/your_docker_image
  ```

* Recommended Flow For Large Sized Jobs (more than 10k accelerators):
  ```shell
  python3 xpk.py cluster cacheimage \
  --cluster xpk-test --docker-image gcr.io/your_docker_image
  # Run workload create with the same image.
  python3 xpk.py workload create --cluster xpk-test \
  --workload xpk-test-workload-base-image --command "bash test.sh" \
  --tpu-type=v5litepod-16 --num-slices=1 --docker-image=gcr.io/your_docker_image
  ```

# More advanced facts:

* Workload create has two mutually exclusive ways to override the environment of a workload:
  *  a `--env` flag to specify each environment variable separately. The format is:

     `--env VARIABLE1=value --env VARIABLE2=value`

  *  a `--env-file` flag to allow specifying the container's
environment from a file.
# Integration Test Workflows\nThe repository code is tested through GitHub Workflows and Actions. Currently three kinds of tests are performed:\n* A nightly build that runs every 24 hours\n* A build that runs on push to the `main` branch\n* A build that runs for every PR approval\n\nMore information is documented [here](https://github.com/google/xpk/tree/main/.github/workflows).\n\n# Troubleshooting\n\n## `Invalid machine type` for CPUs.\nxpk creates a regional GKE cluster. If you see issues like\n\n```shell\nInvalid machine type e2-standard-32 in zone $ZONE_NAME\n```\n\nselect a CPU type that exists in all zones of the region.\n\n```shell\n# Find CPU types supported in zones.\ngcloud compute machine-types list --zones=$ZONE_LIST\n# Adjust the default CPU machine type.\npython3 xpk.py cluster create --default-pool-cpu-machine-type=CPU_TYPE ...\n```\n\n## Workload creation fails\n\nIf workload creation fails with the error below, some xpk cluster configuration might be missing.\n\n`[XPK] b'error: the server doesn\\'t have a resource type \"workloads\"\\n'`\n\nMitigate this error by re-running your `xpk.py cluster create ...` command to refresh the cluster configuration.\n\n## Permission Issues: `requires one of [\"permission_name\"] permission(s)`.\n\n1) Determine the role needed based on the permission error, using the role mapping below. For example, `requires one of [\"container.*\"] permission(s)` means you need [Kubernetes Engine Admin](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles) added to your user.\n\n2) Add the role to the user in your project.\n\n    Go to [iam-admin](https://console.cloud.google.com/iam-admin/) or use the gcloud CLI:\n    ```shell\n    PROJECT_ID=my-project-id\n    CURRENT_GKE_USER=$(gcloud config get account)\n    ROLE=roles/container.admin  # container.admin is the role needed for Kubernetes Engine Admin\n    gcloud projects add-iam-policy-binding $PROJECT_ID --member user:$CURRENT_GKE_USER --role=$ROLE\n    ```\n\n3) Check that the permissions are correct for the user.\n\n    Go to [iam-admin](https://console.cloud.google.com/iam-admin/) or use the gcloud CLI:\n\n    ```shell\n    PROJECT_ID=my-project-id\n    CURRENT_GKE_USER=$(gcloud config get account)\n    gcloud projects get-iam-policy $PROJECT_ID --filter=\"bindings.members:$CURRENT_GKE_USER\" --flatten=\"bindings[].members\"\n    ```\n\n4) Confirm you have logged in locally with the correct user.\n\n    ```shell\n    gcloud auth login\n    ```\n\n### Roles needed based on permission errors:\n\n* `requires one of [\"container.*\"] permission(s)`\n\n  Add [Kubernetes Engine Admin](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles) to your user.\n\n* `ERROR: (gcloud.monitoring.dashboards.list) User does not have permission to access projects instance (or it may not exist)`\n\n  Add [Monitoring Viewer](https://cloud.google.com/iam/docs/understanding-roles#monitoring.viewer) to your user.\n\n\n## Reservation Troubleshooting:\n\n### How to determine your reservation and its size / utilization:\n\n```shell\nPROJECT_ID=my-project\nZONE=us-east5-b\nRESERVATION=my-reservation-name\n# Find the reservations in your project.\ngcloud beta compute reservations list --project=$PROJECT_ID\n# Find the TPU machine type and current utilization of a reservation.\ngcloud beta compute reservations describe $RESERVATION --project=$PROJECT_ID --zone=$ZONE\n```\n\n## 403 error on workload create when using `--base-docker-image` flag\nYou need permission to push to the registry from your local machine. Try running `gcloud auth configure-docker`.\n\n## `Kubernetes API exception` - 404 error\nIf an error of this kind appears after updating your xpk version, you may need to rerun the `cluster create` command to update resource definitions.\n\n# TPU Workload Debugging\n\n## Verbose Logging\nIf you are having trouble with your workload, try setting the `--enable-debug-logs` flag when you schedule it. This gives you more detailed logs to help pinpoint the issue. For example:\n```shell\npython3 xpk.py workload create \\\n--cluster xpk-test --workload xpk-test-workload \\\n--command=\"echo hello world\" --enable-debug-logs\n```\nPlease check [libtpu logging](https://cloud.google.com/tpu/docs/troubleshooting/trouble-tf#debug_logs) and [TensorFlow logging](https://deepreg.readthedocs.io/en/latest/docs/logging.html#tensorflow-logging) for more information about the flags this enables.\n\n## Collect Stack Traces\nThe [cloud-tpu-diagnostics](https://pypi.org/project/cloud-tpu-diagnostics/) PyPI package can be used to generate stack traces for workloads running in GKE. This package dumps the Python traces when a fault such as a segmentation fault, floating-point exception, or illegal operation exception occurs in the program. Additionally, it periodically collects stack traces to help you debug situations when the program is unresponsive. You must make the following changes in the docker image running in the Kubernetes main container to enable periodic stack trace collection.\n```python\n# main.py\n\nfrom cloud_tpu_diagnostics import diagnostic\nfrom cloud_tpu_diagnostics.configuration import debug_configuration\nfrom cloud_tpu_diagnostics.configuration import diagnostic_configuration\nfrom cloud_tpu_diagnostics.configuration import stack_trace_configuration\n\n# Collect stack traces and upload them to Cloud Logging.\nstack_trace_config = stack_trace_configuration.StackTraceConfig(\n    collect_stack_trace=True,\n    stack_trace_to_cloud=True)\ndebug_config = debug_configuration.DebugConfig(\n    stack_trace_config=stack_trace_config)\ndiagnostic_config = diagnostic_configuration.DiagnosticConfig(\n    debug_config=debug_config)\n\nwith diagnostic.diagnose(diagnostic_config):\n    main_method()  # this is the main method to run\n```\nThis configuration will start collecting stack traces inside the `/tmp/debugging` directory on each Kubernetes Pod.\n\n
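This assumes `cloud_tpu_diagnostics` is importable inside your container image; a minimal way to satisfy that in a pip-based image (a sketch) is:\n\n```shell\n# Add to your Dockerfile or image build step.\npip install cloud-tpu-diagnostics\n```\n\n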
### Explore Stack Traces\nTo explore the stack traces collected in a temporary directory in the Kubernetes Pod, run the following command to configure a sidecar container that reads the traces from the `/tmp/debugging` directory.\n```shell\npython3 xpk.py workload create \\\n  --workload xpk-test-workload --command \"python3 main.py\" \\\n  --cluster xpk-test --tpu-type=v5litepod-16 --deploy-stacktrace-sidecar\n```\n\n### Get information about jobs, queues and resources\n\nTo list available resources and queues, use the `xpk info` command. It lets you see local queues and cluster queues and check for available resources.\n\nTo see queues with usage and workload info, use:\n```shell\npython3 xpk.py info --cluster my-cluster\n```\n\nYou can specify which kind of resource (clusterqueue or localqueue) you want to see using the `--clusterqueue` or `--localqueue` flag.\n```shell\npython3 xpk.py info --cluster my-cluster --localqueue\n```\n\n
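Correspondingly, to show only cluster queues, pass the other flag from the pair above:\n\n```shell\npython3 xpk.py info --cluster my-cluster --clusterqueue\n```\n\n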
# Local testing with Kind\n\nTo facilitate development and testing locally, we have integrated support for testing with `kind`. This enables you to simulate a Kubernetes environment on your local machine.\n\n## Prerequisites\n\n- Install kind on your local machine. Follow the official documentation here: [Kind Installation Guide.](https://kind.sigs.k8s.io/docs/user/quick-start#installation)\n\n## Usage\n\nxpk interfaces seamlessly with kind to manage Kubernetes clusters locally, facilitating the orchestration and management of workloads. Below are the commands for managing clusters:\n\n### Cluster Create\n*   Cluster create:\n\n    ```shell\n    python3 xpk.py kind create \\\n    --cluster xpk-test\n    ```\n\n### Cluster Delete\n*   Cluster delete:\n\n    ```shell\n    python3 xpk.py kind delete \\\n    --cluster xpk-test\n    ```\n\n### Cluster List\n*   Cluster list:\n\n    ```shell\n    python3 xpk.py kind list\n    ```\n\n## Local Testing Basics\n\nLocal testing is available exclusively through the `batch` and `job` commands of xpk with the `--kind-cluster` flag. This allows you to simulate training jobs locally:\n\n```shell\npython xpk.py batch [other-options] --kind-cluster script\n```\n\nPlease note that all other xpk subcommands are intended for use with cloud systems on Google Compute Engine (GCE) and don't support local testing. This includes commands like `cluster`, `info`, `inspector`, etc.\n\n# Other advanced usage\n[Use a Jupyter notebook to interact with a Cloud TPU cluster](xpk-notebooks.md)\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAI-Hypercomputer%2Fxpk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FAI-Hypercomputer%2Fxpk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAI-Hypercomputer%2Fxpk/lists"}