{"id":13633266,"url":"https://github.com/dstackai/dstack","last_synced_at":"2026-01-21T12:10:08.276Z","repository":{"id":37497977,"uuid":"444377346","full_name":"dstackai/dstack","owner":"dstackai","description":"dstack is an open-source alternative to Kubernetes and Slurm, designed to simplify GPU allocation and AI workload orchestration for ML teams across top clouds, on-prem clusters, and accelerators.","archived":false,"fork":false,"pushed_at":"2025-04-26T20:46:07.000Z","size":119525,"stargazers_count":1764,"open_issues_count":113,"forks_count":168,"subscribers_count":13,"default_branch":"master","last_synced_at":"2025-04-27T04:42:50.956Z","etag":null,"topics":["amd","aws","azure","cloud","docker","fine-tuning","gcp","gpu","inference","k8s","kubernetes","llms","machine-learning","nvidia","orchestration","python","slurm","training"],"latest_commit_sha":null,"homepage":"https://dstack.ai/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dstackai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-01-04T10:29:46.000Z","updated_at":"2025-04-26T19:31:33.000Z","dependencies_parsed_at":"2024-01-22T15:47:04.389Z","dependency_job_id":"537a0739-9110-40c1-ae75-2c6441a6bd31","html_url":"https://github.com/dstackai/dstack","commit_stats":{"total_commits":2315,"total_committers":49,"mean_commits":"47.244897959183675","dds":0.6211663066954644,"last_synced_commit":"416e27fece876f4372d96f407030bfadf6ec4766"},"previous_names":[],"tags_count":189,"template":false,"template_ful
l_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dstackai%2Fdstack","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dstackai%2Fdstack/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dstackai%2Fdstack/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dstackai%2Fdstack/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dstackai","download_url":"https://codeload.github.com/dstackai/dstack/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251089406,"owners_count":21534511,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["amd","aws","azure","cloud","docker","fine-tuning","gcp","gpu","inference","k8s","kubernetes","llms","machine-learning","nvidia","orchestration","python","slurm","training"],"created_at":"2024-08-01T23:00:32.248Z","updated_at":"2026-01-08T20:01:35.171Z","avatar_url":"https://github.com/dstackai.png","language":"Python","readme":"\u003cdiv style=\"text-align: center;\"\u003e\n\u003ch2\u003e\n  \u003ca target=\"_blank\" href=\"https://dstack.ai\"\u003e\n    \u003cpicture\u003e\n      \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://raw.githubusercontent.com/dstackai/dstack/master/docs/assets/images/dstack-logo-dark.svg\"/\u003e\n      \u003cimg alt=\"dstack\" src=\"https://raw.githubusercontent.com/dstackai/dstack/master/docs/assets/images/dstack-logo.svg\" width=\"350px\"/\u003e\n    \u003c/picture\u003e\n  
\u003c/a\u003e\n\u003c/h2\u003e\n\n[![Last commit](https://img.shields.io/github/last-commit/dstackai/dstack?style=flat-square)](https://github.com/dstackai/dstack/commits/)\n[![PyPI - License](https://img.shields.io/pypi/l/dstack?style=flat-square\u0026color=blue)](https://github.com/dstackai/dstack/blob/master/LICENSE.md)\n[![Discord](https://img.shields.io/discord/1106906313969123368?style=flat-square)](https://discord.gg/u8SmfwPpMd)\n\n\u003c/div\u003e\n\n`dstack` is a unified control plane for GPU provisioning and orchestration that works with any GPU cloud, Kubernetes, or on-prem clusters. \n\nIt streamlines development, training, and inference, and is compatible with any hardware, open-source tools, and frameworks.\n\n#### Hardware\n\n`dstack` supports `NVIDIA`, `AMD`, `Google TPU`, `Intel Gaudi`, and `Tenstorrent` accelerators out of the box.\n\n## Latest news ✨\n- [2025/12] [dstack 0.20.0: Fleet-first UX, Events, and more](https://github.com/dstackai/dstack/releases/tag/0.20.0)\n- [2025/11] [dstack 0.19.38: Routers, SGLang Model Gateway integration](https://github.com/dstackai/dstack/releases/tag/0.19.38)\n- [2025/10] [dstack 0.19.31: Kubernetes, GCP A4 spot](https://github.com/dstackai/dstack/releases/tag/0.19.31)\n- [2025/08] [dstack 0.19.26: Repos](https://github.com/dstackai/dstack/releases/tag/0.19.26)\n- [2025/08] [dstack 0.19.22: Service probes, GPU health-checks, Tenstorrent Galaxy](https://github.com/dstackai/dstack/releases/tag/0.19.22)\n- [2025/07] [dstack 0.19.21: Scheduled tasks](https://github.com/dstackai/dstack/releases/tag/0.19.21)\n- [2025/07] [dstack 0.19.17: Secrets, Files, Rolling deployment](https://github.com/dstackai/dstack/releases/tag/0.19.17)\n\n## How does it work?\n\n\u003cpicture\u003e\n  \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v11-dark.svg\"/\u003e\n  \u003cimg 
src=\"https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v11.svg\" width=\"750\" /\u003e\n\u003c/picture\u003e\n\n### Installation\n\n\u003e Before using `dstack` through CLI or API, set up a `dstack` server. If you already have a running `dstack` server, you only need to [set up the CLI](#set-up-the-cli).\n\n#### Set up the server\n\n##### Configure backends\n\nTo orchestrate compute across cloud providers or existing Kubernetes clusters, you need to configure backends.\n\nBackends can be set up in `~/.dstack/server/config.yml` or through the [project settings page](https://dstack.ai/docs/concepts/projects#backends) in the UI.\n\nFor more details, see [Backends](https://dstack.ai/docs/concepts/backends).\n\n\u003e When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up.\n\n##### Start the server\n\nYou can install the server on Linux, macOS, and Windows (via WSL 2). It requires Git and\nOpenSSH.\n\n##### uv\n\n```shell\n$ uv tool install \"dstack[all]\" -U\n```\n\n##### pip\n\n```shell\n$ pip install \"dstack[all]\" -U\n```\n\nOnce it's installed, go ahead and start the server.\n\n```shell\n$ dstack server\nApplying ~/.dstack/server/config.yml...\n\nThe admin token is \"bbae0f28-d3dd-4820-bf61-8f4bb40815da\"\nThe server is running at http://127.0.0.1:3000/\n```\n\n\u003e For more details on server configuration options, see the\n[Server deployment](https://dstack.ai/docs/guides/server-deployment) guide.\n\n\n\u003cdetails\u003e\u003csummary\u003eSet up the CLI\u003c/summary\u003e\n\n#### Set up the CLI\n\nOnce the server is up, you can access it via the `dstack` CLI. \n\nThe CLI can be installed on Linux, macOS, and Windows. 
It requires Git and OpenSSH.\n\n##### uv\n\n```shell\n$ uv tool install dstack -U\n```\n\n##### pip\n\n```shell\n$ pip install dstack -U\n```\n\nTo point the CLI to the `dstack` server, configure it\nwith the server address, user token, and project name:\n\n```shell\n$ dstack project add \\\n    --name main \\\n    --url http://127.0.0.1:3000 \\\n    --token bbae0f28-d3dd-4820-bf61-8f4bb40815da\n    \nConfiguration is updated at ~/.dstack/config.yml\n```\n\n\u003c/details\u003e\n\n### Define configurations\n\n`dstack` supports the following configurations:\n   \n* [Dev environments](https://dstack.ai/docs/dev-environments) \u0026mdash; for interactive development using a desktop IDE\n* [Tasks](https://dstack.ai/docs/tasks) \u0026mdash; for scheduling jobs (incl. distributed jobs) or running web apps\n* [Services](https://dstack.ai/docs/services) \u0026mdash; for deployment of models and web apps (with auto-scaling and authorization)\n* [Fleets](https://dstack.ai/docs/fleets) \u0026mdash; for managing cloud and on-prem clusters\n* [Volumes](https://dstack.ai/docs/concepts/volumes) \u0026mdash; for managing persistent volumes\n* [Gateways](https://dstack.ai/docs/concepts/gateways) \u0026mdash; for configuring ingress traffic and public endpoints\n\nConfigurations can be defined as YAML files within your repo.\n\n### Apply configurations\n\nApply the configuration either via the `dstack apply` CLI command or through a programmatic API.\n\n`dstack` automatically manages provisioning, job queuing, auto-scaling, networking, volumes, run failures,\nout-of-capacity errors, port-forwarding, and more \u0026mdash; across clouds and on-prem clusters.\n\n## Useful links\n\nFor additional information, see the following links:\n\n* [Docs](https://dstack.ai/docs)\n* [Examples](https://dstack.ai/examples)\n* [Discord](https://discord.gg/u8SmfwPpMd)\n\n## Contributing\n\nYou're very welcome to contribute to `dstack`. 
\nLearn more about how to contribute to the project at [CONTRIBUTING.md](CONTRIBUTING.md).\n\n## License\n\n[Mozilla Public License 2.0](LICENSE.md)\n","funding_links":[],"categories":["LLMOps","Python","Model Training and Orchestration","Workflow Tools"],"sub_categories":["Observability","Popular-LLM"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdstackai%2Fdstack","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdstackai%2Fdstack","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdstackai%2Fdstack/lists"}