https://github.com/cedana/cedana-cli
Cedana: Access and run on compute anywhere in the world, on any provider. Migrate seamlessly between providers, arbitraging price/performance in realtime to maximize pure runtime.
- Host: GitHub
- URL: https://github.com/cedana/cedana-cli
- Owner: cedana
- License: agpl-3.0
- Created: 2023-07-12T19:01:36.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2025-05-05T16:37:32.000Z (11 months ago)
- Last Synced: 2025-05-05T16:48:25.798Z (11 months ago)
- Topics: ai, checkpointing, cpu, docker, gpu, linux
- Language: Go
- Homepage: https://docs.cedana.ai
- Size: 31.3 MB
- Stars: 58
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
# cedana-cli
[Cedana](https://cedana.ai) is a framework for the democratization and (eventually) commodification of compute. We achieve this by leveraging checkpoint/restore to seamlessly migrate work across machines, clouds, and beyond.
This repo contains a CLI tool that lets developers experiment with our system.
## Usage
To build & install from source:
```bash
make
```
To get started:
```bash
export CEDANA_URL="https://sandbox.cedana.ai/v1"
export CEDANA_AUTH_TOKEN=  # set to a token issued for your account
cedana-cli --help
```
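The two environment variables above can also be set in a small setup snippet before invoking the CLI. This is a minimal sketch, not part of the official tooling; it assumes `CEDANA_AUTH_TOKEN` is supplied by your environment (the fallback value here is a placeholder, not a real credential):

```shell
# Configure the Cedana sandbox endpoint; the token is a per-account secret
# and must be replaced with one issued for your account.
export CEDANA_URL="https://sandbox.cedana.ai/v1"
export CEDANA_AUTH_TOKEN="${CEDANA_AUTH_TOKEN:-placeholder-token}"
echo "Using endpoint: $CEDANA_URL"
```

With both variables exported, `cedana-cli --help` runs against the sandbox endpoint.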
## Documentation
Documentation is still a work in progress; see [docs.cedana.ai](https://docs.cedana.ai) in the meantime.
## Deprecation Notice
`cedana-cli` previously included a self-serve tool, but it has been retired in favor of full-time development on our managed platform. If you still wish to use it, revert to a previous version (<= v0.2.8).
### Deprecated functionality description
With it, you can:
- Launch instances anywhere, with guaranteed price and capacity optimization. We look across your configured providers (AWS, Paperspace, etc.) to select the optimal instance for a provided job spec, abstracting away cloud-infrastructure burdens. (Older versions only.)
- Deploy and manage any kind of job, whether a PyTorch training job, a web service, or a multibody physics simulation on Kubernetes.
Our managed system layers many more capabilities on top of this, such as lifecycle management, policy systems, automatic migration (via our novel checkpointing system, [cedana](https://github.com/cedana/cedana)), and much more.
To access our managed service, contact us.
## Contributing
See CONTRIBUTING.md for guidelines.