https://github.com/dstackai/llm-weaver
This Streamlit app helps deploy LLMs to your cloud (AWS, GCP, Azure, Lambda Cloud) via a user interface and access them for inference.
- Host: GitHub
- URL: https://github.com/dstackai/llm-weaver
- Owner: dstackai
- License: apache-2.0
- Created: 2023-10-25T09:20:49.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-11-16T21:37:35.000Z (about 2 years ago)
- Last Synced: 2025-05-11T17:58:26.140Z (9 months ago)
- Language: Python
- Homepage:
- Size: 1.07 MB
- Stars: 8
- Watchers: 1
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# LLM Weaver
This Streamlit app lets you fine-tune and deploy LLMs using your cloud (AWS, GCP, Azure, Lambda Cloud, TensorDock, Vast.ai, etc.) through a user interface, and then access them for inference.

To run workloads in the cloud, the app uses [`dstack`](https://github.com/dstackai/dstack).
## Get started
### 1. Install requirements
```shell
pip install -r requirements.txt
```
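If you prefer an isolated environment, you can install the requirements into a virtual environment first (optional, standard Python tooling):
```shell
# optional: create and activate a virtual environment before installing
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```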
### 2. Set up the `dstack` server
> If you have default AWS, GCP, or Azure credentials on your machine, the dstack server will pick them up automatically.
> Otherwise, you need to manually specify the cloud credentials in `~/.dstack/server/config.yml`. For further details, refer to [server configuration](https://dstack.ai/docs/configuration/server/).
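As a rough sketch, an AWS entry in `~/.dstack/server/config.yml` might look like the following; the exact schema depends on your `dstack` version, so verify it against the linked docs, and note the keys below are placeholders:
```yaml
# ~/.dstack/server/config.yml -- illustrative only; check the dstack
# server-configuration docs for the schema your version expects.
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: access_key
          access_key: <YOUR_AWS_ACCESS_KEY>   # placeholder
          secret_key: <YOUR_AWS_SECRET_KEY>   # placeholder
```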
Once the clouds are configured, start the server:
```shell
dstack server
```
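Leave the server running in a separate terminal; the app talks to it to provision cloud resources. (In recent `dstack` versions the server listens on `http://127.0.0.1:3000` by default.)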
Now you're good to run the app.
### 3. Run the app
```shell
streamlit run Inference.py
```
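Streamlit serves the app at `http://localhost:8501` by default. If that port is taken, you can pass a different one via Streamlit's standard flag:
```shell
# optional: run on a custom port instead of Streamlit's default 8501
streamlit run Inference.py --server.port 8600
```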