https://github.com/dsba6010-llm-applications/modal-streamlit-chat
- Host: GitHub
- URL: https://github.com/dsba6010-llm-applications/modal-streamlit-chat
- Owner: dsba6010-llm-applications
- License: MIT
- Created: 2024-07-29T17:21:23.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-26T02:04:39.000Z (about 1 year ago)
- Last Synced: 2024-08-27T02:56:53.868Z (about 1 year ago)
- Language: Python
- Size: 12.7 KB
- Stars: 0
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# modal-streamlit-chat
First, create a `.streamlit/secrets.toml` file with the following contents:
```toml
# fill in
DSBA_LLAMA3_KEY=""
MODAL_BASE_URL="https://--vllm-openai-compatible-serve.modal.run"
```
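The app can then read these values through `st.secrets` (locally) or environment variables (on Modal). Below is a minimal, hypothetical sketch of that wiring — the model name and chat loop are illustrative, not this repo's actual `app.py`:
```python
# Hypothetical sketch, not the repo's app.py. Reads the key and base URL
# from the environment (Modal secrets) or .streamlit/secrets.toml (local),
# then talks to the vLLM endpoint's OpenAI-compatible API.
import os

import streamlit as st
from openai import OpenAI

api_key = os.environ.get("DSBA_LLAMA3_KEY") or st.secrets["DSBA_LLAMA3_KEY"]
base_url = os.environ.get("MODAL_BASE_URL") or st.secrets["MODAL_BASE_URL"]

# The configured endpoint excludes v1/, so append it for the OpenAI client.
client = OpenAI(api_key=api_key, base_url=base_url + "/v1")

st.title("modal-streamlit-chat")
if prompt := st.chat_input("Ask the model something"):
    st.chat_message("user").write(prompt)
    reply = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    st.chat_message("assistant").write(reply.choices[0].message.content)
```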
# To run locally:
```bash
$ python3.11 -m venv venv
$ source venv/bin/activate
$ python -m pip install -r requirements.txt
$ python -m streamlit run app.py
```
# To run on Modal:
Make sure you have a [Modal account](https://modal.com/).
First, sign in:
```bash
# sign in
$ python -m modal setup
```
Then create two Modal secrets: `dsba-llama3-key`, containing the key `DSBA_LLAMA3_KEY`, and `modal-base-url`, containing the key `MODAL_BASE_URL` set to your LLM serving endpoint (not including `v1/`).
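You can create both secrets in the Modal dashboard or, as a sketch, from the CLI (the values below are placeholders):
```bash
$ modal secret create dsba-llama3-key DSBA_LLAMA3_KEY="<your-api-key>"
$ modal secret create modal-base-url MODAL_BASE_URL="<your-serving-endpoint>"
```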
You can run a temporary "dev" environment to test:
```bash
# to test
$ modal serve modal/serve_streamlit.py
```
Or deploy it as a new app on Modal:
```bash
# when ready to deploy
$ modal deploy modal/serve_streamlit.py
```
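For orientation, `modal/serve_streamlit.py` presumably follows Modal's standard pattern for serving Streamlit: build an image that bundles `app.py`, attach the two secrets, and launch Streamlit behind a web server. The sketch below assumes that pattern and is not a copy of this repo's script; the file paths, dependencies, and port are guesses:
```python
# Hypothetical sketch of a Modal Streamlit launcher, following Modal's
# published Streamlit-serving pattern. Not this repo's actual script.
import shlex
import subprocess

import modal

# Assumed image: the app's dependencies plus app.py copied into the container.
image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("streamlit", "openai")
    .add_local_file("app.py", "/root/app.py")
)

app = modal.App("modal-streamlit-chat", image=image)


@app.function(
    secrets=[
        modal.Secret.from_name("dsba-llama3-key"),
        modal.Secret.from_name("modal-base-url"),
    ],
)
@modal.web_server(8000)
def run():
    # Start Streamlit as a background process; Modal proxies port 8000.
    cmd = (
        "streamlit run /root/app.py --server.port 8000 "
        "--server.enableCORS=false --server.enableXsrfProtection=false"
    )
    subprocess.Popen(shlex.split(cmd))
```
With this layout, `modal serve` gives you a temporary URL that updates as you edit, while `modal deploy` publishes a persistent one.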