https://github.com/coqui-ai/xtts-streaming-server
- Host: GitHub
- URL: https://github.com/coqui-ai/xtts-streaming-server
- Owner: coqui-ai
- License: mpl-2.0
- Created: 2023-10-25T09:31:30.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-06-26T21:25:40.000Z (over 1 year ago)
- Last Synced: 2025-03-29T15:04:42.236Z (7 months ago)
- Language: Python
- Size: 291 KB
- Stars: 318
- Watchers: 10
- Forks: 94
- Open Issues: 15
Metadata Files:
- Readme: README.md
- License: LICENSE
# XTTS streaming server
*Warning: XTTS-streaming-server doesn't support concurrent streaming requests; it is a demo server, not meant for production.*

https://github.com/coqui-ai/xtts-streaming-server/assets/17219561/7220442a-e88a-4288-8a73-608c4b39d06c
## 1) Run the server
### Use a pre-built image
CUDA 12.1:
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
```

CUDA 11.8 (for older cards):
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```

CPU (not recommended):
```bash
$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
```

Run with a fine-tuned model:
Make sure the model folder `/path/to/model/folder` contains the following files:
- `config.json`
- `model.pth`
- `vocab.json`

```bash
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```

Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates you have read and agreed to
the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also under the [CPML license](https://coqui.ai/cpml).)

### Build the image yourself
To build the Docker image yourself (PyTorch 2.1, CUDA 11.8):
`DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile.
```bash
$ git clone git@github.com:coqui-ai/xtts-streaming-server.git
$ cd xtts-streaming-server/server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```

Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates you have read and agreed to
the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also under the [CPML license](https://coqui.ai/cpml).)

## 2) Testing the running server
Once your Docker container is running, you can test that it's working properly. You will need to run the following code from a fresh terminal.
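If you want to script this check rather than test by hand, a minimal readiness probe can poll the server before running the demos. The `/docs` path used below is an assumption (FastAPI serves its interactive API docs there by default), as is the `localhost:8000` address, which matches the `-p 8000:80` mapping in the `docker run` commands above; adjust both if your setup differs.

```python
# Poll the XTTS streaming server until it responds, or give up after
# `timeout` seconds. The URL is an assumption -- see the note above.
import time
import urllib.error
import urllib.request

def wait_for_server(url="http://localhost:8000/docs", timeout=60.0):
    """Return True once `url` answers with HTTP 200, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(1.0)  # server not up yet; retry shortly
    return False
```

A loop like this is handy in CI or demo scripts, since the container takes a while to download the model on first start.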
### Clone `xtts-streaming-server` if you haven't already
```bash
$ git clone git@github.com:coqui-ai/xtts-streaming-server.git
```

### Using the Gradio demo
```bash
$ cd xtts-streaming-server
$ python -m pip install -r test/requirements.txt
$ python demo.py
```

### Using the test script
```bash
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
```
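For reference, the point of the streaming test is to consume the server's chunked HTTP response as it arrives rather than waiting for the full file. The core pattern can be sketched independently of the server; the function below is a generic illustration of that pattern, not the repo's actual code (see `test/test_streaming.py` for the real request shape).

```python
# Drain a streamed (chunked) HTTP body incrementally. `stream` is any
# file-like object with .read(), e.g. the response object returned by
# urllib.request.urlopen(); `sink` is any writable binary object.
import io

def drain_stream(stream, sink, chunk_size=4096):
    """Copy `stream` into `sink` chunk by chunk; return total bytes copied."""
    total = 0
    while chunk := stream.read(chunk_size):
        sink.write(chunk)
        total += len(chunk)
    return total
```

Against a real server, `sink` would typically be an audio playback stream or a `.wav` writer, so each chunk can be heard as soon as it arrives.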