https://github.com/stellarbear/whisper.cpp.docker
run whisper.cpp in docker
docker docker-compose gpu whisper-cpp
- Host: GitHub
- URL: https://github.com/stellarbear/whisper.cpp.docker
- Owner: stellarbear
- License: apache-2.0
- Created: 2024-01-03T08:23:45.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-14T09:12:46.000Z (over 1 year ago)
- Last Synced: 2025-01-11T17:47:13.479Z (about 1 year ago)
- Topics: docker, docker-compose, gpu, whisper-cpp
- Language: Shell
- Homepage:
- Size: 11.7 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Run [whisper.cpp](https://github.com/ggerganov/whisper.cpp) in a Docker container with GPU support.
## TLDR
```
docker compose up
```
or
```
MODEL=large-v2 LANGUAGE=ru docker compose up
```
## Step by step
### 1. Build the CUDA image (one-time)
```
docker compose build --progress=plain
```
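For GPU access, the compose file needs a device reservation for the NVIDIA runtime. A minimal sketch of what such a service might look like (the service name, volume paths, and environment wiring are illustrative; see the repository's actual `docker-compose.yml`):

```
# docker-compose.yml (sketch — names are illustrative)
services:
  whisper:
    build: .
    environment:
      - MODEL=${MODEL:-large-v2}        # overridable via MODEL=... docker compose up
      - LANGUAGE=${LANGUAGE:-ru}
    volumes:
      - ./models:/models
      - ./volume:/volume
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` block with `capabilities: [gpu]` is the Compose-spec way to expose NVIDIA GPUs to a container; it requires the NVIDIA Container Toolkit on the host.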
### 2. Download models (one-time)
You may want to run this manually so you can watch the download progress:
```
./models/download.sh large-v2
```
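The script drops a `ggml-*.bin` file into `./models/`. A quick sanity check before running the stack (the file name follows whisper.cpp's ggml naming convention; adjust it if you fetched a different model):

```
# Verify the downloaded model is present and non-empty
MODEL_FILE=./models/ggml-large-v2.bin
if [ -s "$MODEL_FILE" ]; then
  echo "model ready: $MODEL_FILE"
else
  echo "model missing or empty: $MODEL_FILE" >&2
fi
```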
This script is a plain copy of [download-ggml-model.sh](https://github.com/ggerganov/whisper.cpp/blob/master/models/download-ggml-model.sh).
You can find additional information and configuration options [here](https://github.com/ggerganov/whisper.cpp/tree/master/models).
### 3. Prepare your files
Place all your input files in the `./volume/input/` directory.
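A minimal staging sketch (the directory layout follows the compose volume mounts; the source path is a placeholder for your own recordings):

```
# Create the mounted directories and stage an audio file
mkdir -p ./volume/input ./volume/output
cp /path/to/recording.wav ./volume/input/ 2>/dev/null || true  # replace with your files
ls ./volume/input/
```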
### 4. Run Docker Compose
```
docker compose up
```
You can override the defaults:
```
MODEL=large-v2 LANGUAGE=ru docker compose up
MODEL=large-v3 LANGUAGE=ru docker compose up
MODEL=large-v3-turbo LANGUAGE=ru docker compose up
```
| Argument | Values | Default |
| -------- | ------ | ------- |
| `MODEL` | base, medium, large, [other options](https://github.com/ggerganov/whisper.cpp/blob/master/models/download-ggml-model.sh#L25) | large-v2 |
| `LANGUAGE` | en, ru, fr, etc. (depends on the model) | ru |
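Docker Compose also reads variables from a `.env` file in the project directory, so you can persist your defaults instead of prefixing every command (this assumes the compose file interpolates `MODEL` and `LANGUAGE` as shown above):

```
# .env — picked up automatically by docker compose
MODEL=large-v3-turbo
LANGUAGE=en
```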
### 5. Result
You can find the results in the `./volume/output/` directory.
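A quick way to check what a run produced (output file names and extensions depend on the whisper.cpp flags the container uses; plain-text transcripts are typical):

```
# Count the generated output files
mkdir -p ./volume/output                          # no-op once the container has run
COUNT=$(ls ./volume/output | wc -l | tr -d ' ')   # tr strips padding from BSD wc
echo "files in ./volume/output: $COUNT"
```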