Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/collabora/WhisperFusion
WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
- Host: GitHub
- URL: https://github.com/collabora/WhisperFusion
- Owner: collabora
- Created: 2023-12-15T17:37:49.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-31T13:28:41.000Z (5 months ago)
- Last Synced: 2024-10-16T01:41:37.203Z (2 months ago)
- Language: Python
- Homepage:
- Size: 510 KB
- Stars: 1,520
- Watchers: 17
- Forks: 107
- Open Issues: 20
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- AiTreasureBox - collabora/WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI. (Repos)
- StarryDivineSky - collabora/WhisperFusion
README
# WhisperFusion
Seamless conversations with AI (with ultra-low latency)
Welcome to WhisperFusion. WhisperFusion builds upon the capabilities of
[WhisperLive](https://github.com/collabora/WhisperLive) and
[WhisperSpeech](https://github.com/collabora/WhisperSpeech) by
integrating Mistral, a Large Language Model (LLM), on top of the
real-time speech-to-text pipeline. Both the LLM and Whisper are
optimized to run efficiently as TensorRT engines, maximizing
performance and real-time processing capabilities, while WhisperSpeech
is optimized with torch.compile.
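Conceptually, the pipeline chains three stages: streaming speech-to-text, LLM response generation, and text-to-speech. A minimal, purely illustrative sketch of that flow follows; the stub functions are hypothetical placeholders, not WhisperFusion's actual API:

```python
# Purely illustrative sketch of the three pipeline stages; these stub
# functions are hypothetical placeholders, not WhisperFusion's actual API.

def transcribe_stream(audio_chunk: bytes) -> str:
    """Stand-in for WhisperLive: real-time speech-to-text (TensorRT engine)."""
    return "what is the weather like"

def generate_reply(text: str) -> str:
    """Stand-in for the Mistral LLM step (TensorRT-LLM engine)."""
    return f"You asked: {text!r}. Here is a reply."

def synthesize_speech(text: str) -> bytes:
    """Stand-in for WhisperSpeech: text-to-speech (optimized with torch.compile)."""
    return text.encode("utf-8")

for chunk in (b"\x00\x01", b"\x02\x03"):  # fake audio chunks
    audio_out = synthesize_speech(generate_reply(transcribe_stream(chunk)))
    print(len(audio_out), "bytes of synthesized audio")
```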
## Features

- **Real-Time Speech-to-Text**: Utilizes WhisperLive (based on OpenAI
  Whisper) to convert spoken language into text in real time.
- **Large Language Model Integration**: Adds Mistral, a Large Language
  Model, to enhance the understanding and context of the transcribed
  text.
- **TensorRT Optimization**: Both the LLM and Whisper are optimized to
  run as TensorRT engines, ensuring high performance and low-latency
  processing.
- **torch.compile**: WhisperSpeech uses torch.compile to speed up
  inference; torch.compile makes PyTorch code run faster by
  JIT-compiling it into optimized kernels (see the sketch below).
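As a toy demonstration of what torch.compile does (this function is not WhisperSpeech code; it only illustrates the mechanism):

```python
import torch

# Toy demonstration of torch.compile (not WhisperSpeech code): the first call
# JIT-compiles the function into optimized kernels; subsequent calls reuse them.
def fused_op(x: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.gelu(x @ x.transpose(-1, -2)).sum(dim=-1)

compiled_op = torch.compile(fused_op)

x = torch.randn(4, 80, 3000)
print(compiled_op(x).shape)  # compiled on first use; prints torch.Size([4, 80])
```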
## Hardware Requirements

- A GPU with at least 24GB of RAM
- For optimal latency, the GPU should have FP16 (half) TFLOPS similar to the RTX 4090. Here are the [hardware specifications](https://www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889) for the RTX 4090.

The demo was run on a single RTX 4090 GPU. WhisperFusion uses the Nvidia TensorRT-LLM library for CUDA-optimized versions of popular LLM models. TensorRT-LLM supports multiple GPUs, so it should be possible to run WhisperFusion on multiple GPUs for even better performance.
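A quick pre-flight check for the memory requirement above can be done with PyTorch; this helper is an assumption for illustration, not part of WhisperFusion itself:

```python
import torch

# Hypothetical pre-flight check for the requirement above (>= 24 GB of GPU
# memory); this helper is not part of WhisperFusion itself.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024 ** 3
    print(f"{props.name}: {vram_gb:.1f} GB of GPU memory")
    if vram_gb < 24:
        print("Warning: below the recommended 24 GB for WhisperFusion.")
else:
    print("No CUDA-capable GPU detected.")
```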
## Getting Started
We provide a Docker Compose setup to streamline the deployment of the pre-built TensorRT-LLM docker container. This setup includes both Whisper and Phi converted to TensorRT engines, and the WhisperSpeech model is pre-downloaded, so you can quickly start interacting with WhisperFusion. Additionally, we include a simple web server for the Web GUI.

- Build and run with docker compose:
```bash
mkdir docker/scratch-space
cp docker/scripts/build-* docker/scripts/run-whisperfusion.sh docker/scratch-space/
docker compose build
export MODEL=Phi-3-mini-4k-instruct  # or Phi-3-mini-128k-instruct or phi-2; by default WhisperFusion uses phi-2
docker compose up
```

- Start the Web GUI on `http://localhost:8000`
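A simple smoke test (an assumption for illustration, not part of the repo) can confirm the Web GUI is serving on the documented port after `docker compose up`:

```python
import urllib.request

# Hypothetical smoke test (not part of the repo): after `docker compose up`,
# confirm the Web GUI is serving on the documented port.
try:
    with urllib.request.urlopen("http://localhost:8000", timeout=5) as resp:
        print("Web GUI reachable, HTTP status:", resp.status)
except OSError as exc:
    print("Web GUI not reachable yet:", exc)
```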
**NOTE**
## Contact Us
For questions or issues, please open an issue. Contact us at:
[email protected], [email protected],
[email protected]