https://github.com/shamspias/chattertwin
ChatterTwin makes your computer talk using your voice and text files. Just add your audio and text, then run it.
- Host: GitHub
- URL: https://github.com/shamspias/chattertwin
- Owner: shamspias
- License: MIT
- Created: 2025-07-20T20:18:39.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-07-20T20:30:35.000Z (3 months ago)
- Last Synced: 2025-07-20T22:14:04.576Z (3 months ago)
- Language: Python
- Size: 7.81 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# ChatterTwin
**Personalized voice cloning & text-to-speech using [Chatterbox TTS](https://huggingface.co/ResembleAI/chatterbox).**
## Features
- Generate speech in your voice (via prompt audio)
- Easy batch synthesis: Put text files in `texts/` and reference audio in `audio_prompts/`
- Supports emotion exaggeration and CFG control
- Automatic device detection (CUDA GPU, Apple Silicon, or CPU)
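The device-detection logic could look like this minimal sketch (assuming PyTorch is the backend; the actual helper in `src/utils.py` may differ):

```python
def pick_device(forced=None):
    """Return the best available device name: 'cuda', 'mps', or 'cpu'.

    `forced` mirrors a hypothetical --device override; when given, it
    wins over auto-detection.
    """
    if forced:
        return forced
    try:
        import torch
        # Prefer an NVIDIA GPU, then Apple Silicon's Metal backend.
        if torch.cuda.is_available():
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    except ImportError:
        pass  # torch not installed; fall back to CPU
    return "cpu"
```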
## Quick Start
1. Put a clean WAV/MP3 of your voice in `audio_prompts/` (e.g. `me.wav`)
2. Put `.txt` files you want to synthesize in `texts/`
3. Install requirements:
```bash
pip install -r requirements.txt
```
4. Run:
```bash
python main.py synthesize --audio_prompt audio_prompts/me.wav
```
(WAVs will appear in `outputs/`)
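The batch step above amounts to mapping each `.txt` file in `texts/` to a WAV in `outputs/`. A sketch of that planning stage (`plan_batch` is a hypothetical helper, not necessarily how `main.py` organizes it):

```python
from pathlib import Path

def plan_batch(texts_dir="texts", out_dir="outputs"):
    """Pair every .txt file with the output WAV path it will produce.

    Returns a sorted list of (text_path, wav_path) tuples, e.g.
    texts/intro.txt -> outputs/intro.wav.
    """
    out = Path(out_dir)
    jobs = []
    for txt in sorted(Path(texts_dir).glob("*.txt")):
        jobs.append((txt, out / (txt.stem + ".wav")))
    return jobs
```

Each pair would then be fed to the synthesizer with the chosen reference audio.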
## Advanced
- Set emotion: `--exaggeration 0.8`
- Change CFG: `--cfg 0.3`
- Force device: `--device cuda` or `--device mps`
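Taken together, the flags above suggest a CLI along these lines (a sketch with assumed defaults; `main.py`'s real parser may define more options or different default values):

```python
import argparse

def build_parser():
    """argparse parser covering the documented synthesize options."""
    p = argparse.ArgumentParser(prog="main.py")
    sub = p.add_subparsers(dest="command")
    syn = sub.add_parser("synthesize")
    syn.add_argument("--audio_prompt", required=True,
                     help="reference WAV/MP3 of your voice")
    syn.add_argument("--exaggeration", type=float, default=0.5,
                     help="emotion exaggeration (default assumed)")
    syn.add_argument("--cfg", type=float, default=0.5,
                     help="classifier-free guidance weight (default assumed)")
    syn.add_argument("--device", choices=["cuda", "mps", "cpu"], default=None,
                     help="force a device instead of auto-detecting")
    return p
```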
## Project Structure
```
ChatterTwin/
├── audio_prompts/
├── texts/
├── outputs/
├── src/
│   ├── synthesizer.py
│   └── utils.py
├── main.py
├── requirements.txt
└── README.md
```
---
**Made with Pain using ResembleAI/Chatterbox**