https://github.com/ethicalabs-ai/kurtis-e1-mlx-voice-agent
A lightweight voice companion, optimized for macOS.
- Host: GitHub
- URL: https://github.com/ethicalabs-ai/kurtis-e1-mlx-voice-agent
- Owner: ethicalabs-ai
- License: mit
- Created: 2025-02-13T02:21:26.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2025-03-24T22:49:12.000Z (3 months ago)
- Last Synced: 2025-03-24T23:28:43.619Z (3 months ago)
- Topics: coqui-tts, edge-ai, llm, llm-inference, llms, macos, macosx, mlx, openai-whisper, small-language-models, smollm2, sst, translation, tts, voice-assistant
- Language: Python
- Homepage: https://www.ethicalabs.ai
- Size: 157 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# 🧠 Kurtis-E1-MLX-Voice-Agent
A privacy-focused, **offline voice assistant for macOS**, powered by:
- 🧠 Local LLM inference via **Ollama** (setup sketch below; soon replaceable with [LM Studio](https://lmstudio.ai) for an MLX backend)
- 🎤 Speech-to-text via [MLX Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper)
- 🌍 Offline translations via [Unbabel/TowerInstruct-13B-v0.1](https://huggingface.co/Unbabel/TowerInstruct-13B-v0.1)
- 🗣️ High-quality multilingual TTS (currently using XTTS v2)

This project is designed **specifically for Apple Silicon Macs**.
It prioritizes simplicity, speed, and on-device privacy for empathetic mental health conversations.
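
Since the default LLM backend is Ollama, a minimal setup sketch might look like the one below. The model tag is a placeholder, not a name taken from this README; substitute whichever Kurtis-E1 build you run locally.

```bash
# Install and start Ollama (Homebrew shown here; the official installer also works).
brew install ollama
ollama serve &

# Pull the model the agent should use; <kurtis-e1-tag> is a placeholder.
ollama pull <kurtis-e1-tag>
```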
---
## 🚀 Quick Usage
We recommend using [`uv`](https://github.com/astral-sh/uv) as the Python runner:
```bash
uv run python3 -m kurtis_mlx
```

You can customize the following options (combined example below):
- `--language`: Select between `english`, `italian`, etc.
- `--translate`: Use your native language while chatting with an English-only LLM
- `--llm-model`: Defaults to Kurtis-E1 via Ollama
- `--tts-model`: Use a different voice model (e.g., XTTS v2)
- `--whisper-model`: Switch out Whisper variants
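
For example, several of these flags can be combined in a single run. The values below are illustrative assumptions; the model identifiers are placeholders rather than documented defaults:

```bash
# Chat in Italian while the agent translates to and from an English-only LLM.
# Flag values are illustrative; <mlx-whisper-variant> and <xtts-v2-model-id> are placeholders.
uv run python3 -m kurtis_mlx \
  --language italian \
  --translate \
  --whisper-model <mlx-whisper-variant> \
  --tts-model <xtts-v2-model-id>
```

---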
## 🔄 Goals
- ✅ Replace Ollama with **LM Studio's MLX endpoints** (endpoint sketch below)
- ✅ Faster startup and playback (TTS runs in a background worker)
- 🔐 100% offline: STT, LLM inference, and TTS run locally
- ☁️ Optional offline translation (only when `--translate` is enabled)
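
As a sketch of the LM Studio direction: LM Studio serves local models behind an OpenAI-compatible API (by default on port 1234), so the agent's LLM calls could eventually target an endpoint like the one below. The port and model name are assumptions, not project configuration.

```bash
# Query a locally served MLX model through LM Studio's OpenAI-compatible API.
# Port 1234 is LM Studio's usual default; <mlx-model-name> is a placeholder.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<mlx-model-name>",
        "messages": [{"role": "user", "content": "Hello, how are you feeling today?"}]
      }'
```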