Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/fofsinx/echoollama
echoOLlama: A real-time voice AI platform powered by local LLMs. Features WebSocket streaming, voice interactions, and OpenAI API compatibility. Built with FastAPI, Redis, and PostgreSQL. Perfect for private AI conversations and custom voice assistants.
agent docker docker-compose fastapi lgm llama llama3 llm multimodel-large-language-model ollama openai realtime-api
Last synced: 20 days ago
- Host: GitHub
- URL: https://github.com/fofsinx/echoollama
- Owner: fofsinx
- Created: 2024-11-01T18:12:51.000Z (about 2 months ago)
- Default Branch: openai
- Last Pushed: 2024-11-09T11:51:02.000Z (about 1 month ago)
- Last Synced: 2024-11-22T01:25:02.290Z (about 1 month ago)
- Topics: agent, docker, docker-compose, fastapi, lgm, llama, llama3, llm, multimodel-large-language-model, ollama, openai, realtime-api
- Language: Jupyter Notebook
- Homepage:
- Size: 8.2 MB
- Stars: 59
- Watchers: 2
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# `echoOLlama`: Reverse-engineered OpenAI's [Realtime API]

> Talk to your local LLM models in human voice and get responses in realtime!

![EchoOLlama Banner](https://github.com/user-attachments/assets/d2422917-b03a-48aa-88c8-d40f0884bd5e)
> **Active Development Alert!**
>
> We're cooking up something amazing! While the core functionality is taking shape, some features are still in the oven. Perfect for experiments, but maybe hold off on that production deployment for now!

## What's `echoOLlama`?

`echoOLlama` is a cool project that lets you talk to AI models using your voice, just like you'd talk to a real person!

Here's what makes it special:
- You can speak naturally and the AI understands you
- It works with local AI models (through Ollama) so your data stays private
- Super fast responses in real-time
- The AI talks back to you with a natural voice
- Works just like OpenAI's API but with your own models

Think of it like having a smart assistant that runs completely on your computer. You can have natural conversations with it, ask questions, get help with tasks - all through voice! And because it uses local AI models, you don't need to worry about your conversations being stored in the cloud.
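
Because the server aims for OpenAI API compatibility, the standard `openai` Python client should be able to point at a local echoOLlama instance. The snippet below is a minimal sketch rather than a tested recipe: the base URL, port, and model name are assumptions that depend on how you run the server and which models you have pulled in Ollama.

```python
# Hypothetical usage sketch: talk to a local echoOLlama server through its
# OpenAI-compatible layer. The base URL, port, and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local echoOLlama endpoint
    api_key="not-needed-locally",         # placeholder; no cloud key required
)

response = client.chat.completions.create(
    model="llama3",  # any model you have pulled in Ollama
    messages=[{"role": "user", "content": "Hello! Can you hear me?"}],
)
print(response.choices[0].message.content)
```
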
Perfect for developers who want to:
- Build voice-enabled AI applications
- Create custom AI assistants
- Experiment with local language models
- Have private AI conversations

### What's Working Now:

![EchoOLlama Banner](https://github.com/user-attachments/assets/5ce20abf-6982-4b6b-a824-58f7d91ef7cd)
- Connection handling and session management
- Real-time event streaming
- Redis-based session storage
- Basic database interactions
- OpenAI compatibility layer
- Core WebSocket infrastructure
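
As a rough illustration of what these working pieces enable, the sketch below connects to the server over a WebSocket and prints events as they stream back. It assumes the `websockets` package and an OpenAI-Realtime-style endpoint at `/v1/realtime` on port 8000; the actual path and event payloads are assumptions that depend on the current state of the compatibility layer.

```python
# Illustrative client for real-time event streaming over WebSockets.
# The endpoint path and event format are assumptions modeled on
# OpenAI's Realtime API, not a documented echoOLlama contract.
import asyncio
import json

import websockets


async def main() -> None:
    uri = "ws://localhost:8000/v1/realtime"  # assumed local endpoint
    async with websockets.connect(uri) as ws:
        # Ask the server for a short text response.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"instructions": "Say hello in one sentence."},
        }))
        # Print server events as they arrive.
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"), event)
            if event.get("type") == "response.done":
                break


asyncio.run(main())
```
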

### On the Roadmap:

- Message processing pipeline (In Progress)
- Advanced response generation with client events
- Function calling implementation with client events
- Audio transcription service connection with client events
- Text-to-speech integration with client events
- Usage analytics dashboard
- Enhanced authentication system

## Features & Capabilities

### Core Services

- **Real-time Chat**
  - Streaming responses via websockets
  - Multi-model support via Ollama
  - Session persistence
- Audio Transcription (FASTER_Whisper)
- Text-to-Speech (OpenedAI/Speech)
- **Coming Soon**
  - Function Calling System
  - Advanced Analytics

### Technical Goodies
- Lightning-fast response times
- Built-in rate limiting
- Usage tracking ready
- Load balancing for scale
- 100% OpenAI API compatibility

## System Architecture
> Click on the image to view the interactive version on Excalidraw!

## Tech Stack Spotlight

### Backend Champions

- FastAPI - Lightning-fast API framework
- Redis - Blazing-fast caching & session management
- PostgreSQL - Rock-solid data storage
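
To make the stack above concrete, here is a small sketch of how FastAPI and Redis are commonly combined for this kind of service: a WebSocket endpoint that keeps per-session state in Redis. It is an illustration of the general pattern, not echoOLlama's actual code; the route name, key layout, and echo behaviour are made up for the example.

```python
# Sketch of the FastAPI + Redis pattern: a WebSocket endpoint with
# per-session state in Redis. Illustrative only, not the project's code.
import json
import uuid

import redis
from fastapi import FastAPI, WebSocket

app = FastAPI()
sessions = redis.Redis(host="localhost", port=6379, decode_responses=True)


@app.websocket("/ws")
async def chat(ws: WebSocket) -> None:
    await ws.accept()
    session_id = str(uuid.uuid4())
    sessions.hset(f"session:{session_id}", mapping={"status": "open"})
    sessions.expire(f"session:{session_id}", 3600)  # drop idle sessions after an hour

    async for text in ws.iter_text():
        # Echo the message back; a real server would call the LLM here.
        sessions.hset(f"session:{session_id}", "last_message", text)
        await ws.send_text(json.dumps({"session": session_id, "echo": text}))
```
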

### AI Powerhouse

- Ollama - Local LLM inference
- faster_whisper - Speech recognition (coming soon)
- OpenedAI TTS - Voice synthesis (coming soon)
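
For the speech-recognition piece, `faster_whisper` can already be tried on its own even though the integration here is still on the roadmap. Below is a minimal transcription sketch; the model size, device, and audio file path are placeholders and independent of how echoOLlama will wire it in.

```python
# Standalone faster_whisper transcription sketch; model size, device, and
# the audio path are placeholders, unrelated to echoOLlama's integration.
from faster_whisper import WhisperModel

model = WhisperModel("base", device="cpu", compute_type="int8")
segments, info = model.transcribe("recording.wav", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```
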

## Get Started in 3, 2, 1...

1. **Clone & Setup**
```bash
git clone https://github.com/iamharshdev/EchoOLlama.git
cd EchoOLlama
python -m venv .venv
source .venv/bin/activate # or `.venv\Scripts\activate` on Windows
pip install -r requirements.txt
```

2. **Environment Setup**
```bash
cp .env.example .env
# Update .env with your config - check .env.example for all options!
make migrate # create db and apply migrations
```

3. **Launch Time**
```bash
# Fire up the services
docker-compose up -d

# Start the API server
uvicorn app.main:app --reload
```
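
Once the services are up, a quick way to check that the OpenAI-compatible layer is answering is to list the available models through the standard client. This is only a sanity-check sketch: it assumes the server exposes the usual `/v1/models` route on port 8000, which may differ in your setup.

```python
# Post-launch check against the (assumed) OpenAI-compatible /v1/models route;
# adjust the base URL if your server listens elsewhere.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")
for model in client.models.list():
    print(model.id)
```
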

## Join the EchoOLlama Family

Got ideas? Found a bug? Want to contribute? Check out our [CONTRIBUTING.md](CONTRIBUTING.md) guide and become part of something awesome! We love pull requests!

## Project Status Updates
- **Working**: Connection handling, session management, event streaming
- **In Progress**: Message processing, response generation
- **Planned**: Audio services, function calling, analytics

## License

MIT Licensed - Go wild! See [LICENSE](LICENSE) for the legal stuff.

---

*Built with ❤️ by the community, for the community.*

*PS: Star us on GitHub if you like what we're building!*