https://github.com/kazkozdev/murmur
🔄 Sophisticated multi-agent LLM system orchestrating specialized AI agents for high-accuracy processing. Integrates Interpreter, Reasoner, Generator, and Critic agents using Gemma, Mistral and Llama models.
- Host: GitHub
- URL: https://github.com/kazkozdev/murmur
- Owner: KazKozDev
- License: mit
- Created: 2024-07-11T10:59:11.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-11-10T15:27:41.000Z (6 months ago)
- Last Synced: 2025-04-12T10:13:15.509Z (about 1 month ago)
- Topics: agents, ai, aiohttp, artificial-intelligence, async-python, large-language-models, llm, local, local-llm, machine-learning, multi-agent-systems, nlp, ollama, orchestrator, python
- Language: Python
- Homepage: https://ollama.com
- Size: 1.98 MB
- Stars: 3
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
# Murmur
[MIT License](https://opensource.org/licenses/MIT)
[Python 3.7+](https://www.python.org/downloads/)
[Issues](https://github.com/KazKozDev/murmur/issues)
[Stars](https://github.com/KazKozDev/murmur/stargazers)
[Forks](https://github.com/KazKozDev/murmur/network)
[Pull Requests](https://github.com/KazKozDev/murmur/pulls)
[Code style: Black](https://github.com/psf/black)
[aiohttp](https://docs.aiohttp.org/)

A sophisticated multi-agent system that orchestrates specialized AI agents using local LLM models to process and respond to user queries. The system implements a pipeline of Interpreter, Reasoner, Generator, and Critic agents to produce well-reasoned, refined responses.
## 🌟 Features
- **Multi-Agent Architecture**: Four specialized agents working in concert:
  - **Interpreter**: Analyzes user intent and context
  - **Reasoner**: Develops a logical approach to the problem
  - **Generator**: Creates initial responses
  - **Critic**: Reviews and refines generated content
- **Local LLM Integration**: Works with locally hosted language models through a REST API (see the sketch after this list)
- **Asynchronous Processing**: Built with `asyncio` for efficient concurrent operations
- **Robust Error Handling**: Comprehensive error management with retries and graceful fallbacks
- **Conversation Memory**: Maintains context through conversation history
- **Confidence Scoring**: Evaluates response quality with multiple metrics
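Since the agents talk to a locally hosted model over REST, here is a minimal sketch of what such a call can look like with `aiohttp` against Ollama's `/api/generate` endpoint. The model name and prompt are placeholders; this is an illustration of the integration pattern, not Murmur's actual code:

```python
import asyncio

import aiohttp

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

async def query_model(model: str, prompt: str) -> str:
    """Send one non-streaming generation request to a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    async with aiohttp.ClientSession() as session:
        async with session.post(OLLAMA_URL, json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data["response"]  # Ollama returns the completion text here

if __name__ == "__main__":
    print(asyncio.run(query_model("gemma2:9b", "Say hello in one word.")))
```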
## 🔧 Prerequisites
- Python 3.7+
- Local LLM server (compatible with Ollama API)
- Required Python packages:
```
aiohttp
asyncio
```

(Note: on Python 3.7+, `asyncio` ships with the standard library, so only `aiohttp` needs a separate install.)

## 🚀 Installation
1. Clone the repository:
```bash
git clone https://github.com/KazKozDev/murmur.git
cd murmur
```

2. Install dependencies:
```bash
pip install -r requirements.txt
```

3. Ensure your local LLM server is running (default: http://localhost:11434).
## 💻 Usage
1. Navigate to the project directory and run:
```bash
cd murmur
python src/main.py
```

Or navigate directly to the source directory:
```bash
cd murmur/src
python main.py
```

2. Enter your queries when prompted. Type 'quit' to exit.
Example interaction:
```
Enter your message: What is the capital of France?

Response: The capital of France is Paris.
Confidence: 0.95
```

## 🏗️ Architecture
The system follows a pipeline architecture (sketched in code after this list):
1. **User Input** → **Interpreter Agent**
   - Analyzes core intent and context
   - Identifies implicit requirements
2. **Interpreted Message** → **Reasoner Agent**
   - Breaks down the problem
   - Develops a logical approach
3. **Reasoning** → **Generator Agent**
   - Creates the initial response
   - Structures content clearly
4. **Generated Content** → **Critic Agent**
   - Reviews for accuracy and completeness
   - Suggests improvements
   - Produces the final version
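A minimal sketch of this pipeline, assuming the default models listed under Configuration and a `query_model` helper like the one sketched in the Features section. The prompts and function names are illustrative, not taken from Murmur's source:

```python
import asyncio

async def query_model(model: str, prompt: str) -> str:
    """Stand-in for the LLM call; see the aiohttp sketch in the Features section."""
    raise NotImplementedError

async def run_pipeline(user_input: str) -> str:
    # 1. Interpreter: identify intent and context
    interpreted = await query_model("mistral-nemo:latest",
                                    f"Identify the intent and context of: {user_input}")
    # 2. Reasoner: develop a logical approach
    plan = await query_model("llama3.2-vision:11b",
                             f"Outline a step-by-step approach for: {interpreted}")
    # 3. Generator: draft an initial response
    draft = await query_model("gemma2:9b",
                              f"Answer the request following this plan:\n{plan}")
    # 4. Critic: review and refine the draft
    return await query_model("llama3.2-vision:11b",
                             f"Review and improve this answer:\n{draft}")
```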
## ⚙️ Configuration
The system uses the following default models:
- Interpreter: mistral-nemo:latest
- Reasoner: llama3.2-vision:11b
- Generator: gemma2:9b
- Critic: llama3.2-vision:11b

Models can be configured by modifying the `AgentOrchestrator` initialization.
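As a hedged illustration, that initialization might look like the following; the import path and keyword argument names are hypothetical, so check the actual `AgentOrchestrator` signature in the source before relying on them:

```python
from src.main import AgentOrchestrator  # assumed import path

# Hypothetical keyword names - verify against the real constructor.
orchestrator = AgentOrchestrator(
    interpreter_model="mistral-nemo:latest",
    reasoner_model="llama3.2-vision:11b",
    generator_model="gemma2:9b",
    critic_model="llama3.2-vision:11b",
)
```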
## 🔐 Error Handling
The system implements multiple layers of error handling:
- Connection retries (max 3 attempts)
- Timeout management
- Graceful degradation
- Comprehensive error logging
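A minimal sketch of that retry/timeout pattern with `aiohttp` (the three attempts mirror the limit above; illustrative only, not Murmur's actual code):

```python
import asyncio
import logging

import aiohttp

async def post_with_retries(url: str, payload: dict, max_attempts: int = 3) -> dict:
    """POST JSON with retries, exponential backoff, and a request timeout."""
    timeout = aiohttp.ClientTimeout(total=60)  # timeout management
    for attempt in range(1, max_attempts + 1):
        try:
            async with aiohttp.ClientSession(timeout=timeout) as session:
                async with session.post(url, json=payload) as resp:
                    resp.raise_for_status()
                    return await resp.json()
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            logging.warning("Attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise  # graceful degradation is left to the caller
            await asyncio.sleep(2 ** attempt)  # exponential backoff
```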
## 🤝 Contributing
1. Fork the repository (https://github.com/KazKozDev/murmur/fork)
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## 📝 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🪲 Known Issues
- High CPU usage with multiple concurrent requests
- Memory consumption may increase with long conversations
- Some LLM models may require significant local resources
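One common mitigation for the conversation-memory growth noted above is to cap the retained history. A minimal sketch, not taken from Murmur's source (the cap of 20 turns is an arbitrary example):

```python
from collections import deque

MAX_TURNS = 20  # assumed cap; tune to taste
history = deque(maxlen=MAX_TURNS)  # holds (user_msg, response) pairs

def remember(user_msg: str, response: str) -> None:
    """Record one exchange; the oldest turn is evicted once the cap is hit."""
    history.append((user_msg, response))

def context_prompt() -> str:
    """Flatten the retained history into a prompt prefix."""
    return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
```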
## 🔜 Future Improvements
- [ ] Add support for streaming responses
- [ ] Implement agent personality customization
- [ ] Add websocket support for real-time communication
- [ ] Enhance conversation memory management
- [ ] Add support for more LLM providers
- [ ] Implement response caching

## 📞 Support
For support, please open an issue in the [GitHub repository](https://github.com/KazKozDev/murmur/issues) or contact the maintainers.
## 🙏 Acknowledgments
- Thanks to the Ollama team for their local LLM server
- Inspired by multi-agent architectures in AI systems