{"id":22703791,"url":"https://github.com/kazkozdev/murmur","last_synced_at":"2025-04-12T10:13:25.252Z","repository":{"id":247945550,"uuid":"827285845","full_name":"KazKozDev/murmur","owner":"KazKozDev","description":"🔄 Sophisticated multi-agent LLM system orchestrating specialized AI agents for high-accuracy processing. Integrates Interpreter, Reasoner, Generator, and Critic agents using Gemma, Mistral and Llama models.","archived":false,"fork":false,"pushed_at":"2024-11-10T15:27:41.000Z","size":2079,"stargazers_count":3,"open_issues_count":0,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-12T10:13:15.509Z","etag":null,"topics":["agents","ai","aiohttp","artificial-intelligence","async-python","large-language-models","llm","local","local-llm","machine-learning","multi-agent-systems","nlp","ollama","orchestrator","python"],"latest_commit_sha":null,"homepage":"https://ollama.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/KazKozDev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-07-11T10:59:11.000Z","updated_at":"2025-03-19T01:40:13.000Z","dependencies_parsed_at":"2024-07-11T13:24:45.432Z","dependency_job_id":"49c5103b-cb82-4553-9475-e89bdd1b2603","html_url":"https://github.com/KazKozDev/murmur","commit_stats":null,"previous_names":["kazkozdev/murmur"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/KazKozDev%2Fmurmur","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repo
sitories/KazKozDev%2Fmurmur/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/KazKozDev%2Fmurmur/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/KazKozDev%2Fmurmur/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/KazKozDev","download_url":"https://codeload.github.com/KazKozDev/murmur/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248550633,"owners_count":21122934,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","aiohttp","artificial-intelligence","async-python","large-language-models","llm","local","local-llm","machine-learning","multi-agent-systems","nlp","ollama","orchestrator","python"],"created_at":"2024-12-10T08:12:41.900Z","updated_at":"2025-04-12T10:13:25.223Z","avatar_url":"https://github.com/KazKozDev.png","language":"Python","readme":"# Murmur\n![Murmur](https://raw.githubusercontent.com/KazKozDev/murmur/main/murmur-banner.png)\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Python 3.7+](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)\n[![GitHub issues](https://img.shields.io/github/issues/KazKozDev/murmur)](https://github.com/KazKozDev/murmur/issues)\n[![GitHub stars](https://img.shields.io/github/stars/KazKozDev/murmur)](https://github.com/KazKozDev/murmur/stargazers)\n[![GitHub 
forks](https://img.shields.io/github/forks/KazKozDev/murmur)](https://github.com/KazKozDev/murmur/network)\n[![GitHub pull requests](https://img.shields.io/github/issues-pr/KazKozDev/murmur)](https://github.com/KazKozDev/murmur/pulls)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![aiohttp](https://img.shields.io/badge/aiohttp-3.8+-blue.svg)](https://docs.aiohttp.org/)\n\nA sophisticated multi-agent system that orchestrates specialized AI agents using local LLM models to process and respond to user queries. The system implements a pipeline of Interpreter, Reasoner, Generator, and Critic agents to provide well-thought-out and refined responses.\n\n## 🌟 Features\n\n- **Multi-Agent Architecture**: Four specialized agents working in concert:\n  - **Interpreter**: Analyzes user intent and context\n  - **Reasoner**: Develops a logical approach to the problem\n  - **Generator**: Creates initial responses\n  - **Critic**: Reviews and refines generated content\n\n- **Local LLM Integration**: Works with locally hosted language models through a REST API\n- **Asynchronous Processing**: Built with `asyncio` for efficient concurrent operations\n- **Robust Error Handling**: Comprehensive error management with retries and graceful fallbacks\n- **Conversation Memory**: Maintains context through conversation history\n- **Confidence Scoring**: Evaluates response quality with multiple metrics\n\n## 🔧 Prerequisites\n\n- Python 3.7+\n- Local LLM server (compatible with the Ollama API)\n- Required Python packages:\n  ```\n  aiohttp\n  ```\n  (`asyncio` is part of the Python standard library and needs no separate install.)\n\n## 🚀 Installation\n\n1. Clone the repository:\n```bash\ngit clone https://github.com/KazKozDev/murmur.git\ncd murmur\n```\n\n2. Install dependencies:\n```bash\npip install -r requirements.txt\n```\n\n3. Ensure your local LLM server is running (default: http://localhost:11434)\n\n## 💻 Usage\n\n1. 
Navigate to the project directory and run:\n```bash\ncd murmur\npython src/main.py\n```\n\nOr navigate directly to the source directory:\n```bash\ncd murmur/src\npython main.py\n```\n\n2. Enter your queries when prompted. Type 'quit' to exit.\n\nExample interaction:\n```text\nEnter your message: What is the capital of France?\n\nResponse: The capital of France is Paris.\nConfidence: 0.95\n```\n\n## 🏗️ Architecture\n\nThe system follows a pipeline architecture:\n\n1. **User Input** → **Interpreter Agent**\n   - Analyzes core intent and context\n   - Identifies implicit requirements\n\n2. **Interpreted Message** → **Reasoner Agent**\n   - Breaks down the problem\n   - Develops a logical approach\n\n3. **Reasoning** → **Generator Agent**\n   - Creates initial response\n   - Structures content clearly\n\n4. **Generated Content** → **Critic Agent**\n   - Reviews for accuracy and completeness\n   - Suggests improvements\n   - Produces final version\n\n## ⚙️ Configuration\n\nThe system uses the following default models:\n- Interpreter: mistral-nemo:latest\n- Reasoner: llama3.2-vision:11b\n- Generator: gemma2:9b\n- Critic: llama3.2-vision:11b\n\nModels can be configured by modifying the `AgentOrchestrator` initialization.\n\n## 🔐 Error Handling\n\nThe system implements multiple layers of error handling:\n- Connection retries (max 3 attempts)\n- Timeout management\n- Graceful degradation\n- Comprehensive error logging\n\n## 🤝 Contributing\n\n1. Fork the repository (https://github.com/KazKozDev/murmur/fork)\n2. Create your feature branch (`git checkout -b feature/amazing-feature`)\n3. Commit your changes (`git commit -m 'Add some amazing feature'`)\n4. Push to the branch (`git push origin feature/amazing-feature`)\n5. 
Open a Pull Request\n\n## 📝 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## 🪲 Known Issues\n\n- High CPU usage with multiple concurrent requests\n- Memory consumption may increase with long conversations\n- Some LLM models may require significant local resources\n\n## 🔜 Future Improvements\n\n- [ ] Add support for streaming responses\n- [ ] Implement agent personality customization\n- [ ] Add websocket support for real-time communication\n- [ ] Enhance conversation memory management\n- [ ] Add support for more LLM providers\n- [ ] Implement response caching\n\n## 📞 Support\n\nFor support, please open an issue in the [GitHub repository](https://github.com/KazKozDev/murmur/issues) or contact the maintainers.\n\n## 🙏 Acknowledgments\n\n- Thanks to the Ollama team for their local LLM server\n- Inspired by multi-agent architectures in AI systems","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkazkozdev%2Fmurmur","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkazkozdev%2Fmurmur","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkazkozdev%2Fmurmur/lists"}