# LLMChat 🎉

LLMChat is a minimalist application designed to test and interact with your LLM in a user-friendly way. Seamlessly integrate local and GitHub-based knowledge sources to enhance your AI's contextual capabilities. 🌟

[[🎥 Demonstration]](./media/raglight_chat.mov)

---

## Features 🚀

- **Interactive Interface:** Use LLMChat like ChatGPT, but tailored to your specific knowledge base. 💬
- **Custom Knowledge Sources:** Link local folders and GitHub repositories to create a dynamic, up-to-date context for your LLM. 📂
- **Privacy-Friendly:** Runs locally, ensuring complete control over your data. 🔒
---

## Installation ⚙️

### Docker Usage 🐳

To simplify deployment, you can use Docker Compose to run both the frontend and the backend.

#### Prerequisites

- Install Docker and Docker Compose.
- Ensure that Ollama is running locally on your machine and accessible at `http://localhost:11434` (the default configuration).

#### Build and Run with Docker Compose

- Clone the repository:

```bash
git clone https://github.com/Bessouat40/LLMChat.git
cd LLMChat
```

- Start the application with Docker Compose:

```bash
docker-compose up --build
```

The application will be accessible at:

- Frontend: `http://localhost:3000`
- Backend API: `http://localhost:8000`

### Manual Installation

1. Install dependencies and start the backend:

```bash
python -m pip install -r api_example/requirements.txt
python api_example/main.py
```

2. From the project root, install dependencies and start the frontend:

```bash
npm i && npm run start
```

## How It Works 🤔

LLMChat leverages [RAGLight](https://github.com/Bessouat40/RAGLight) to index and process knowledge bases, making them available for your LLM to query. It supports:

- GitHub repositories 🧑‍💻
- Local folders containing PDFs, code, and more 📄

### Example Usage 📜

- Setting up a pipeline:

```python
from raglight.rag.simple_rag_api import RAGPipeline
from raglight.models.data_source_model import FolderSource, GitHubSource

pipeline = RAGPipeline(knowledge_base=[
    FolderSource(path="<path to folder>/knowledge_base"),
    GitHubSource(url="https://github.com/Bessouat40/RAGLight")
], model_name="llama3")

pipeline.build()
response = pipeline.generate("What is LLMChat and how does it work?")
print(response)
```

### API Example 🖥️

You can find an API example in the `api_example/main.py` file. It shows how the backend handles requests and interacts with the LLM.

🚀 Get started with LLMChat today and enhance your LLM with custom knowledge bases!
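### Appendix: Compose Layout Sketch 🧩

The repository's own `docker-compose.yml` is the source of truth; the fragment below is only a rough illustration of the Docker setup described in the Installation section. The service names, build contexts, and the `OLLAMA_HOST` variable are assumptions, not the project's actual configuration. The one non-obvious detail it highlights is real: from inside a container, `localhost` does not refer to your machine, so a containerized backend must reach the host's Ollama via `host.docker.internal`.

```yaml
# Hypothetical sketch -- not the repository's actual docker-compose.yml.
services:
  backend:
    build: ./api_example # assumed build context
    ports:
      - '8000:8000' # Backend API port from the README
    environment:
      # Inside a container, "localhost" is the container itself;
      # host.docker.internal resolves to the host machine running Ollama.
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      - 'host.docker.internal:host-gateway' # required on Linux engines

  frontend:
    build: . # assumed build context
    ports:
      - '3000:3000' # Frontend port from the README
    depends_on:
      - backend
```

On Docker Desktop (macOS/Windows), `host.docker.internal` works out of the box; on Linux, the `extra_hosts` entry with `host-gateway` provides the same mapping.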