https://github.com/bessouat40/llmchat
Minimal chat frontend to test your LLM and visualize your conversations with it.
- Host: GitHub
- URL: https://github.com/bessouat40/llmchat
- Owner: Bessouat40
- Created: 2024-12-23T08:18:58.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-02-17T22:19:09.000Z (3 months ago)
- Last Synced: 2025-02-17T23:25:07.430Z (3 months ago)
- Topics: artificial-intelligence, chat-app, chat-application, chatgpt, llm, ollama, ollama-rag, ollama-ui, rag, retrieval-augmented-generation
- Language: TypeScript
- Homepage:
- Size: 13.5 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# LLMChat 🎉
LLMChat is a minimalist application designed to test and interact with your LLM in a user-friendly way. Seamlessly integrate local and GitHub-based knowledge to enhance your AI's contextual capabilities. 🌟
[🎥 Demonstration](./media/raglight_chat.mov)
---
## Features 🚀
- **Interactive Interface:** Use LLMChat like ChatGPT but tailored to your specific knowledge base. 💬
- **Custom Knowledge Sources:** Link local folders and GitHub repositories to create a dynamic, up-to-date context for your LLM. 📂
- **Privacy-Friendly:** Runs locally, ensuring complete control over your data. 🔒

---
## Installation ⚙️
### Docker Usage 🐳
To simplify deployment, you can use Docker Compose to run both the frontend and backend.
#### Prerequisites
- Install Docker and Docker Compose.
- Ensure that Ollama is running locally on your machine and accessible at `http://localhost:11434` (the default configuration); a quick reachability check is sketched below.
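To confirm Ollama is reachable before building the containers, you can run a small check like the following. This is a minimal Python sketch that calls Ollama's standard model-listing endpoint (`GET /api/tags`); the script is illustrative and not part of LLMChat.

```python
# Minimal reachability check for a local Ollama server (illustrative, not part of LLMChat).
# Assumes Ollama's standard model-listing endpoint: GET /api/tags.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/tags"  # default address from the prerequisites above

try:
    with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
        models = json.load(resp).get("models", [])
        print(f"Ollama is running with {len(models)} local model(s).")
except OSError as exc:
    print(f"Ollama does not appear to be reachable: {exc}")
```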
#### Build and Run with Docker Compose

- Clone the repository:
```bash
git clone https://github.com/Bessouat40/LLMChat.git
cd LLMChat
```

- Start the application with Docker Compose:
```bash
docker-compose up --build
```

The application will be accessible at:
- Frontend: `http://localhost:3000`
- Backend API: `http://localhost:8000`

### Manual Installation
1. Install dependencies and start the backend:
```bash
python -m pip install -r api_example/requirements.txt
python api_example/main.py
```

2. Install dependencies and start the frontend:
```bash
npm i && npm run start
```

## How It Works 🤔
LLMChat leverages [RAGLight](https://github.com/Bessouat40/RAGLight) to index and process knowledge bases, making them available for your LLM to query. It supports:
- GitHub repositories 🧑💻
- Local folders with PDFs, code, and more 📄

### Example Usage 📜
- Setting Up a Pipeline:
```python
from raglight.rag.simple_rag_api import RAGPipeline
from raglight.models.data_source_model import FolderSource, GitHubSource

# Index a local folder and a GitHub repository as knowledge sources
pipeline = RAGPipeline(knowledge_base=[
FolderSource(path="/knowledge_base"),
GitHubSource(url="https://github.com/Bessouat40/RAGLight")
], model_name="llama3")

# Build the index, then query it through the configured model
pipeline.build()
response = pipeline.generate("What is LLMChat and how does it work?")
print(response)
```

### API Example 🖥️
You can find an API example in the `api_example/main.py` file. This shows how the backend handles requests and interacts with the LLM.
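As a rough orientation, the sketch below shows what such a backend endpoint might look like, assuming FastAPI and the RAGLight pipeline from the example above. The `/chat` route, the request model, and the field names are assumptions made for illustration; refer to `api_example/main.py` for the repository's actual implementation.

```python
# Illustrative backend sketch (assumes FastAPI + RAGLight); not the repository's actual code.
from fastapi import FastAPI
from pydantic import BaseModel

from raglight.rag.simple_rag_api import RAGPipeline
from raglight.models.data_source_model import FolderSource

app = FastAPI()

# Build the pipeline once at startup so every request reuses the same index.
pipeline = RAGPipeline(
    knowledge_base=[FolderSource(path="/knowledge_base")],
    model_name="llama3",
)
pipeline.build()


class ChatRequest(BaseModel):
    message: str  # the user's prompt, sent by the frontend


@app.post("/chat")  # hypothetical route name
def chat(request: ChatRequest):
    # Query the indexed knowledge base through the local LLM (Ollama)
    answer = pipeline.generate(request.message)
    return {"response": answer}


if __name__ == "__main__":
    import uvicorn

    # The backend is served on port 8000, as noted in the Docker section above.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

With something like this running, the frontend (or `curl`) can POST a JSON body such as `{"message": "What is LLMChat?"}` to `http://localhost:8000/chat` and read the answer from the `response` field.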
🚀 Get started with LLMChat today and enhance your LLM with custom knowledge bases!