https://github.com/mcaimi/llm-frontend
A Frontend Application to the RAG Demo
- Host: GitHub
- URL: https://github.com/mcaimi/llm-frontend
- Owner: mcaimi
- License: gpl-3.0
- Created: 2024-09-24T14:25:10.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-02-18T16:12:06.000Z (4 months ago)
- Last Synced: 2025-02-18T17:24:51.640Z (4 months ago)
- Topics: chromadb, demo-app, langchain-python, learning-by-doing, llm-inference, mistral-7b, ollama-client, rag, wip
- Language: Python
- Homepage:
- Size: 140 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Retrieval Augmented Web Frontend
This is a RAG web application built with Gradio and FastAPI.
It lets you chat with an LLM while augmenting responses with precise data fetched from a vector database.

Technologies used:
- ChromaDB as the Vector Store
- LangChain with community integrations
- Ollama
- Mistral LLM
- Gradio
- Python 3.12
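
The core idea behind the stack above is the retrieval step: documents live in the vector store as embeddings, the user's question is embedded the same way, and the closest matches are prepended to the LLM prompt as context. The following is a minimal stdlib-only sketch of that flow; the toy embeddings, `retrieve`, and `build_prompt` helpers are illustrative stand-ins, not the actual ChromaDB/LangChain API used by this app.

```python
# Hypothetical sketch of RAG retrieval and prompt augmentation.
# A real deployment would use ChromaDB for storage, an embedding
# model served via Ollama, and Mistral for generation.
import math

def cosine_similarity(a, b):
    # similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, store, top_k=2):
    # store: list of (text, embedding) pairs, as a vector DB would hold them
    ranked = sorted(store, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

def build_prompt(question, context_docs):
    # augment the user question with the retrieved context
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# toy 3-dimensional embeddings; a real app would call an embedding model
store = [
    ("ChromaDB stores document embeddings.", [1.0, 0.1, 0.0]),
    ("Gradio renders the chat interface.", [0.0, 1.0, 0.2]),
    ("Mistral generates the final answer.", [0.1, 0.0, 1.0]),
]

docs = retrieve([0.9, 0.2, 0.1], store, top_k=1)
print(build_prompt("Where are embeddings stored?", docs))
```

The augmented prompt is then what actually reaches the LLM, which is why retrieval quality dominates answer quality in a RAG setup.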
## Run Locally
The application runs locally by launching it via the `fastapi` cli command:
```bash
# development mode
$ fastapi dev

# production mode
$ fastapi run
```