https://github.com/indiecodermm/loc-gist
Lightweight AI agent to chat with documents using a local RAG pipeline
- Host: GitHub
- URL: https://github.com/indiecodermm/loc-gist
- Owner: IndieCoderMM
- Created: 2025-06-09T19:40:47.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-08-14T17:54:12.000Z (5 months ago)
- Last Synced: 2025-08-14T19:34:46.642Z (5 months ago)
- Topics: chromadb, langchain, ollama, rag-chatbot, ttkbootstrap
- Language: Python
- Homepage:
- Size: 123 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# LocGist
**LocGist** is an offline AI tool designed to help you quickly summarize and extract key information from your documents. It uses the Qwen 3 model, served locally through Ollama, to provide fast and efficient document processing without relying on external APIs.
- [Tutorial](https://www.freecodecamp.org/news/build-a-local-ai/)

## Features
- 🛡️ Privacy: No data leaves your system.
- 💵 No Cost: Run locally without API fees.
- 🖥️ Offline Capability: Process documents without internet access.
- ⚙️ Customization: Swap in any LLM or embedding model and adapt the pipeline to your needs (see the sketch below).
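
As a quick illustration of that last point, here is a minimal sketch of swapping models with `langchain-ollama` (installed in the Python setup below). The model tags are examples, not requirements:

```python
# Minimal sketch: swap models by changing a tag (examples, not requirements).
from langchain_ollama import ChatOllama, OllamaEmbeddings

# Any chat model pulled with `ollama pull` works here.
llm = ChatOllama(model="qwen3:4b")

# The embedding model backing the vector store is just as swappable.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
```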
## Getting Started
### Ollama Setup 🦙
**1. Install Ollama**
- Windows: Download the installer from the Ollama website: https://ollama.com/download
- Linux/Mac: Open a terminal and run `curl -fsSL https://ollama.com/install.sh | sh`
**2. Verify Ollama Installation**
- Open a new terminal window and run: `ollama --version`
**3. Choose Your Qwen 3 Model**
- Select a Qwen 3 model (e.g., qwen3:8b, qwen3:4b, qwen3:30b-a3b) based on your intended task and available hardware resources
- Consider the model's size, performance, and reasoning capabilities
**4. Pull and Run Qwen 3**
- Pull the chosen Qwen 3 model with: `ollama pull <model>` (e.g., `ollama pull qwen3:8b`)
- *Interactive Mode*: `ollama run <model>`
- *Server Mode*:
  - Start the Ollama server with: `ollama serve`
  - Access the model via API at `http://localhost:11434`
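
With the server running, you can sanity-check it from Python before wiring up the full app. This is a minimal sketch using only the standard library; the model tag and prompt are placeholders:

```python
# Minimal sketch: query the local Ollama server's /api/generate endpoint.
# The model tag and prompt are placeholders.
import json
import urllib.request

payload = json.dumps({
    "model": "qwen3:8b",
    "prompt": "Summarize: Ollama serves local LLMs over HTTP.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```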
### Python Setup 🐍
1. Create a virtual environment to manage dependencies
```bash
python -m venv .venv
```
2. Activate the environment
```bash
source .venv/bin/activate  # Linux/Mac
.venv\Scripts\activate     # Windows
```
3. Install necessary libraries using pip
```bash
pip install langchain langchain-community langchain-core langchain-ollama chromadb pypdf ttkbootstrap
```
4. Run the app
```bash
python -m loc_gist
```
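
### How It Works 🧩

For a feel of what happens under the hood, here is a minimal sketch of a local RAG loop built from the libraries installed above. It is an illustration under stated assumptions, not LocGist's actual implementation; the PDF path, model tags, and question are placeholders:

```python
# Minimal local RAG sketch using the libraries installed above.
# Illustration only, not LocGist's actual code; the PDF path,
# model tags, and question are placeholders.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Chroma
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load the PDF and split it into overlapping chunks.
pages = PyPDFLoader("docs/report.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(pages)

# 2. Embed the chunks into a local Chroma vector store.
store = Chroma.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))

# 3. Retrieve the chunks most relevant to the question.
question = "What are the key findings?"
docs = store.similarity_search(question, k=4)
context = "\n\n".join(d.page_content for d in docs)

# 4. Ask the local LLM to answer from the retrieved context only.
llm = ChatOllama(model="qwen3:8b")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```

Chunk size, overlap, and the number of retrieved chunks (`k`) are tuning knobs, and any model pulled through Ollama can be dropped in; this is what the Customization feature above refers to.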