https://github.com/ksm26/dr-x-nlp-pipeline
A fully offline NLP pipeline for extracting, chunking, embedding, querying, summarizing, and translating research documents using local LLMs. Inspired by the fictional mystery of Dr. X, the system supports multi-format files, local RAG-based Q&A, Arabic translation, and ROUGE-based summarization, all without cloud dependencies.
- Host: GitHub
- URL: https://github.com/ksm26/dr-x-nlp-pipeline
- Owner: ksm26
- Created: 2025-04-21T09:34:19.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-04-21T09:48:35.000Z (6 months ago)
- Last Synced: 2025-04-23T16:16:53.601Z (6 months ago)
- Topics: chromadb, document-analysis, llm, local-llm, modular-ai, multilingual-ai-model, nlp, offlineai, ollama, opensource-ai, rag, textsummarization
- Language: Python
- Homepage: https://github.com/ksm26/dr-x-nlp-pipeline
- Size: 9.92 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 🧠 The Enigmatic Research of Dr. X – NLP Pipeline (Local LLMs)
This project is a full-featured NLP pipeline designed to analyze the research documents left behind by **Dr. X**, a fictional scientist who vanished under mysterious circumstances. The goal is to extract, summarize, understand, and translate his research using **local, offline NLP tools**; no internet or cloud APIs are required.
---
## 🚀 Features
- ✅ Multi-format file ingestion (`.pdf`, `.docx`, `.csv`, `.xlsx`, `.xls`, `.xlsm`)
- ✅ Token-based chunking with metadata (filename, page, chunk number)
- ✅ Local vector search using `ChromaDB`
- ✅ RAG Q&A system powered by **local LLaMA (via Ollama)**
- ✅ Automatic translation of English answers to **Arabic**
- ✅ Local summarization of full documents
- ✅ ROUGE metric evaluation
- ✅ Performance logging (tokens/sec for all major components)
- ✅ Fully modular & offline

---
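The token-based chunking listed above can be sketched as follows. This is a minimal illustration, not the repo's actual API: a whitespace tokenizer stands in for `tiktoken`'s `cl100k_base` encoding, and the `chunk_text` name and parameters are assumptions. Only the metadata keys (filename, page, chunk number) come from the feature list.

```python
def chunk_text(text, filename, page, max_tokens=50, overlap=10):
    """Split text into overlapping token windows, attaching source metadata.

    Stand-in tokenizer: str.split(); the real pipeline would use
    tiktoken.get_encoding("cl100k_base").encode(text) instead.
    """
    tokens = text.split()
    chunks = []
    step = max_tokens - overlap  # advance by window size minus overlap
    for i, start in enumerate(range(0, len(tokens), step)):
        window = tokens[start:start + max_tokens]
        chunks.append({
            "text": " ".join(window),
            "metadata": {"filename": filename, "page": page, "chunk": i},
        })
        if start + max_tokens >= len(tokens):  # last window reached the end
            break
    return chunks
```

Overlapping windows keep sentences that straddle a chunk boundary retrievable from both neighboring chunks.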
## 🧱 Architecture

```
├── file_reader.py          # Extracts text & tables from all formats
├── chunker.py              # Tokenizes and chunks text with cl100k_base
├── embedding_pipeline.py   # Embeds chunks and stores in ChromaDB
├── rag_qa_system.py        # Runs Q&A retrieval + local LLaMA generation
├── translation_utils.py    # Translates answers to Arabic (offline)
├── summarizer.py           # Summarizes files + evaluates with ROUGE
├── requirements.txt        # All dependencies
└── 📁 files/               # All input files (.pdf, .docx, .csv, etc.)
```

---
## 🔧 Tech Stack

| Component         | Tool/Library                          |
|-------------------|---------------------------------------|
| **LLM (local)**   | `Ollama` (e.g. `llama2`, `tinyllama`) |
| **Embedding**     | `sentence-transformers` (`MiniLM`)    |
| **Vector DB**     | `ChromaDB` (`PersistentClient`)       |
| **Translation**   | `argos-translate` (EN → AR)           |
| **Summarization** | `Falconsai/text_summarization`        |
| **Metrics**       | `tiktoken`, `rouge-score`, `time`     |

---
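The Metrics row lists `rouge-score`, which the pipeline uses for summary evaluation. As a sketch of what that metric measures, ROUGE-1 recall reduces to unigram overlap; the hand-rolled helper below is purely illustrative and is not the repo's code (the real pipeline would call the `rouge-score` package).

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """Fraction of reference unigrams that also appear in the candidate."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped counts: each candidate word can only match as many times
    # as it occurs in the candidate.
    overlap = sum(min(count, cand[word]) for word, count in ref.items())
    return overlap / max(sum(ref.values()), 1)
```

For example, `rouge1_recall("the cat sat", "the cat ran")` scores 2/3, since two of the three reference words appear in the candidate.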
## 💡 How It Works
1. **Extract** text + tables from PDFs, Word, and Excel files.
2. **Chunk** the text based on tokens (cl100k_base).
3. **Embed** chunks using MiniLM and store them in a local ChromaDB.
4. **Ask Questions** via a CLI; the system retrieves relevant chunks and generates an answer using LLaMA.
5. **Translate** the answer into Arabic.
6. **Summarize** full documents and measure summary quality with ROUGE.

---
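The retrieval in step 4 can be illustrated with a toy cosine-similarity search. The real pipeline stores MiniLM embeddings in ChromaDB, so the tiny hand-made 3-dimensional vectors, the `retrieve` helper, and the sample store below are all illustrative assumptions, not the repo's implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    """Return the top_k chunk texts ranked by cosine similarity."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["embedding"]),
                    reverse=True)
    return [item["text"] for item in ranked[:top_k]]

# Toy "vector store": in the real pipeline these would be MiniLM embeddings.
store = [
    {"text": "zero-point energy notes", "embedding": [0.9, 0.1, 0.0]},
    {"text": "lab inventory list",      "embedding": [0.0, 0.2, 0.9]},
    {"text": "resonance experiments",   "embedding": [0.8, 0.3, 0.1]},
]
```

The retrieved chunks would then be stitched into the LLaMA prompt as context for answer generation.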
## 🧪 Example: CLI Output
```bash
❓ Ask a question about Dr. X's documents:
> What was his last known research?

💬 English Answer:
Dr. X's final study focused on zero-point energy manipulation using ancient resonance systems.

🗣️ Arabic Translation:
ركزت الدراسة الأخيرة للدكتور إكس على التلاعب بطاقة النقطة الصفرية باستخدام أنظمة الرنين القديمة.
```

## 📊 Performance Metrics
| Task           | Tokens | Time    | TPS      |
|----------------|--------|---------|----------|
| Embedding      | 1,200  | 1.8 sec | ~666 TPS |
| RAG Generation | 620    | 1.2 sec | ~516 TPS |
| Summarization  | 1,500  | 3.0 sec | ~500 TPS |

## Supported Formats
- ✅ PDF (.pdf)
- ✅ Word (.docx)
- ✅ Excel (.xlsx, .xls, .xlsm)
- ✅ CSV (.csv)
- ✅ Multi-sheet support with pandas

## 🛠️ Setup Instructions
### Install Requirements
```bash
pip install -r requirements.txt
```

### Setup Ollama
- Install Ollama: https://ollama.com/download
```bash
ollama pull tinyllama
```

### Run Embedding
```bash
python embedding_pipeline.py
```

### Ask Questions (RAG + Arabic)
```bash
python rag_qa_system.py
```

### Summarize a Document
```bash
python summarizer.py
```

## ✅ Evaluation Criteria Coverage
- ✅ Executes correctly across all modules
- ✅ Efficient + logs tokens/sec
- ✅ Translates and summarizes with high fluency
- ✅ Handles all required file formats
- ✅ Uses appropriate local LLMs and vector DB
- ✅ Clean code, modular design, creative solution
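The tokens/sec logging checked off above (and reported in the Performance Metrics table) boils down to dividing a token count by wall-clock time. The helper below is a minimal sketch under assumptions: a whitespace token count stands in for a `tiktoken` encoding, and the `log_tps` name and signature are illustrative, not the repo's actual API.

```python
import time

def log_tps(task_name, text, work):
    """Run `work(text)`, then report tokens processed per second."""
    tokens = len(text.split())  # stand-in for len(encoding.encode(text))
    start = time.perf_counter()
    result = work(text)
    elapsed = time.perf_counter() - start
    tps = tokens / elapsed if elapsed > 0 else float("inf")
    print(f"{task_name}: {tokens} tokens in {elapsed:.2f}s (~{tps:.0f} TPS)")
    return result
```

Wrapping each stage (embedding, generation, summarization) this way yields the per-component TPS figures shown in the metrics table.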