https://github.com/anubis-labs/pathrag-system
PathRAG System - A Path-based Retrieval-Augmented Generation implementation with knowledge graph visualization and Ollama integration for enhanced question answering.
- Host: GitHub
- URL: https://github.com/anubis-labs/pathrag-system
- Owner: Anubis-Labs
- Created: 2025-03-03T08:59:24.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-03-03T09:06:00.000Z (7 months ago)
- Last Synced: 2025-03-14T05:25:39.040Z (7 months ago)
- Topics: docker, fastapi, knowledge-graph, llm, ollama, path-based-retrieval, rag, react, retrieval-augmented-generation, weaviate
- Language: JavaScript
- Homepage:
- Size: 1.84 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# PathRAG - Path-based Retrieval Augmented Generation
PathRAG is an implementation of the paper ["PathRAG: Pruning Graph-based Retrieval Augmented Generation with Relational Paths"](https://arxiv.org/abs/2502.14902). This system improves retrieval-augmented generation by extracting and utilizing key relational paths from an indexing graph.
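The core idea, enumerating relational paths between query-relevant nodes and pruning all but the highest-scoring ones, can be illustrated with a minimal sketch. The toy graph, the hop limit, and the decay-based scoring below are illustrative assumptions, not the repository's implementation:

```python
from itertools import combinations

# Toy knowledge graph as an adjacency map (illustrative only).
GRAPH = {
    "insulin": ["pancreas", "diabetes"],
    "pancreas": ["insulin", "beta_cells"],
    "beta_cells": ["pancreas", "diabetes"],
    "diabetes": ["insulin", "beta_cells", "metformin"],
    "metformin": ["diabetes"],
}

def simple_paths(start, end, max_hops=3):
    """Enumerate loop-free paths between two nodes, up to max_hops edges."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end and len(path) > 1:
            yield path
            continue
        if len(path) > max_hops:
            continue
        for nxt in GRAPH.get(node, []):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))

def prune_paths(nodes, top_k=2, decay=0.8):
    """Score every path between retrieved node pairs: shorter paths
    (fewer hops) score higher via exponential decay; keep the best top_k."""
    scored = []
    for a, b in combinations(nodes, 2):
        for path in simple_paths(a, b):
            scored.append((decay ** (len(path) - 1), path))
    scored.sort(key=lambda s: -s[0])
    return scored[:top_k]

# Nodes assumed relevant to the query "how does insulin relate to diabetes?"
best = prune_paths(["insulin", "diabetes"])
for score, path in best:
    print(round(score, 2), " -> ".join(path))
```

The surviving paths (here, the direct edge plus one longer route through `pancreas` and `beta_cells`) are what gets handed to the LLM as structured evidence.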
## System Overview
The application consists of four main components running in Docker containers:
1. **Frontend** (React + Vite)
   - Modern UI built with Material-UI
   - Interactive knowledge graph visualization
   - File upload and query interface
   - Real-time model selection and results display
2. **Backend** (FastAPI)
   - RESTful API endpoints for data processing
   - Knowledge graph construction
   - Path-based retrieval implementation
   - Integration with Weaviate and Ollama
3. **Weaviate** (Vector Database)
   - Stores and indexes knowledge graph nodes
   - Enables semantic similarity search
   - Manages relationships between entities
4. **Ollama** (LLM Service)
   - Local LLM deployment
   - Handles query processing
   - Generates natural language responses

## Features
- **Document Upload**
  - Support for PDF, TXT, and DOCX files
  - Raw text input option
  - Multiple file upload capability
  - Progress tracking
- **Knowledge Base Management**
  - Create and manage multiple knowledge bases
  - Add descriptions and metadata
  - Organize documents by topic or domain
- **Knowledge Graph Visualization**
  - Interactive graph display
  - Node and relationship exploration
  - Path visualization for query results
- **Query Interface**
  - Natural language querying
  - Model selection dropdown
  - Query history tracking
  - Detailed response visualization

## Installation
1. **Prerequisites**
   - Docker and Docker Compose
   - Git
   - 8GB+ RAM recommended
   - NVIDIA GPU (optional, for improved LLM performance)
2. **Clone the Repository**
   ```bash
   git clone https://github.com/your-username/PathRAG.git
   cd PathRAG
   ```
3. **Environment Setup**
   ```bash
   # Start all services
   docker-compose up -d
   ```
4. **Access the Application**
   - Frontend: http://localhost
   - Backend API: http://localhost:8000
   - Weaviate Console: http://localhost:8080
   - Ollama API: http://localhost:11434

## Architecture
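With the stack running, each of the four services can be smoke-tested from the host. This is an illustrative check using only the Python standard library; the URLs come from the access list above, and the expectation that every service answers plain HTTP on its root path is an assumption:

```python
import urllib.request
import urllib.error

# Service endpoints as published by docker-compose (from the access list above).
SERVICES = {
    "frontend": "http://localhost",
    "backend": "http://localhost:8000",
    "weaviate": "http://localhost:8080",
    "ollama": "http://localhost:11434",
}

def is_up(url, timeout=1):
    """Return True if the URL answers any HTTP response at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # an HTTP error (404 etc.) still means the server is up
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, timeout

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name:10s} {'up' if is_up(url) else 'DOWN'}  ({url})")
```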
### Frontend (Port 80)
- Vite + React application
- Material-UI components
- Nginx for static file serving and API proxying
- Environment configuration via VITE_API_URL

### Backend (Port 8000)
- FastAPI application
- Uvicorn ASGI server
- File processing and KG construction
- Path-based retrieval implementation

### Weaviate (Port 8080)
- Vector database for KG storage
- RESTful and GraphQL APIs
- Configurable vectorizer modules
- Persistent data storage

### Ollama (Port 11434)
- Local LLM service
- Multiple model support
- REST API for inference
- Model management

## API Endpoints
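The endpoints in this section can be driven by any HTTP client. Below is a minimal request-building sketch using the standard library; the payload field names (`query`, `model`, `kb_id`) and the default model name are assumptions about the backend, not documented contracts, so verify them against the FastAPI interactive docs at `http://localhost:8000/docs`:

```python
import json
import urllib.request

API = "http://localhost:8000"

def build_query_request(question, model="llama3", kb_id=None):
    """Build a POST /query/query request. Field names are assumptions;
    check them against the backend's OpenAPI schema at /docs."""
    payload = {"query": question, "model": model}
    if kb_id is not None:
        payload["kb_id"] = kb_id
    return urllib.request.Request(
        f"{API}/query/query",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("How does insulin relate to diabetes?")
print(req.get_method(), req.full_url)
# Sending is left to the caller once the stack is running, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```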
### Upload API
- `POST /upload/files` - Upload documents
- `POST /upload/knowledge_base` - Create knowledge base
- `GET /query/kbs` - List knowledge bases

### Query API
- `GET /query/models` - List available models
- `POST /query/query` - Execute queries
- `GET /query/graph` - Retrieve graph data

## Configuration
### Docker Compose
```yaml
version: '3.8'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    environment:
      - VITE_API_URL=  # set to the backend URL
  backend:
    build: ./backend
    expose:
      - "8000"
    environment:
      - WEAVIATE_URL=http://weaviate:8080
      - OLLAMA_URL=http://ollama:11434
  weaviate:
    image: semitechnologies/weaviate:1.24.1
    environment:
      - QUERY_DEFAULTS_LIMIT=20
      - AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama
volumes:
  ollama-models:  # named volumes must be declared at the top level
```

## Development
### Frontend Development
```bash
cd frontend
npm install
npm run dev
```

### Backend Development
```bash
cd backend
pip install -r requirements.txt
uvicorn main:app --reload
```

## Contributing
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request

## License
[Add your license information here]
## References
- [PathRAG Paper](https://arxiv.org/abs/2502.14902)
- [Original PathRAG Repository](https://github.com/BUPT-GAMMA/PathRAG)