https://github.com/fahdmirza/doclingwithollama
Docling with Ollama - RAG on Local Files with Local Models
- Host: GitHub
- URL: https://github.com/fahdmirza/doclingwithollama
- Owner: fahdmirza
- License: apache-2.0
- Created: 2024-12-30T22:09:06.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-01-01T21:48:02.000Z (6 months ago)
- Last Synced: 2025-03-27T12:46:33.653Z (3 months ago)
- Topics: docling, ollama, pdf-converter, retrieval-augmented-generation
- Language: Python
- Homepage: https://www.youtube.com/@fahdmirza
- Size: 880 KB
- Stars: 55
- Watchers: 2
- Forks: 14
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Docling With Ollama

## Installation Video
[Watch the installation video](https://www.youtube.com/embed/GMHazLUQBQM)
## Introduction
Welcome to **Docling with Ollama**! This tool combines the best of both worlds: Docling for document parsing and Ollama for local models. It lets you use Docling and Ollama for RAG over PDF files (or any other supported file format) with LlamaIndex, and provides a clean Streamlit GUI for chatting with your own documents locally.
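The pieces fit together roughly as follows; this is an illustrative sketch of the Docling + LlamaIndex + Ollama wiring, not the repository's actual code (the function name, embedding model, and parameters are assumptions):

```python
def build_query_engine(pdf_path: str):
    """Hypothetical sketch: parse a PDF with Docling, index it with
    LlamaIndex, and answer questions with a local Ollama model."""
    # Imports are local so the sketch reads standalone; the pip package
    # names are listed in the Installation section below.
    from llama_index.core import VectorStoreIndex
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.llms.ollama import Ollama
    from llama_index.readers.docling import DoclingReader

    # Docling parses the PDF into LlamaIndex documents.
    documents = DoclingReader().load_data(pdf_path)
    # Embeddings are computed locally with a HuggingFace model.
    index = VectorStoreIndex.from_documents(
        documents,
        embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    )
    # Generation happens locally through the Ollama server.
    return index.as_query_engine(llm=Ollama(model="llama3.2"))
```

With an engine built this way, `engine.query("Summarize this document")` would retrieve relevant chunks and have the local model answer from them.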
## Prerequisites
Before you begin, ensure you have met the following requirements:
- **Python**: Make sure you have Python version >3.10 installed. You can download it from [python.org](https://www.python.org/downloads/).
- **Pip**: Ensure pip is installed to manage Python packages. It usually comes with Python.
- **Virtual Environment**: It's recommended to use a virtual environment to manage dependencies. I prefer Conda.
- **Ollama**: Make sure Ollama is installed and the llama3.2 model is downloaded with the `ollama pull llama3.2` command.

## Installation
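The prerequisite checklist can also be sanity-checked from Python before you start; a minimal stdlib-only sketch (the helper name is illustrative):

```python
import shutil
import sys

def check_prerequisites():
    """Return a list of problems; an empty list means you are ready."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python 3.10+ is required")
    if shutil.which("pip") is None and shutil.which("pip3") is None:
        problems.append("pip was not found on PATH")
    if shutil.which("ollama") is None:
        problems.append("ollama was not found on PATH "
                        "(install it and run `ollama pull llama3.2`)")
    return problems
```

Running `check_prerequisites()` and printing the result tells you which of the items above still needs attention.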
To install **Docling With Ollama**, follow these steps:
1. **Clone the repo**:
```bash
git clone https://github.com/fahdmirza/doclingwithollama
```
2. **Navigate to the project directory**:
```bash
cd doclingwithollama
```
3. **Create a virtual environment** (recommended):
Use Conda (recommended):
```bash
conda create -n ai python=3.11 -y && conda activate ai
```
Or Use Python VENV:
```bash
python3 -m venv myenv
source myenv/bin/activate # On Windows use `myenv\Scripts\activate`
```
4. **Install the dependencies**:
```bash
pip install torch
pip install git+https://github.com/huggingface/transformers
pip install llama-index-core llama-index-readers-docling llama-index-node-parser-docling llama-index-readers-file python-dotenv llama-index-llms-ollama llama-index-embeddings-huggingface llama-index-llms-huggingface-api
pip install pdfplumber numpy streamlit
```
## Running the Tool
To run **Docling with Ollama**, execute the following command:
```bash
streamlit run app.py
```
If it doesn't open automatically, open your browser and go to `http://localhost:8501` to see the tool in action.
## Usage
From the left panel, upload your local PDF files and start chatting with them.
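A chat page like this takes only a few lines of Streamlit; the following is an illustrative outline rather than the repository's actual `app.py` (the function name and widget labels are assumptions):

```python
def render_chat(query_engine) -> None:
    """Hypothetical sketch of a Streamlit chat loop over an
    already-built LlamaIndex query engine."""
    import streamlit as st  # local import keeps the sketch self-contained

    # Sidebar upload for the document to chat with.
    st.sidebar.file_uploader("Upload a PDF", type=["pdf"])

    # One round of chat: show the question, then the local model's answer.
    if prompt := st.chat_input("Ask a question about your document"):
        with st.chat_message("user"):
            st.write(prompt)
        with st.chat_message("assistant"):
            # The engine retrieves relevant chunks and asks the local
            # Ollama model to answer from them.
            st.write(str(query_engine.query(prompt)))
```

`st.chat_input` and `st.chat_message` are Streamlit's built-in chat widgets; rerunning the script on each interaction is how Streamlit drives the loop.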
## Contributing
Contributions are always welcome! See `CONTRIBUTING.md` for ways to get started.
## License
This project is licensed under the APACHE 2.0 License - see the `LICENSE` file for details.
## Contact
For questions or feedback, please contact Fahd Mirza at https://www.youtube.com/@fahdmirza