Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wafflecomposite/langchain-ask-pdf-local
An AI-app that allows you to upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
langchain llama llamacpp llm self-hosted stablevicuna
- Host: GitHub
- URL: https://github.com/wafflecomposite/langchain-ask-pdf-local
- Owner: wafflecomposite
- Created: 2023-05-07T16:30:21.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-05-07T16:32:38.000Z (over 1 year ago)
- Last Synced: 2025-01-01T09:30:15.209Z (11 days ago)
- Topics: langchain, llama, llamacpp, llm, self-hosted, stablevicuna
- Language: Python
- Homepage:
- Size: 18.6 KB
- Stars: 88
- Watchers: 4
- Forks: 7
- Open Issues: 7
- Metadata Files:
  - Readme: readme.md
README
# Ask Your PDF, locally
| ![UI screenshot of Ask Your PDF](media/ScreenshotAskYourPDF.png) |
|:--:|
| Answering a question about the [2303.12712 paper](https://arxiv.org/pdf/2303.12712.pdf), a 7 MB PDF file |

This is an attempt to recreate [Alejandro AO's langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) (also check out [his tutorial on YT](https://www.youtube.com/watch?v=wUAUdEw5oxM)) using open source models running locally.
It uses [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) instead of OpenAI Embeddings, and [StableVicuna-13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) instead of OpenAI models.
It runs on the CPU, is impractically slow, and was created more as an experiment than a practical tool, but I am still fairly happy with the results.
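The retrieval step works roughly like this: the PDF text is split into chunks, each chunk is embedded, and the chunks most similar to the question's embedding are handed to the LLM as context. The sketch below illustrates that cosine-similarity retrieval shape with a hypothetical bag-of-words embedding as a stand-in for all-MiniLM-L6-v2; the real app uses sentence-transformers and LangChain, not this toy code.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for all-MiniLM-L6-v2: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the question and keep the best k.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "StableVicuna is a 13B parameter chat model.",
    "The embeddings are computed on the CPU.",
    "Streamlit renders the user interface.",
]
print(top_chunks("which model has 13B parameters?", chunks, k=1))
```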
## Requirements
A GPU is not used and is not required. You can squeeze the app into 16 GB of RAM, but I recommend 24 GB or more.
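The RAM figure follows from the quantization: the q4_2 GGML format stores 4-bit weights plus per-block scale factors, which works out to roughly 5 bits per weight (an assumption based on GGML's block layout). A back-of-the-envelope estimate for the 13B model, ignoring the embedding model, KV cache, and runtime overhead:

```python
params = 13e9          # StableVicuna-13B parameter count (approximate)
bits_per_weight = 5.0  # assumption: 4-bit weights + per-block scales in GGML q4_2
model_gb = params * bits_per_weight / 8 / 1e9
print(f"{model_gb:.1f} GB")  # weights alone, before any runtime overhead
```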
## Installation
- Install requirements (preferably to `venv`): `pip install -r requirements.txt`
- Download `stable-vicuna-13B.ggml.q4_2.bin` from [TheBloke/stable-vicuna-13B-GGML](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML/tree/main) and place it in the project folder.
## Usage
Run `streamlit run app.py` (or `streamlit run .\app.py` on Windows).
This should launch the UI in your default browser. Select a PDF file, ask your question, and wait patiently.
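Once the relevant chunks are retrieved, a "stuff"-style QA chain simply pastes them into a prompt template ahead of the question before calling the LLM. The template below is a hypothetical illustration of that step, not the app's actual prompt:

```python
def build_prompt(question: str, chunks: list[str]) -> str:
    # Hypothetical "stuff" prompt: concatenate retrieved context, then the question.
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What model does the app use?",
    ["The app runs StableVicuna 13B locally.",
     "Embeddings come from all-MiniLM-L6-v2."],
)
print(prompt)
```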