Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/yas-sim/openvino-chatbot-rag-pdf
LLM Chatbot-RAG by OpenVINO. The chatbot can read a PDF file and answer to the related questions.
- Host: GitHub
- URL: https://github.com/yas-sim/openvino-chatbot-rag-pdf
- Owner: yas-sim
- Created: 2023-12-25T12:53:09.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-29T12:34:37.000Z (12 months ago)
- Last Synced: 2024-11-08T21:04:48.341Z (2 months ago)
- Topics: chatbot, huggingface, large-language-models, llama, llama2, llm, neural-chat, openvino, pdf, python, rag, retrieval-augmented-generation
- Language: Python
- Homepage:
- Size: 104 KB
- Stars: 4
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# OpenVINO Chatbot using RAG (PDF)
## Description
This project demonstrates how to extend an LLM's capability to answer questions about a given document.
The project consists of two programs: one prepares the data, and the other performs question answering with the LLM.
The preparation program reads a PDF file and generates a database (vector store).
The LLM then picks up the fragments of the input document that are most relevant to the user's query and answers by referring to those fragments. This technique is called RAG (Retrieval-Augmented Generation).

## Programs/Files
|#|file name|description|
|---|---|---|
|1|`vectorstore_generator.py`|Reads a PDF file and generates a vectorstore. You can modify this program to read other document file formats for this RAG chatbot demo.|
|2|`openvino-chatbot-rag-pdf.py`|LLM chatbot using OpenVINO. Answers the query by referring to a vectorstore.|
|3|`llm-model-downloader.py`|Downloads LLM models from Hugging Face and converts them into OpenVINO IR models. By default, it downloads `dolly-v2-3b`, `neural-chat-7b-v3-1`, `tinyllama-1.1b-chat-v0.6`, and `youri-7b-chat`. You can also download `llama2-7b-chat` by uncommenting some lines in the code.|
|4|`.env`|Configuration (model name, model precision, inference device, etc.)|

## How to run
0. Install prerequisites
```sh
python -m venv venv
venv\Scripts\activate
python -m pip install -U pip
pip install -U setuptools wheel
pip install -r requirements.txt
```
1. Download LLM models
This program downloads the LLM models and converts them into OpenVINO IR models.
If you don't want to download many LLM models, you can comment out the models in the code to save time.
```sh
python llm-model-downloader.py
```
2. Preparation - Read a PDF file and generate a vectorstore
```sh
```sh
python vectorstore_generator.py -i input.pdf
```
A `./vectorstore_{pdf_basename}` directory will be created, and the vectorstore data will be stored in it, e.g. `./vectorstore_input`.
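Conceptually, the preparation step splits the extracted PDF text into overlapping chunks and stores a vector for each chunk. The following is a minimal, dependency-free sketch of that idea, not the script's actual implementation: the real script presumably uses a PDF loader and a neural embedding model, while the `split_into_chunks`/`embed` helpers, the toy bag-of-words vectors, and the JSON layout here are illustrative assumptions.

```python
import json
import re
from collections import Counter

def split_into_chunks(text, chunk_size=500, overlap=100):
    # Slide a window over the text with some overlap so sentences cut
    # at a chunk boundary still appear whole in one of the chunks.
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text):
    # Toy "embedding": a lowercase token -> count mapping.
    # A real vectorstore would use a neural embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def build_vectorstore(text, path):
    # Store each chunk next to its vector so the chatbot can later
    # rank chunks by similarity to a query.
    store = [{"chunk": c, "vector": embed(c)} for c in split_into_chunks(text)]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(store, f)
    return store
```

With the defaults above, a 1000-character text yields three overlapping chunks (0-500, 400-900, 800-1000).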
![generation](./resources/generation.png)
3. Run LLM Chatbot
```sh
python openvino-chatbot-rag-pdf.py -v vectorstore_input
```
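At query time, the chatbot retrieves the stored chunks most similar to the question and places them into the prompt as context before calling the LLM. Below is a hedged sketch of that retrieve-then-prompt step using the same toy bag-of-words vectors; the function names and prompt template are illustrative assumptions, not the script's real API.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real chatbot would use neural embeddings.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank stored chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, context_chunks):
    # Stuff the retrieved chunks into the prompt so the LLM answers
    # from the document rather than from its training data alone.
    context = "\n---\n".join(context_chunks)
    return ("Answer the question using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The retrieved chunks, not the whole PDF, are what the LLM actually sees, which is what keeps the prompt within the model's context window.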
![chatbot](./resources/chatbot.png)
## Appendix - vectorstore (retriever) test tool
You can check which fragments of the input document are picked up from the vectorstore for a given query.
```sh
python test_vectorstore.py -v vectorstore_hoge
```![test_vectorstore](./resources/test_vectorstore.png)
## Test environment
- Windows 11
- OpenVINO 2023.2.0