Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/leoneversberg/llm-chatbot-rag
A local LLM chatbot with RAG for PDF input files
- Host: GitHub
- URL: https://github.com/leoneversberg/llm-chatbot-rag
- Owner: leoneversberg
- License: MIT
- Created: 2024-03-21T11:51:30.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-04-17T08:16:15.000Z (7 months ago)
- Last Synced: 2024-06-03T09:31:13.886Z (5 months ago)
- Topics: chatbot, llm, nlp, rag
- Language: Jupyter Notebook
- Homepage: https://medium.com/towards-data-science/how-to-build-a-local-open-source-llm-chatbot-with-rag-f01f73e2a131
- Size: 106 KB
- Stars: 38
- Watchers: 2
- Forks: 15
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - leoneversberg/llm-chatbot-rag - A local LLM chatbot with RAG for PDF input files (Jupyter Notebook)
README
# llm-chatbot-rag
![Screenshot](/images/example.jpg)
To use gated models such as Gemma, you need to create a `.env` file containing the line `ACCESS_TOKEN=` followed by your access token.
Install dependencies with `pip install -r requirements.txt`
Run with `streamlit run src/app.py`
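As a hedged illustration (not part of this repo, which may use a library such as python-dotenv instead), the `.env` line above could be read at startup with only the standard library. The variable name `ACCESS_TOKEN` comes from the instructions; the helper name is hypothetical:

```python
import os

def load_dotenv_line(path=".env"):
    """Hypothetical helper: parse KEY=VALUE lines from a .env file
    and copy them into os.environ without overwriting existing values."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without a KEY=VALUE shape
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# After calling load_dotenv_line(), the token is available as
# os.environ.get("ACCESS_TOKEN") for authenticating model downloads.
```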
### Using quantization requires a GPU
To use bitsandbytes quantization, an NVIDIA GPU is required.
Make sure to install the NVIDIA CUDA Toolkit first and then PyTorch. You can check whether your GPU is available in Python with:
```python
import torch

# True if PyTorch can see a CUDA-capable GPU
print(torch.cuda.is_available())
```
If you do not have a compatible GPU, try setting `device="cpu"` for the model and removing the quantization config.
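The CPU fallback mentioned above can be sketched as a small helper (the function name is hypothetical; it assumes only that PyTorch may or may not be installed):

```python
def pick_device():
    """Return "cuda" when PyTorch reports a usable GPU, else "cpu".
    Also falls back to CPU if torch itself is not installed."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

# Pass the result as device=pick_device() when loading the model,
# and skip the bitsandbytes quantization config when it returns "cpu".
```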