Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/danitilahun/llamaread-pdf-url
LlamaRead PDF URL is a powerful and intelligent application designed to seamlessly read and analyze content from both PDFs and URLs. Leveraging the advanced capabilities of LLaMA3, this app transforms the way you interact with documents and web content by providing insightful and accurate answers to your queries.
- Host: GitHub
- URL: https://github.com/danitilahun/llamaread-pdf-url
- Owner: Danitilahun
- Created: 2024-06-07T20:30:51.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-06-07T20:38:23.000Z (5 months ago)
- Last Synced: 2024-10-29T08:03:56.586Z (9 days ago)
- Topics: docker, llama3, openai, pdf, pgvector, phidata, rag, sqlalchemy, streamlit, url
- Language: Python
- Homepage:
- Size: 8.79 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# RAG with Llama3 on Groq
This cookbook shows how to do retrieval-augmented generation (RAG) using Llama3 on Groq.
For embeddings, you can use either Ollama or OpenAI.
> Note: Fork and clone this repository if needed
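Once the setup below is complete, the "Llama3 on Groq" half of the pipeline boils down to a single LLM call. A minimal sketch, assuming phidata's `Assistant` and `Groq` classes (check the pinned requirements if the import paths differ):

```python
from phi.assistant import Assistant
from phi.llm.groq import Groq

# Plain Llama3-on-Groq call, no retrieval yet; requires GROQ_API_KEY
# to be exported (step 2). Model name assumed from Groq's Llama3 lineup.
assistant = Assistant(llm=Groq(model="llama3-70b-8192"))
assistant.print_response("Explain retrieval-augmented generation in two sentences.", markdown=True)
```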
### 1. Create a virtual environment
```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

### 2. Export your Groq API Key
```shell
export GROQ_API_KEY=***
```
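Before launching the app, it can be worth confirming the key is actually visible to Python in the same shell; a purely illustrative check:

```python
import os

# The cookbook assumes GROQ_API_KEY is picked up from the environment;
# fail fast here if it was not exported in the current shell.
assert os.getenv("GROQ_API_KEY"), "GROQ_API_KEY is not set"
```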
### 3. Use Ollama or OpenAI for embeddings

Since Groq doesn't provide embeddings yet, you can use either Ollama or OpenAI for embeddings (see the Python sketch after this list).
- To use Ollama for embeddings, [install Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#macos) and run the `nomic-embed-text` model:
```shell
ollama run nomic-embed-text
```

- To use OpenAI for embeddings, export your OpenAI API key:
```shell
export OPENAI_API_KEY=sk-***
```
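In code, the choice between the two usually comes down to which embedder is handed to the vector store. A minimal sketch, assuming phidata's `OllamaEmbedder` and `OpenAIEmbedder` classes:

```python
from phi.embedder.ollama import OllamaEmbedder
from phi.embedder.openai import OpenAIEmbedder

# Local embeddings via Ollama; nomic-embed-text produces 768-dimensional vectors.
embedder = OllamaEmbedder(model="nomic-embed-text", dimensions=768)

# Or hosted embeddings via OpenAI (requires OPENAI_API_KEY):
# embedder = OpenAIEmbedder(model="text-embedding-3-small")
```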
### 4. Install libraries

```shell
pip install -r cookbook/llms/groq/rag/requirements.txt
```

### 5. Run PgVector
> Install [docker desktop](https://docs.docker.com/desktop/install/mac-install/) first.
- Run the container with `docker run`:
```shell
docker run -d \
-e POSTGRES_DB=ai \
-e POSTGRES_USER=ai \
-e POSTGRES_PASSWORD=ai \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v pgvolume:/var/lib/postgresql/data \
-p 5532:5432 \
--name pgvector \
phidata/pgvector:16
```
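The container above exposes Postgres on host port 5532 with user, password, and database all set to `ai`, so the app can reach it through a SQLAlchemy-style URL. A sketch of the corresponding vector store, assuming phidata's `PgVector2` class and a hypothetical collection name:

```python
from phi.embedder.ollama import OllamaEmbedder
from phi.vectordb.pgvector import PgVector2

# URL matches the docker run flags: user=ai, password=ai, db=ai, host port 5532.
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

vector_db = PgVector2(
    collection="llamaread_documents",  # hypothetical collection name
    db_url=db_url,
    embedder=OllamaEmbedder(model="nomic-embed-text", dimensions=768),
)
```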
### 6. Run RAG App

```shell
streamlit run app.py
```

- Open [localhost:8501](http://localhost:8501) to view your RAG app.
- Add websites or PDFs and ask questions; a rough programmatic equivalent is sketched below.
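For reference, a rough programmatic equivalent of "add a PDF and ask a question", built from the same pieces. All class names and the example URL are assumptions based on phidata's cookbook patterns; the Streamlit app in this repo may wire things differently:

```python
from phi.assistant import Assistant
from phi.embedder.ollama import OllamaEmbedder
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.llm.groq import Groq
from phi.vectordb.pgvector import PgVector2

# Build a knowledge base from a PDF URL, embedded via Ollama and stored in pgvector.
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/sample.pdf"],  # hypothetical PDF URL
    vector_db=PgVector2(
        collection="llamaread_documents",  # hypothetical collection name
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
        embedder=OllamaEmbedder(model="nomic-embed-text", dimensions=768),
    ),
)
knowledge_base.load(recreate=False)  # chunk, embed, and upsert the document

# Answer questions with Llama3 on Groq, grounding the prompt in retrieved chunks.
assistant = Assistant(
    llm=Groq(model="llama3-70b-8192"),
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
)
assistant.print_response("What is this document about?", markdown=True)
```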