https://github.com/wasay8/rag-q-a-with-groq-and-open-llm-models
Streamlit-based RAG app for interactive Q&A using Groq AI and various open-source LLM models. Upload PDFs, create vector embeddings, and query documents for context-based answers.
- Host: GitHub
- URL: https://github.com/wasay8/rag-q-a-with-groq-and-open-llm-models
- Owner: wasay8
- Created: 2024-09-15T21:38:23.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-16T03:25:46.000Z (about 1 year ago)
- Last Synced: 2025-03-14T04:28:46.495Z (7 months ago)
- Topics: gemma2-9b, groq, huggingface, langchain, llama3, mixtral-8x7b, rag, streamlit-webapp
- Language: Python
- Homepage: https://rag-q-a-with-groq-and-open-llm-models-4memuabzugtjwrbyg3tv6h.streamlit.app/
- Size: 20.1 MB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Metadata Files:
- Readme: README.md
README
# RAG: Q&A with Groq and Open Source LLM Models
This Streamlit app lets users run SAT Maths question-and-answer (Q&A) sessions using various LLM models and vector embeddings. It leverages Groq AI, Hugging Face models, and a FAISS vector database to process and query document context.
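For orientation, this corresponds to a standard LangChain retrieval-augmented generation pipeline. The sketch below is a minimal illustration, not the repository's actual code: the embedding model, chunking parameters, prompt, and Groq model ID are all assumptions.

```python
# Illustrative end-to-end flow: load PDFs, index them in FAISS,
# and answer questions with a Groq-hosted model via LangChain.
import os

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load every PDF in the Documents folder and split it into overlapping chunks.
docs = PyPDFDirectoryLoader("Documents").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# Embed the chunks with a Hugging Face sentence-transformer and index them in FAISS.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)

# Wrap a Groq-hosted model in a retrieval chain that answers from the indexed context.
llm = ChatGroq(model="llama3-8b-8192", api_key=os.environ["GROQ_API_KEY"])
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "<context>\n{context}\n</context>\n\nQuestion: {input}"
)
chain = create_retrieval_chain(
    vectorstore.as_retriever(),
    create_stuff_documents_chain(llm, prompt),
)
print(chain.invoke({"input": "If 3x + 5 = 20, what is x?"})["answer"])
```

FAISS keeps the index in memory, which suits a small `Documents` folder; in the app these steps sit behind the "Document Embedding" button described under Usage.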
## Requirements
- Python 3.7+
- Streamlit
- LangChain (including `langchain_groq`, `langchain_openai`, `langchain_community`, etc.)
- FAISS vector database
- `python-dotenv` for environment variable management
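The repository ships its own `requirements.txt` (installed in step 3 below), which is authoritative. Purely as an illustration of the list above, a matching dependency set might look like the following; package names such as `faiss-cpu` and `pypdf` are assumptions:

```text
streamlit
python-dotenv
langchain
langchain-groq
langchain-openai
langchain-community
faiss-cpu
pypdf
```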

## Installation

1. **Clone the Repository:**
```bash
git clone https://github.com/wasay8/rag-q-a-with-groq-and-open-llm-models.git
cd rag-q-a-with-groq-and-open-llm-models
```

2. **Create a Virtual Environment (optional but recommended):**
```bash
python -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
```

3. **Install Required Packages:**
```bash
pip install -r requirements.txt
```

4. **Set Up Environment Variables:**
Create a `.env` file in the same directory as `app.py` with the following content:
```env
HF_TOKEN=your_hugging_face_token
GROQ_API_KEY=your_groq_api_key
```
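At startup the app can pull these values in with `python-dotenv`. A minimal sketch, assuming the variable names above; the exact code in `app.py` may differ:

```python
import os

from dotenv import load_dotenv

# Read HF_TOKEN and GROQ_API_KEY from .env into the process environment.
load_dotenv()

hf_token = os.getenv("HF_TOKEN")
groq_api_key = os.getenv("GROQ_API_KEY")
if not (hf_token and groq_api_key):
    raise RuntimeError("HF_TOKEN or GROQ_API_KEY missing from .env")
```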

## Usage

1. **Prepare Documents:**
Create a `Documents` folder in the same directory as `app.py` and place all PDF files you want to use as context for Q&A.
2. **Run the Streamlit App:**
```bash
streamlit run app.py
```

3. **Interact with the App:**
- Use the sidebar to input your Groq API key and Hugging Face API key.
- Click on "Document Embedding" to load and process the documents.
- Select an LLM model from the dropdown menu and set the temperature for the model.
- Enter your query in the text input box and view the response (see the sketch after this list).
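These controls map onto standard Streamlit widgets. The following is a hedged sketch of that UI, not the app's source; the model IDs in the dropdown are illustrative examples based on the repository topics (llama3, mixtral-8x7b, gemma2-9b):

```python
import streamlit as st

st.title("RAG: Q&A with Groq and Open Source LLM Models")

# Sidebar: API keys and model settings.
groq_api_key = st.sidebar.text_input("Groq API key", type="password")
hf_api_key = st.sidebar.text_input("Hugging Face API key", type="password")
model = st.sidebar.selectbox(
    "LLM model",
    ["llama3-8b-8192", "mixtral-8x7b-32768", "gemma2-9b-it"],  # illustrative IDs
)
temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.2)

# Main area: embedding trigger and query box.
if st.button("Document Embedding"):
    st.session_state["vectors_ready"] = True  # build the FAISS index here
query = st.text_input("Enter your query")
if query and st.session_state.get("vectors_ready"):
    st.write("(answer from the retrieval chain goes here)")
```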

## Notes

- Ensure that the paths and API keys are correctly set up in the `.env` file.
- Adjust the paths and settings according to your local setup if needed.

## Acknowledgement
Special thanks to Krish Naik for his guidance.
## Troubleshooting
- If no documents are loaded, verify the file path and contents in the `Documents` folder.
- Check your API keys and ensure they are correct.
- For any errors, refer to the error messages displayed in the app or consult the LangChain documentation for additional guidance.

Feel free to contribute or open issues if you encounter any problems or have suggestions for improvements!