https://github.com/code4mk/langchain-rag-application
Generative AI RAG for movie history (chat and QA) with LangChain, OpenAI, and Pinecone
- Host: GitHub
- URL: https://github.com/code4mk/langchain-rag-application
- Owner: code4mk
- Created: 2024-08-03T15:23:04.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-08-03T21:00:33.000Z (10 months ago)
- Last Synced: 2025-02-19T12:55:24.746Z (4 months ago)
- Topics: gen-ai, gradio, langchain, langchain-rag, openai, pinecone
- Language: Python
- Homepage:
- Size: 11.7 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# RAG Application
This is a Retrieval-Augmented Generation (RAG) application for movie history, built with LangChain, OpenAI, and Pinecone.
## Features
- Chat
- Question Answering (QA)

## Setup
1. **Create a virtual environment:**
```bash
python -m venv venv
```
2. **Activate the virtual environment:**
- On Windows:
```bash
venv\Scripts\activate
```
- On macOS and Linux:
```bash
source venv/bin/activate
```
3. **Install dependencies:**
```bash
pip install -r requirements.txt
```
4. **Set up environment variables:**
Create a `.env` file in the root directory of your project and add the following variables:
```plaintext
OPENAI_API_KEY=""
PINECONE_API_KEY=""
PINECONE_INDEX_NAME="movie-history"
FEATURE_NAME="chat" # or "qa"
```
`FEATURE_NAME` can be either `chat` or `qa`; the difference is that `chat` preserves the conversation history between turns.
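The chat/qa split above can be sketched as follows. This is a hypothetical illustration, not code from the repository: `build_prompt` and its signature are invented here to show how `chat` might fold prior turns into the prompt while `qa` sends each question standalone.

```python
# Illustrative sketch (names are not from the repo): only the "chat"
# feature includes past turns when building the LLM prompt.

def build_prompt(question: str, history: list[tuple[str, str]], feature: str = "chat") -> str:
    """Build the prompt; the qa feature ignores conversation history."""
    if feature == "qa" or not history:
        return f"Question: {question}"
    turns = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    return f"{turns}\nQuestion: {question}"

history = [("Who directed Jaws?", "Steven Spielberg.")]
print(build_prompt("When was it released?", history, feature="chat"))
print(build_prompt("When was it released?", history, feature="qa"))
```

With `feature="chat"` the follow-up question carries the earlier exchange, so the model can resolve "it"; with `feature="qa"` the question arrives without context.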
## Load Data into Pinecone VectorDB
1. **Load the data into Pinecone:**
```bash
python ./src/vector_store/load_data.py
```

## Run the Project
1. **Start the project with the Gradio UI:**
```bash
python app.py
```
The Gradio UI is served at `http://127.0.0.1:7860`.
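Under the hood, a RAG app like this answers a question in two stages: retrieve relevant movie-history passages from the vector store, then pass them to the LLM as context. A minimal sketch of that flow with stubbed components (the real app backs retrieval with Pinecone and generation with OpenAI; every name and document below is illustrative):

```python
import re

# Toy corpus standing in for the movie-history documents in Pinecone.
DOCS = [
    "Jaws (1975) was directed by Steven Spielberg.",
    "The Godfather (1972) was directed by Francis Ford Coppola.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the question.
    The real app would use embedding similarity search instead."""
    words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(DOCS, key=lambda d: -len(words & set(re.findall(r"\w+", d.lower()))))
    return scored[:k]

def generate(question: str, context: list[str]) -> str:
    """Toy generation: show the context an LLM would be prompted with."""
    return f"Context: {' '.join(context)}\nAnswer to: {question}"

print(generate("Who directed Jaws?", retrieve("Who directed Jaws?")))
```

The design point is the separation: retrieval narrows the corpus to a few relevant chunks, and generation only ever sees those chunks, which keeps the prompt small and grounds the answer in the stored data.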