Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bayyy7/agentic_rag-v1
End-to-End Chatbot app with RAG system using Langchain and Streamlit
chatbot langchain langgraph rag streamlit
Last synced: 7 days ago
- Host: GitHub
- URL: https://github.com/bayyy7/agentic_rag-v1
- Owner: bayyy7
- Created: 2024-12-09T08:18:50.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2024-12-10T02:52:14.000Z (2 months ago)
- Last Synced: 2024-12-22T15:13:27.954Z (about 2 months ago)
- Topics: chatbot, langchain, langgraph, rag, streamlit
- Language: Jupyter Notebook
- Homepage:
- Size: 2.21 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Agentic RAG with Langchain
## 🌟 Project Overview
This project implements a chatbot built on Google Gemini 1.5 Pro with LangChain as the framework, allowing the LLM to answer questions based on the provided context (a PDF). It also uses LangGraph, LangChain's stateful orchestration library, which lets developers define custom flows or architectures and comes with chat memory.
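As a rough sketch only (not this repository's code; the model name, graph shape, and memory setup here are assumptions), Gemini 1.5 Pro can be wired into a LangGraph state graph with per-thread chat memory like this:
```python
# Hedged sketch: a single chat node with checkpointed memory.
# The actual app additionally routes through a retrieval step over the uploaded PDF.
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")

def chat(state: MessagesState):
    # Call Gemini with the full message history held in the graph state.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chat", chat)
builder.add_edge(START, "chat")

# MemorySaver checkpoints the conversation per thread_id, which provides chat memory.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo"}}
reply = graph.invoke({"messages": [("user", "Hello!")]}, config)
print(reply["messages"][-1].content)
```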
## ✨ Features
- 📄 PDF Document Upload
- 🤖 AI-Powered Knowledge Retrieval
- 💬 Interactive Chat Interface
- 🔍 Semantic Document Search

## 🛠 Tech Stack
- **Language Model**: Google Gemini 1.5 Pro
- **Framework**:
- Streamlit
- LangChain
- **Embedding**: Sentence Transformers
- **Vector Store**: FAISS
- **Programming Language**: Python 3.8+

## 🚀 Quick Start
### Prerequisites
- Python 3.8+
- LangChain
- Google Generative AI API Key

### Installation
1. Clone the repository
```bash
git clone https://github.com/bayyy7/agentic_rag-v1.git
cd agentic_rag-v1
```
2. Create a virtual environment
```bash
python -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
```
3. Install dependencies
```bash
pip install -r requirements.txt
```
4. Configure environment variables
- Create a `.env` file in the project root
- Add your Google API key:
```
GOOGLE_GENERATIVE_AI=your_google_api_key_here
```
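A minimal way to check that the key is picked up at startup is with `python-dotenv` (this is an assumption about how the key is loaded; `app.py` may do it differently):
```python
import os

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # reads the .env file in the project root

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-pro",
    google_api_key=os.getenv("GOOGLE_GENERATIVE_AI"),
)
```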
5. Create your system prompt
- Create a `prompt` folder
- Create a new Python file `system_prompt.py`
```python
def system_prompt(tool_messages):
    """
    Generate the system prompt content from the retrieved tool messages.
    """
    docs_content = "\n\n".join(doc.content for doc in tool_messages)
    return (
        "[YOUR PROMPT HERE]"
        f"{docs_content}\n\n"
    )
```
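To try the prompt builder in isolation, a hypothetical snippet (the stand-in message class below is illustrative only; in the app these objects come from the retrieval step):
```python
from dataclasses import dataclass

from prompt.system_prompt import system_prompt


@dataclass
class FakeToolMessage:
    """Stand-in for a retrieved tool message; only `.content` is used."""
    content: str


chunks = [FakeToolMessage("First retrieved chunk."),
          FakeToolMessage("Second retrieved chunk.")]
print(system_prompt(chunks))
```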
### Running the Application
```bash
streamlit run app.py
```
### Custom Config
You can change the configuration directly in `config/config.py`. There are several example values you can adjust as needed. Be careful when changing the embedding model, though: you must know the dimension of the embedding itself. The snippets below help you find the embedding dimension.
- Using the `embed_query` function
```python
import numpy

# `embeddings` is the embedding model configured in config/config.py
vector = embeddings.embed_query("aiueo")
matrix = numpy.array(vector).astype('float32')
len(matrix)  # embedding dimension
```
- Using the `embed_documents` function
```python
import numpy

# embed_documents expects a list of strings, not a single string
vector = embeddings.embed_documents(["aiueo"])
matrix = numpy.array(vector).astype('float32')
matrix.shape[1]  # embedding dimension
```
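Whatever dimension comes back is the size the FAISS index must be created with. A hedged sketch of tying the two together (the model name and the `FAISS` constructor wiring are assumptions, not taken from `config/config.py`):
```python
import faiss
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

# Example Sentence Transformers model; swap in whatever config/config.py uses.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

dim = len(embeddings.embed_query("aiueo"))  # 384 for all-MiniLM-L6-v2
index = faiss.IndexFlatL2(dim)              # the index dimension must match the model

vector_store = FAISS(
    embedding_function=embeddings,
    index=index,
    docstore=InMemoryDocstore(),
    index_to_docstore_id={},
)
```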