https://github.com/brenimcode/whatsapp-ai-chatbot
AI-powered conversational agent for barber shops using RAG, Prompt Engineering, and LLMs. Automates WhatsApp support 24/7
- Host: GitHub
- URL: https://github.com/brenimcode/whatsapp-ai-chatbot
- Owner: brenimcode
- Created: 2025-02-07T16:05:47.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-15T01:03:49.000Z (about 1 year ago)
- Last Synced: 2025-02-15T01:27:09.633Z (about 1 year ago)
- Topics: ai, docker, langchain, python
- Language: Python
- Homepage:
- Size: 0 Bytes
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Conversational AI Agent for WhatsApp Barber Shops
This conversational agent leverages **Artificial Intelligence (AI)**, **RAG (Retrieval-Augmented Generation)**, and **Prompt Engineering** to revolutionize customer service for barber shops. It operates **24/7 autonomously**, ensuring **fast, natural, and personalized responses** to WhatsApp users.
## Technologies Used
- Python
- LangChain
- ChromaDB (vector database)
- HuggingFace Embeddings
- FastAPI
- WAHA API (WhatsApp HTTP API)
- Docker & Docker Compose
## System Overview

This system employs **RAG (Retrieval-Augmented Generation)** to generate responses from various document formats (PDF, images, text, etc.), utilizing a **vector database** to fetch relevant information dynamically. This ensures responses that are more **coherent, concise, and less prone to hallucinations**—a common issue in LLMs.
For efficient data storage, I use **ChromaDB**, a vector database that stores high-dimensional embeddings and efficiently handles retrieval and deletion operations.
Embeddings are generated using **HuggingFace Embeddings**, an open-source, high-performance solution.
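The store-and-retrieve mechanics can be illustrated with a stdlib-only sketch. The character-frequency "embedding" below is a toy stand-in for the HuggingFace model, and `TinyVectorStore` stands in for ChromaDB; only the cosine-similarity retrieval idea carries over to the real stack.

```python
import math

# Toy embedding: a character-frequency vector. In the real system this is a
# HuggingFace sentence-embedding model; here it only needs to make similar
# texts produce similar vectors.
def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    """Stores (embedding, document) pairs; retrieves the k most similar."""

    def __init__(self) -> None:
        self.items: list[tuple[list[float], str]] = []

    def add(self, doc: str) -> None:
        self.items.append((embed(doc), doc))

    def query(self, text: str, k: int = 2) -> list[str]:
        q = embed(text)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [doc for _, doc in ranked[:k]]

store = TinyVectorStore()
store.add("Haircut: 30 minutes, $25.")
store.add("Beard trim: 15 minutes, $10.")
store.add("Opening hours: Tuesday to Saturday, 9am to 7pm.")
print(store.query("How much is a haircut?", k=1))
# → ['Haircut: 30 minutes, $25.']
```

The retrieved snippets are what gets injected into the prompt, which is why grounded answers are less prone to hallucination: the model is asked to restate stored facts rather than recall them.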
To maintain conversational context, the agent stores the last five WhatsApp messages, preserving a history that allows for **more accurate and context-aware responses**.
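A five-message sliding window maps naturally onto a bounded deque. This is a minimal sketch of that memory; the `(role, text)` message shape is an assumption for illustration, not the project's actual data model.

```python
from collections import deque

class ChatMemory:
    """Keeps only the most recent WhatsApp messages (default: five)."""

    def __init__(self, max_messages: int = 5) -> None:
        # maxlen makes the deque drop the oldest entry automatically.
        self.messages: deque = deque(maxlen=max_messages)

    def add(self, role: str, text: str) -> None:
        self.messages.append((role, text))

    def as_history(self) -> str:
        # Flatten the window into a text block for the prompt.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

memory = ChatMemory()
for i in range(7):
    memory.add("user", f"message {i}")
print(len(memory.messages))                   # → 5 (oldest two dropped)
print(memory.as_history().splitlines()[0])    # → user: message 2
```

Because `maxlen` is enforced on append, the history never grows beyond the window and no explicit pruning step is needed.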
The LLM receives input structured as follows:
- **User Message (WhatsApp input)**
- **Relevant Documents (retrieved from the vector database)**
- **Optimized Prompt (Few-Shot Prompting with instructions, persona, and context)**
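The three inputs above can be assembled into a single prompt string along these lines. The persona wording, few-shot example, and section labels are illustrative assumptions, not the project's actual prompt.

```python
# One hypothetical few-shot example showing the expected answer style.
FEW_SHOT = (
    "Example:\n"
    "Customer: What time do you open?\n"
    "Assistant: We open Tuesday to Saturday, 9am to 7pm.\n"
)

def build_prompt(user_message: str, retrieved_docs: list, history: str) -> str:
    # Retrieved documents become a bulleted context section.
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "You are a friendly barber-shop assistant. Answer only from the "
        "context below; if the answer is not there, say you don't know.\n\n"
        f"{FEW_SHOT}\n"
        f"Context:\n{context}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"Customer: {user_message}\nAssistant:"
    )

prompt = build_prompt(
    "How much is a haircut?",
    ["Haircut: 30 minutes, $25."],
    "user: hi\nassistant: Hello! How can I help?",
)
print(prompt)
```

Ending the prompt at `Assistant:` steers the model to complete the next turn rather than continue the customer's message.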
Incoming WhatsApp messages are processed through the **WAHA API**, which acts as middleware between the user and the LLM.
A **FastAPI** application exposes a `/webhook` route that WAHA calls. When a message arrives, WAHA posts it to the webhook, the LLM processes it, and a response is generated.
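The webhook flow can be sketched as a plain handler function, keeping the FastAPI route a thin wrapper around it. The payload field names (`event`, `payload`, `from`, `body`) are assumptions about the WAHA webhook shape, and `generate_reply` stands in for the retrieval + LLM pipeline.

```python
def generate_reply(chat_id: str, text: str) -> str:
    # Placeholder for the real pipeline: query the vector store, build the
    # prompt, call the LLM, and update the 5-message history.
    return f"(reply to {chat_id}): you said {text!r}"

def handle_webhook(event: dict) -> dict:
    # Ignore non-message events (session status changes, acks, etc.).
    if event.get("event") != "message":
        return {"status": "ignored"}
    payload = event.get("payload", {})
    chat_id = payload.get("from", "")
    text = payload.get("body", "")
    reply = generate_reply(chat_id, text)
    # In the real service the reply would be sent back through WAHA's
    # send-message endpoint; here it is simply returned.
    return {"status": "ok", "reply": reply}

print(handle_webhook({"event": "message",
                      "payload": {"from": "5511999999999", "body": "hi"}}))
```

In the FastAPI app, the `/webhook` POST route would parse the request body into this `event` dict and delegate to `handle_webhook`, which keeps the transport layer separate from the conversation logic.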
## Requirements
Ensure you have the following installed on your system:
- Python (recommended version: 3.10 or higher)
- Docker & Docker Compose
- Other dependencies listed in `requirements.txt`
## Installing Dependencies
Create and activate a virtual environment (for example, `python -m venv venv` followed by `source venv/bin/activate`), then install the project dependencies:
```bash
pip install -r requirements.txt
```
## Running the Project
Once dependencies are installed, start the services using Docker Compose:
```bash
docker-compose up --build
```