https://github.com/swiichycode/plana-chatbot-microservice
This microservice is designed to handle chatbot interactions efficiently by leveraging Redis for caching and PostgreSQL for persistent storage. The architecture ensures quick responses while maintaining a reliable conversation history.
- Host: GitHub
- URL: https://github.com/swiichycode/plana-chatbot-microservice
- Owner: SwiichyCode
- Created: 2025-02-22T16:25:45.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-02-22T17:28:47.000Z (9 months ago)
- Last Synced: 2025-02-22T17:32:46.706Z (9 months ago)
- Language: TypeScript
- Size: 25.4 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Chatbot Microservice with Redis & PostgreSQL
## Overview
This microservice is designed to handle chatbot interactions efficiently by leveraging **Redis** for caching and **PostgreSQL** for persistent storage. The architecture ensures quick responses while maintaining a reliable conversation history.
## Architecture
The chatbot service follows a **Producer-Consumer pattern** to achieve high performance and scalability (a minimal sketch of the producer side follows the list):
1. **Fast response**: Chat history is **temporarily stored in Redis** for quick retrieval.
2. **Asynchronous persistence**: Each message is **queued in Redis** for background storage.
3. **Worker process**: A background **worker listens to Redis** and saves conversations into **PostgreSQL**.
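The producer side of steps 1 and 2 can be sketched in a few lines. This is an illustration rather than the repository's code: it assumes the `ioredis` client, and the `chat:history:*` key pattern and `ChatMessage` shape are made up for the example; only the `chatQueue` name comes from this README.

```typescript
import Redis from "ioredis";

// Assumed client setup; host/port come from the .env described below.
const redis = new Redis({
  host: process.env.REDIS_HOST ?? "localhost",
  port: Number(process.env.REDIS_PORT ?? 6379),
});
const EXPIRATION_TIME = Number(process.env.EXPIRATION_TIME ?? 1800);

// Hypothetical message shape used for both caching and queueing.
interface ChatMessage {
  conversationId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: string;
}

// Step 1: cache the message in a per-conversation Redis list for fast retrieval.
// Step 2: push the same payload onto `chatQueue` for asynchronous persistence.
export async function recordMessage(message: ChatMessage): Promise<void> {
  const historyKey = `chat:history:${message.conversationId}`;
  const payload = JSON.stringify(message);

  await redis.rpush(historyKey, payload);          // temporary chat history
  await redis.expire(historyKey, EXPIRATION_TIME); // cache entry expires after EXPIRATION_TIME seconds
  await redis.rpush("chatQueue", payload);         // consumed by the worker (step 3)
}
```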
### Project Structure
```
chatbot-service/
├── src/
│   ├── config/          # Configuration (Redis, OpenAI, PostgreSQL)
│   ├── controllers/     # Business logic for handling requests
│   ├── services/        # Services (Chat handling, database operations)
│   ├── workers/         # Background worker for PostgreSQL storage
│   ├── models/          # Sequelize models for PostgreSQL
│   ├── app.js           # Express app configuration
│   ├── server.js        # Main server entry point
│   └── worker.js        # Worker process for persisting conversations
├── .env
└── package.json
```
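To show how these pieces fit together, here is a rough sketch of the `app.js` / `server.js` wiring. It is an assumption, not the repository's code: the `/chat` route and the `handleUserMessage` helper (sketched later under Implementation Details) are hypothetical names.

```typescript
import express from "express";
// Hypothetical chat-service helper, sketched under "Implementation Details" below;
// the repository's actual export names may differ.
import { handleUserMessage } from "./services/chatService";

const app = express();
app.use(express.json());

// Chat endpoint: the controller delegates to the chat service,
// which handles the Redis cache, the queue, and the OpenAI call.
app.post("/chat", async (req, res) => {
  const { conversationId, message } = req.body;
  const reply = await handleUserMessage(conversationId, message);
  res.json({ conversationId, reply });
});

// server.js equivalent: start the HTTP server on the configured port.
const port = Number(process.env.PORT ?? 4000);
app.listen(port, () => console.log(`Chatbot API listening on port ${port}`));
```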
## Installation & Setup
### 1. Clone the Repository
```bash
git clone https://github.com/swiichycode/plana-chatbot-microservice.git
cd plana-chatbot-microservice
```
### 2. Install Dependencies
```bash
npm install
```
### 3. Configure Environment Variables
Create a `.env` file with the following configuration:
```env
PORT=4000
OPENAI_API_KEY=your_openai_api_key
REDIS_HOST=localhost
REDIS_PORT=6379
PG_HOST=localhost
PG_USER=your_pg_user
PG_PASSWORD=your_pg_password
PG_DATABASE=your_pg_database
EXPIRATION_TIME=1800
```
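For reference, a module in `src/config/` could expose these variables roughly as follows. This is only a sketch assuming the `dotenv` package; the object shape is illustrative and may not match the repository's actual config files.

```typescript
// Illustrative config loader; variable names match the .env above,
// but the module layout is an assumption, not the repo's actual code.
import "dotenv/config";

export const config = {
  port: Number(process.env.PORT ?? 4000),
  openAiApiKey: process.env.OPENAI_API_KEY ?? "",
  redis: {
    host: process.env.REDIS_HOST ?? "localhost",
    port: Number(process.env.REDIS_PORT ?? 6379),
  },
  postgres: {
    host: process.env.PG_HOST ?? "localhost",
    user: process.env.PG_USER ?? "",
    password: process.env.PG_PASSWORD ?? "",
    database: process.env.PG_DATABASE ?? "",
  },
  // Seconds before a cached conversation expires in Redis (1800 = 30 minutes).
  expirationTime: Number(process.env.EXPIRATION_TIME ?? 1800),
};
```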
### 4. Start the Services
Start the **chatbot API**:
```bash
npm run start
```
Start the **worker for database persistence**:
```bash
npm run worker
```
## Implementation Details
### Chatbot Service (`src/services/chatService.js`)
- Stores conversation in **Redis** for quick access.
- Enqueues messages in **Redis Queue** (`chatQueue`) for background storage.
- Calls the **OpenAI API** to generate responses (see the sketch below).
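A minimal sketch of that flow, assuming the official `openai` Node SDK (v4+) and `ioredis`, is shown below; the function name, Redis key pattern, and model name are illustrative rather than the repository's actual `chatService.js`.

```typescript
import OpenAI from "openai";
import Redis from "ioredis";

const redis = new Redis({
  host: process.env.REDIS_HOST ?? "localhost",
  port: Number(process.env.REDIS_PORT ?? 6379),
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const EXPIRATION_TIME = Number(process.env.EXPIRATION_TIME ?? 1800);

export async function handleUserMessage(conversationId: string, text: string): Promise<string> {
  const historyKey = `chat:history:${conversationId}`;

  // Cached turns are stored as JSON strings like {"role":"user","content":"..."}.
  const history = (await redis.lrange(historyKey, 0, -1)).map((raw) => JSON.parse(raw));

  // Call the OpenAI API with the cached history as context (model name is illustrative).
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [...history, { role: "user", content: text }],
  });
  const reply = completion.choices[0].message.content ?? "";

  // Cache both turns for quick access and enqueue them on `chatQueue` for the worker.
  for (const turn of [
    { role: "user", content: text },
    { role: "assistant", content: reply },
  ]) {
    const payload = JSON.stringify({ conversationId, ...turn, createdAt: new Date().toISOString() });
    await redis.rpush(historyKey, payload);
    await redis.rpush("chatQueue", payload);
  }
  await redis.expire(historyKey, EXPIRATION_TIME);

  return reply;
}
```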
### PostgreSQL Storage Worker (`src/workers/worker.js`)
- Listens to the `chatQueue` in Redis.
- Saves the conversation history to **PostgreSQL** in the background.
- Ensures persistent data storage **without affecting chatbot response time** (sketched below).
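A sketch of this consumer loop is shown below, assuming `ioredis` and Sequelize (the project structure above mentions Sequelize models); the `Message` model and error handling are illustrative rather than the repository's actual `worker.js`.

```typescript
import Redis from "ioredis";
import { Sequelize, DataTypes } from "sequelize";

const redis = new Redis({
  host: process.env.REDIS_HOST ?? "localhost",
  port: Number(process.env.REDIS_PORT ?? 6379),
});

// PostgreSQL connection via Sequelize, using the .env variables above.
const sequelize = new Sequelize(
  process.env.PG_DATABASE ?? "",
  process.env.PG_USER ?? "",
  process.env.PG_PASSWORD ?? "",
  { host: process.env.PG_HOST ?? "localhost", dialect: "postgres" }
);

// Hypothetical model for persisted chat messages (src/models/ may differ).
const Message = sequelize.define("Message", {
  conversationId: { type: DataTypes.STRING, allowNull: false },
  role: { type: DataTypes.STRING, allowNull: false },
  content: { type: DataTypes.TEXT, allowNull: false },
});

async function run(): Promise<void> {
  await sequelize.sync();

  // BLPOP blocks until a message is available on `chatQueue`.
  for (;;) {
    const item = await redis.blpop("chatQueue", 0);
    if (!item) continue;

    const { conversationId, role, content } = JSON.parse(item[1]);
    await Message.create({ conversationId, role, content });
  }
}

run().catch((err) => {
  console.error("Worker failed:", err);
  process.exit(1);
});
```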
## Advantages of This Architecture
- **High Performance**: Fast chatbot responses using the **Redis cache**.
- **Scalability**: The background worker allows **high throughput** for message storage.
- **Resilience**: If PostgreSQL is down, **Redis retains messages** until it's back up.
- **Reliability**: Messages are stored **permanently in PostgreSQL** for long-term history.
## Future Enhancements
- **Dockerization**: Deploy with Docker & Kubernetes.
- **BullMQ Integration**: Replace Redis Queue with BullMQ for better job handling.
- **Multi-user Sessions**: Handle multiple active conversations more efficiently.
## Conclusion
This chatbot microservice balances **performance, scalability, and reliability** by combining **Redis caching** with **PostgreSQL persistence**. The asynchronous **Worker pattern** ensures fast responses while keeping conversation history intact.
Ready to deploy? Let's get started!