# ๐Ÿค– OpsAI: Intelligent IT Support Automation

[![GitHub stars](https://img.shields.io/github/stars/pheonix-19/OpsAI.svg?style=social&label=Star)](https://github.com/pheonix-19/OpsAI)
[![GitHub forks](https://img.shields.io/github/forks/pheonix-19/OpsAI.svg?style=social&label=Fork)](https://github.com/pheonix-19/OpsAI/fork)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![Docker](https://img.shields.io/badge/docker-%230db7ed.svg?style=flat&logo=docker&logoColor=white)](https://www.docker.com/)
[![FastAPI](https://img.shields.io/badge/FastAPI-005571?style=flat&logo=fastapi)](https://fastapi.tiangolo.com/)

> **Transform your IT helpdesk with AI-powered ticket triage and resolution suggestions**

OpsAI is an advanced AI system that revolutionizes IT support operations by automatically categorizing tickets, suggesting solutions, and routing requests to the right teams. Using cutting-edge vector embeddings and fine-tuned language models, it learns from historical data to provide instant, contextual support recommendations.

## ๐Ÿ“ธ **Live Screenshots**

**๐Ÿ–ฅ๏ธ Grafana Dashboard in Action:**
![Grafana OpsAI Dashboard](asset/grafana.png)
*Real-time monitoring dashboard showing API metrics, request rates, and system health*

**๐Ÿ“Š Prometheus Metrics Collection:**
![Prometheus Metrics](asset/promethius1.png)
*Prometheus collecting and displaying OpsAI application metrics*

**โš™๏ธ Prometheus Configuration & Targets:**
![Prometheus Targets](asset/promethius2.png)
*Prometheus monitoring targets and service discovery configuration*

## ๐Ÿ“‹ **Table of Contents**

- [๐Ÿ“ธ Live Screenshots](#-live-screenshots)
- [๐Ÿ—๏ธ System Architecture](#๏ธ-system-architecture--components)
- [๐ŸŽฏ What Problem Does OpsAI Solve?](#-what-problem-does-opsai-solve)
- [โœจ Core Features](#-core-features)
- [๐Ÿš€ Quick Demo](#-quick-demo)
- [๐Ÿ“‹ Prerequisites](#-prerequisites)
- [โšก Installation & Setup](#-installation--setup)
- [๐ŸŽฎ API Endpoints Reference](#-api-endpoints-reference)
- [๐Ÿ“Š Monitoring & Observability](#-monitoring--observability)
- [๐Ÿ“ Project Structure](#-project-structure)
- [๐Ÿ” Security & Secrets Management](#-security--secrets-management)
- [๐Ÿณ Docker & Deployment](#-docker--deployment)
- [๐Ÿ”— Enterprise Integrations](#-enterprise-integrations)
- [๐Ÿงช Testing & Development](#-testing--development)
- [๐Ÿšจ Troubleshooting](#-troubleshooting)
- [๐Ÿš€ Quick Start Guide](#-quick-start-guide)
- [๐Ÿ“š Additional Resources](#-additional-resources)
- [๐Ÿค Contributing](#-contributing)
- [๐Ÿ“„ License & Support](#-license--support)

## ๐Ÿ—๏ธ **System Architecture & Components**

```
                          OpsAI System Architecture

   Users                       IT Teams                       Stakeholders
     │                            │                                │
     └────────────────────────────┼────────────────────────────────┘
                                  ▼
   Integration Layer
     - Jira: webhooks, ticket automation
     - Slack bot: real-time notifications
     - Freshdesk: ticket sync, customer management
     - Custom APIs: REST endpoints for external systems
                                  │
                                  ▼
   FastAPI Server (port 8000)
     - Endpoints: /classify | /resolve | /feedback | /metrics | /docs
                                  │
                                  ▼
   AI/ML Processing Core
     - Vector search: sentence-transformers (all-MiniLM-L6-v2), FAISS index + metadata store
     - Language model: GPT-Neo-125M with LoRA fine-tuning, context-aware generation
     - Classification: labels (auth, network, performance, mail) mapped to teams
       (IT Helpdesk, Engineering)
     - Continuous learning: LoRA adaptation driven by user feedback
                                  │
                                  ▼
   Data Storage Layer
     - Raw data: tickets.csv, tickets.json, historical feeds
     - Processed: normalized ticket_0.json, ticket_1.json, ...
     - Vector index: ticket_index + ticket_meta.pkl for fast similarity search
     - Models: LoRA adapters and fine-tuned checkpoints
                                  │
                                  ▼
   Monitoring & Observability
     - Prometheus (port 9090): request counts, latency, AI performance, system health
     - Grafana (port 3000): performance, error-rate, and business-KPI dashboards
     - Alerting: high error rate, slow responses, service down
     - Logging: API calls, model inference, debug info, audit trail
                                  │
                                  ▼
   Infrastructure Layer
     - Docker Compose: API, Prometheus, and Grafana containers (auto-scaling ready)
     - Python environment: transformers, fastapi, torch, faiss-cpu
     - Hardware: Python 3.11+, 8 GB+ RAM, optional CUDA GPU, 4 GB disk
     - CI/CD: GitHub Actions for testing, builds, and deployment

   Data flow:
     Tickets  → Integration → API → AI processing → Data storage → Monitoring
     Feedback ← Solutions   ← Intelligence ← Training ← Analytics ← Metrics
```

## ๐ŸŽฏ **What Problem Does OpsAI Solve?**

### **Before OpsAI (Traditional IT Support):**
```
User reports issue → Manual ticket review → Search past solutions → Assign to team → Resolution
⏱️ Hours/days        💰 High cost        🔍 Time-intensive        👥 Manual routing
```

### **With OpsAI (AI-Powered Support):**
```
User reports issue → AI instant analysis → Auto-suggested solution → Smart team routing → Fast resolution
⚡ Seconds           💰 Cost efficient     🧠 AI-powered            🎯 Accurate routing
```

## โœจ **Core Features**

| Feature | Description | Business Impact |
|---------|-------------|-----------------|
| ๐ŸŽฏ **Smart Classification** | AI categorizes tickets by type (auth, network, performance) | Automatic team routing |
| ๐Ÿง  **Resolution Suggestions** | Generates solutions based on similar past cases | Faster problem solving |
| ๐Ÿ” **Semantic Search** | Finds relevant tickets using AI understanding, not just keywords | Better context matching |
| ๐Ÿ“Š **Real-time Monitoring** | Prometheus metrics + Grafana dashboards | System health visibility |
| ๐Ÿ”— **Enterprise Integration** | Connects with Jira, Slack, Freshdesk | Seamless workflow integration |
| ๐ŸŽ“ **Continuous Learning** | LoRA fine-tuning adapts to your organization | Improving accuracy over time |
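
For a concrete sense of what semantic matching means here, the snippet below scores a new ticket against a few past tickets with the same `all-MiniLM-L6-v2` model the project uses. It is a standalone illustration, not code from this repository:

```python
# Minimal illustration of semantic ticket matching (standalone sketch, not repo code).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

query = "Cannot access email, getting authentication errors"
past_tickets = [
    "User locked out of Outlook after repeated login failures",
    "VPN tunnel drops every few minutes on the office network",
    "Disk space alert on the backup server",
]

# Encode query and candidates into dense vectors, then rank by cosine similarity.
query_vec = model.encode(query, convert_to_tensor=True)
ticket_vecs = model.encode(past_tickets, convert_to_tensor=True)
scores = util.cos_sim(query_vec, ticket_vecs)[0]

for ticket, score in sorted(zip(past_tickets, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.2f}  {ticket}")
```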

## ๐Ÿš€ **Quick Demo**

### **Example 1: Ticket Classification**
```bash
curl -X POST "http://localhost:8000/classify" \
-H "Content-Type: application/json" \
-d '{"text": "Cannot access email, getting authentication errors"}'
```
**Response:**
```json
{
  "tags": ["auth", "mail", "user"],
  "teams": ["IT Helpdesk"]
}
```

### **Example 2: AI Resolution Suggestion**
```bash
curl -X POST "http://localhost:8000/resolve" \
-H "Content-Type: application/json" \
-d '{"text": "Database connection timeout in production"}'
```
**Response:**
```json
{
  "suggestion": "Check database connection pool settings and increase timeout values...",
  "context_tickets": [{"title": "Similar DB issue", "resolution": "..."}]
}
```
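
The `/resolve` response above is the output of a retrieve-then-generate flow: embed the new ticket, pull similar past tickets from the FAISS index, and let the language model draft a suggestion from that context. The sketch below illustrates the idea; function names, metadata fields, and generation settings are assumptions for illustration, not the repository's exact implementation:

```python
# Conceptual sketch of the retrieve-then-generate flow behind /resolve.
# Names and metadata fields are illustrative, not the repo's API.
import pickle

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
generator = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

def suggest_resolution(ticket_text, index, ticket_meta, top_k=3):
    """Embed the ticket, pull similar past tickets from FAISS, and generate a suggestion."""
    query = embedder.encode([ticket_text]).astype(np.float32)
    _, ids = index.search(query, top_k)                      # nearest past tickets
    context = [ticket_meta[i] for i in ids[0] if i != -1]    # assumed title/resolution fields

    prompt = "Past tickets:\n" + "\n".join(
        f"- {t['title']}: {t['resolution']}" for t in context
    ) + f"\n\nNew ticket: {ticket_text}\nSuggested resolution:"

    inputs = tokenizer(prompt, return_tensors="pt")
    output = generator.generate(**inputs, max_new_tokens=80, do_sample=False,
                                pad_token_id=tokenizer.eos_token_id)
    suggestion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    return {"suggestion": suggestion.strip(), "context_tickets": context}

if __name__ == "__main__":
    index = faiss.read_index("data/index/ticket_index")
    with open("data/index/ticket_meta.pkl", "rb") as f:
        ticket_meta = pickle.load(f)
    print(suggest_resolution("Database connection timeout in production", index, ticket_meta))
```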

## ๐Ÿ“‹ **Prerequisites**

- **Python 3.11+** (tested with 3.12.3)
- **8GB+ RAM** (for AI model inference)
- **Docker & Docker Compose** (for full stack deployment)
- **CUDA-compatible GPU** (optional, for faster inference)
- **4GB disk space** (for models and vector index)

## โšก **Installation & Setup**

### **Method 1: Local Development (Recommended for Testing)**

1. **Clone and Setup Environment:**
```bash
git clone https://github.com/pheonix-19/OpsAI.git
cd OpsAI

# Create virtual environment
python3 -m venv env
source env/bin/activate # Linux/macOS
# env\Scripts\activate # Windows

# Install dependencies
pip install -r requirements.txt
pip install -e .
```

2. **Process Sample Data and Build AI Index:**
```bash
# Process the included sample tickets
PYTHONPATH=. python -m src.ingestion.ingest data/raw data/processed

# Build vector embeddings index for semantic search
PYTHONPATH=. python -m src.embeddings.build_index --input-dir data/processed --output-dir data/index
```
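
Conceptually, the index-building step embeds each processed ticket and stores the vectors plus metadata in FAISS. A minimal sketch of that idea, assuming `title`/`description` fields in the processed JSON (the actual `src.embeddings.build_index` module may differ in details):

```python
# Minimal sketch of building a FAISS index over processed tickets (illustrative only).
import json
import pickle
from pathlib import Path

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Load normalized tickets (field names assumed for illustration).
tickets = [json.loads(p.read_text())
           for p in sorted(Path("data/processed").glob("ticket_*.json"))]
texts = [f"{t.get('title', '')} {t.get('description', '')}" for t in tickets]

# all-MiniLM-L6-v2 produces 384-dimensional embeddings.
vectors = model.encode(texts, convert_to_numpy=True).astype(np.float32)
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

Path("data/index").mkdir(parents=True, exist_ok=True)
faiss.write_index(index, "data/index/ticket_index")
with open("data/index/ticket_meta.pkl", "wb") as f:
    pickle.dump(tickets, f)   # metadata looked up by row id at query time
```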

3. **Start the API Server:**
```bash
PYTHONPATH=. uvicorn src.api.main:app --host 0.0.0.0 --port 8000 --reload
```

4. **Access the System:**
- **API Documentation**: http://localhost:8000/docs
- **Metrics Endpoint**: http://localhost:8000/metrics

### **Method 2: Full Production Stack (Docker)**

```bash
# Start complete monitoring stack
docker-compose up --build

# Access services:
# - OpsAI API: http://localhost:8000
# - Prometheus: http://localhost:9090
# - Grafana: http://localhost:3000 (admin/admin)
```

## ๐ŸŽฎ **API Endpoints Reference**

| Endpoint | Method | Purpose | Example Use Case |
|----------|--------|---------|------------------|
| `/` | GET | Health check | Service monitoring |
| `/classify` | POST | Categorize tickets | Auto-route to teams |
| `/resolve` | POST | Get AI suggestions | Provide solutions |
| `/feedback` | POST | Submit user ratings | Improve AI accuracy |
| `/metrics` | GET | Prometheus metrics | System monitoring |

### **Detailed API Usage:**

#### **๐ŸŽฏ Classify Tickets**
```bash
curl -X POST "http://localhost:8000/classify" \
  -H "Content-Type: application/json" \
  -d '{
        "text": "Server not responding to ping requests",
        "top_k": 3
      }'
```

#### **๐Ÿง  Get AI Resolutions**
```bash
curl -X POST "http://localhost:8000/resolve" \
  -H "Content-Type: application/json" \
  -d '{
        "text": "Application crashes when uploading large files",
        "top_k": 5
      }'
```

#### **๐Ÿ“ Submit Feedback**
```bash
curl -X POST "http://localhost:8000/feedback" \
  -H "Content-Type: application/json" \
  -d '{
        "ticket": {"title": "Login issue", "description": "Cannot access system"},
        "suggestion": "Reset password and clear browser cache",
        "rating": 5,
        "comment": "Perfect solution, worked immediately!"
      }'
```
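
If you prefer Python over `curl`, the same three endpoints can be driven with a small `requests`-based helper. This client is illustrative and not shipped with the repository:

```python
# Small illustrative client for the OpsAI API (not part of the repo).
import requests

BASE_URL = "http://localhost:8000"

def classify(text, top_k=3):
    r = requests.post(f"{BASE_URL}/classify", json={"text": text, "top_k": top_k}, timeout=30)
    r.raise_for_status()
    return r.json()

def resolve(text, top_k=5):
    r = requests.post(f"{BASE_URL}/resolve", json={"text": text, "top_k": top_k}, timeout=60)
    r.raise_for_status()
    return r.json()

def send_feedback(ticket, suggestion, rating, comment=""):
    payload = {"ticket": ticket, "suggestion": suggestion, "rating": rating, "comment": comment}
    r = requests.post(f"{BASE_URL}/feedback", json=payload, timeout=30)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    print(classify("Server not responding to ping requests"))
    print(resolve("Application crashes when uploading large files"))
```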

## ๐Ÿ“Š **Monitoring & Observability**

### **๐Ÿ” Prometheus Metrics**
OpsAI automatically tracks comprehensive performance metrics:

```bash
# View current metrics
curl http://localhost:8000/metrics | grep opsai

# Example metrics output:
opsai_requests_total{endpoint="/classify",method="POST"} 5.0
opsai_request_latency_seconds_sum{endpoint="/resolve"} 2.28
```
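
These series are standard `prometheus_client` counters and histograms. As a rough sketch of how such instrumentation is typically wired into FastAPI (label names follow the output above; the actual `src/monitoring` code may be organized differently):

```python
# Sketch of FastAPI middleware exposing opsai_* metrics (illustrative, may differ from src/monitoring).
import time

from fastapi import FastAPI, Request, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

REQUESTS = Counter("opsai_requests_total", "Total API requests",
                   ["endpoint", "method", "http_status"])
LATENCY = Histogram("opsai_request_latency_seconds", "Request latency", ["endpoint"])

app = FastAPI()

@app.middleware("http")
async def record_metrics(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    LATENCY.labels(endpoint=request.url.path).observe(time.perf_counter() - start)
    REQUESTS.labels(endpoint=request.url.path, method=request.method,
                    http_status=str(response.status_code)).inc()
    return response

@app.get("/metrics")
def metrics():
    # Render all registered metrics in the Prometheus text exposition format.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```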

**Key Metrics Tracked:**
- **Request Volume**: API calls per endpoint per second
- **Response Times**: Latency percentiles (50th, 90th, 99th)
- **Error Rates**: Failed requests and status codes
- **AI Performance**: Model inference times
- **Business KPIs**: Total tickets processed

### **๐Ÿ“ˆ Grafana Dashboards** โœ… **CONFIGURED & WORKING**

**โœ… Active Dashboards:**
1. **OpsAI Monitoring Dashboard** - Real-time API metrics
2. **Prometheus 2.0 Stats** - System performance monitoring
3. **Prometheus Stats** - Infrastructure metrics

**๐Ÿ“Š Access:** http://localhost:3000 (admin/admin)

**Dashboard Features:**
- ๐Ÿ“Š **Total API Requests**: Live request tracking
- โฑ๏ธ **Request Rate**: Real-time requests per minute
- ๐Ÿšจ **HTTP Status Codes**: Success vs Error monitoring
- ๐Ÿ“ˆ **Endpoint Breakdown**: Usage analytics by endpoint
- ๐Ÿฅง **Visual Analytics**: Interactive charts and tables

### **๐Ÿ” Prometheus Query Examples**
Essential queries for monitoring (see `PROMETHEUS_QUERIES.md` for complete reference):

```promql
# Basic metrics
sum(opsai_requests_total) by (endpoint) # Total requests by endpoint
rate(opsai_requests_total[5m]) # Request rate per second

# Performance monitoring
avg(opsai_request_latency_seconds) by (endpoint) # Average response time
histogram_quantile(0.95, rate(opsai_request_latency_seconds_bucket[5m])) # 95th percentile
```
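
The same queries can be run programmatically against Prometheus's HTTP API (`/api/v1/query`, the endpoint also used by the verification commands later in this README). A small illustrative helper:

```python
# Query Prometheus's HTTP API for OpsAI request counts (illustrative helper).
import requests

PROMETHEUS_URL = "http://localhost:9090"

def instant_query(promql):
    r = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql}, timeout=10)
    r.raise_for_status()
    return r.json()["data"]["result"]

for series in instant_query("sum(opsai_requests_total) by (endpoint)"):
    endpoint = series["metric"].get("endpoint", "unknown")
    value = series["value"][1]          # value is [timestamp, value-as-string]
    print(f"{endpoint}: {value} requests")
```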

## ๐Ÿ“ **Project Structure**
```
opsai/
├── src/                  # Core application code
│   ├── api/              # FastAPI endpoints
│   ├── embeddings/       # Vector search & FAISS
│   ├── ingestion/        # Data processing
│   ├── integrations/     # External APIs (Jira, Slack)
│   ├── model_training/   # AI model fine-tuning
│   └── monitoring/       # Prometheus metrics
├── data/                 # Training data & indexes
├── models/               # LoRA adapters & weights
├── tests/                # Test suite
└── infra/                # Docker & monitoring configs
```

## ๐Ÿ” **Security & Secrets Management**

### **๐Ÿšจ Important: Managing Secrets in Public Repositories**

โš ๏ธ **NEVER commit actual secrets to your repository!** This guide shows you how to securely manage environment variables and API keys for both local development and CI/CD.

#### **๐Ÿ“‹ Required vs Optional Credentials**

| **Credential** | **Required For** | **Default Behavior** |
|---------------|------------------|---------------------|
| `DATABASE_URL` | Database connection | โœ… Defaults to local SQLite |
| `OPENAI_API_KEY` | OpenAI features | โš ๏ธ Optional - features disabled if missing |
| `HUGGINGFACE_API_TOKEN` | Model downloads | โš ๏ธ Optional - uses cached/local models |
| `JIRA_API_TOKEN` | JIRA integration | โš ๏ธ Only if using JIRA |
| `SLACK_BOT_TOKEN` | Slack bot | โš ๏ธ Only if using Slack |
| `FRESHDESK_API_KEY` | Freshdesk integration | โš ๏ธ Only if using Freshdesk |
| `DOCKERHUB_USER/TOKEN` | CI/CD deployment | โš ๏ธ Only for Docker Hub publishing |

#### **๐Ÿ›ก๏ธ Local Development Setup**

1. **Copy environment template:**
```bash
cp .env.example .env
```

2. **Edit `.env` with your actual values (NEVER commit this file):**
```bash
# Required only if using specific integrations
JIRA_URL="https://your-company.atlassian.net"
JIRA_USER="your-email@company.com"
JIRA_API_TOKEN="your_new_jira_token_here"

SLACK_BOT_TOKEN="xoxb-your-slack-bot-token-here"
SLACK_APP_TOKEN="xapp-your-slack-app-token-here"

# Optional - for enhanced AI features
OPENAI_API_KEY="sk-your-openai-key-here"
HUGGINGFACE_API_TOKEN="hf_your-token-here"
```

3. **The `.env` file is automatically ignored by git** (included in `.gitignore`)

#### **๐Ÿ”‘ GitHub Secrets for CI/CD**

For GitHub Actions to work with your secrets:

1. **Go to GitHub Repository Settings**
2. **Navigate to:** Settings โ†’ Secrets and variables โ†’ Actions
3. **Add these secrets** (only the ones you need):

```
# Docker deployment (required for CI/CD)
DOCKERHUB_USER=your_dockerhub_username
DOCKERHUB_TOKEN=your_dockerhub_access_token

# Integration secrets (optional)
JIRA_API_TOKEN=your_jira_token
SLACK_BOT_TOKEN=your_slack_token
FRESHDESK_API_KEY=your_freshdesk_key
```

#### **โœ… Security Best Practices Implemented**

- โœ… **No secrets in source code** - All credentials from environment variables
- โœ… **Secure config validation** - `src/config.py` handles missing secrets gracefully
- โœ… **Environment isolation** - Production vs development detection
- โœ… **CI/CD ready** - GitHub Actions configured with proper secret injection
- โœ… **Optional integrations** - Core functionality works without external APIs

#### **๐Ÿ”ง Security Configuration Files**

**Key files for security:**
- `.env.example` - Template with placeholder values (safe to commit)
- `src/config.py` - Secure configuration management (see the sketch after this list)
- `.gitignore` - Ensures `.env` files are never committed
- `SECURITY.md` - Complete security guidelines
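
As a rough illustration of the "handle missing secrets gracefully" pattern, a config module of this kind often looks like the sketch below (assumes `python-dotenv` for local `.env` loading; the real `src/config.py` may differ):

```python
# Illustrative sketch of graceful, env-driven configuration (actual src/config.py may differ).
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read .env during local development; no-op if the file is absent

# Required-with-default: falls back to a local SQLite database.
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./opsai.db")

# Optional integrations: a missing value disables the feature instead of crashing.
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
HUGGINGFACE_API_TOKEN = os.getenv("HUGGINGFACE_API_TOKEN")
JIRA_API_TOKEN = os.getenv("JIRA_API_TOKEN")
SLACK_BOT_TOKEN = os.getenv("SLACK_BOT_TOKEN")
FRESHDESK_API_KEY = os.getenv("FRESHDESK_API_KEY")

JIRA_ENABLED = bool(JIRA_API_TOKEN)
SLACK_ENABLED = bool(SLACK_BOT_TOKEN)
FRESHDESK_ENABLED = bool(FRESHDESK_API_KEY)
```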

### **๐Ÿšจ Token Security Checklist**

- [ ] All real tokens removed from version control
- [ ] `.env` file exists locally with actual values
- [ ] GitHub secrets configured for CI/CD
- [ ] Old/exposed tokens revoked and regenerated
- [ ] Team members trained on security practices

## ๐Ÿณ **Docker & Deployment**

### **๐Ÿ› ๏ธ Fixed Docker Build Issues**

**Common Docker problems and solutions implemented:**

#### **โŒ Problem: Package Version Conflicts**
```
ERROR: Could not find a version that satisfies the requirement tokenizers==0.21.2
ERROR: No matching distribution found for SQLAlchemy==2.0.23
```

#### **โœ… Solution: Flexible Version Ranges**
Updated `requirements.txt` to use compatible version ranges instead of pinned versions:

```python
# Before (problematic)
tokenizers==0.21.2
SQLAlchemy==2.0.23

# After (working)
tokenizers>=0.13.0,<1.0.0
SQLAlchemy>=1.4.0,<3.0.0
```

#### **โŒ Problem: Network Timeouts During Build**
```
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool: Read timed out
```

#### **โœ… Solution: Enhanced Dockerfile**
```dockerfile
# Install with increased timeout and retries
RUN pip install --no-cache-dir \
    --timeout 1000 \
    --retries 5 \
    --default-timeout=1000 \
    -r requirements.txt
```

### **๐Ÿš€ Deployment Options**

#### **Option 1: Quick Development Setup**
```bash
# Minimal setup for development
cp requirements-minimal.txt requirements.txt
docker-compose up --build
```

#### **Option 2: Full Production Stack**
```bash
# Complete setup with all features
docker-compose up --build
```

#### **Option 3: Retry Script (Handles Network Issues)**
```bash
# Automated retry with fallback to minimal setup
./docker-build.sh
```

### **๐Ÿ“ฆ Docker Services Overview**

| **Service** | **Port** | **Purpose** | **Health Check** |
|-------------|----------|-------------|------------------|
| `opsai-api` | 8000 | Main application | `curl localhost:8000/` |
| `prometheus` | 9090 | Metrics collection | `curl localhost:9090/-/healthy` |
| `grafana` | 3000 | Monitoring dashboards | `curl localhost:3000/api/health` |

### **๐Ÿ”ง Docker Troubleshooting**

**Check service status:**
```bash
docker-compose ps
docker-compose logs api
```

**Restart specific service:**
```bash
docker-compose restart api
docker-compose restart prometheus
```

**Clean rebuild:**
```bash
docker-compose down
docker system prune -f
docker-compose build --no-cache
docker-compose up
```

## ๐Ÿ“Š **Monitoring & Metrics - Complete Setup Guide**

### **๐ŸŽฏ Prometheus Configuration**

**โœ… Working Prometheus Setup:**

```yaml
# infra/prometheus/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'        # Self-monitoring
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'opsai_api'         # Application monitoring
    static_configs:
      - targets: ['api:8000']
```

### **๐Ÿ“ˆ Grafana Dashboard Setup**

**โœ… Auto-configured Grafana features:**

1. **Data Source**: Prometheus auto-configured at `http://prometheus:9090`
2. **Dashboards**: Pre-built OpsAI monitoring dashboard
3. **Provisioning**: Automatic setup via configuration files

**Access:** http://localhost:3000 (admin/admin)

### **๐Ÿ“Š Available Metrics & Queries**

#### **๐Ÿš€ OpsAI Application Metrics**

**โœ… Confirmed Working Queries:**

```promql
# Instant metrics (always show data)
opsai_requests_total # Total API requests
process_resident_memory_bytes{job="opsai_api"} # Memory usage
time() - process_start_time_seconds{job="opsai_api"} # Uptime
up{job="opsai_api"} # Service availability
python_gc_objects_collected_total{job="opsai_api"} # Python metrics

# Aggregated metrics
sum by (endpoint) (opsai_requests_total) # Requests by endpoint
sum by (http_status) (opsai_requests_total) # Requests by status code
```

#### **๐Ÿ“ˆ Rate-based Metrics (Need Traffic)**

```promql
# Generate traffic first: ./generate-traffic.sh
rate(opsai_requests_total[5m]) # Request rate
rate(process_cpu_seconds_total{job="opsai_api"}[5m]) * 100 # CPU usage
histogram_quantile(0.95, rate(opsai_request_latency_seconds_bucket[5m])) # 95th percentile latency
```

### **๐Ÿ” Testing Metrics**

**Generate test traffic:**
```bash
# Continuous traffic generation
./generate-traffic.sh

# Or manual testing
for i in {1..20}; do
  curl -s http://localhost:8000/ > /dev/null
  curl -s http://localhost:8000/docs > /dev/null
  sleep 1
done
```

**Verify metrics in Prometheus:**
```bash
# Check if metrics are being collected
curl -s "http://localhost:9090/api/v1/query?query=opsai_requests_total" | jq '.data.result | length'

# Test specific queries
curl -s "http://localhost:9090/api/v1/query?query=up{job=\"opsai_api\"}"
```

### **๐Ÿ“‹ Grafana Dashboard Features**

**Working dashboard panels:**
- ๐Ÿ“Š **Total API Requests**: Real-time request count
- โฑ๏ธ **Request Rate**: Requests per minute over time
- ๐Ÿฅง **HTTP Status Codes**: Success vs error breakdown
- ๐Ÿ“ˆ **Request Latency**: Response time percentiles
- ๐Ÿ’พ **Memory Usage**: RAM consumption tracking
- โฐ **Service Uptime**: Time since last restart

### **๐Ÿšจ Monitoring Troubleshooting**

**If Grafana shows "No Data":**

1. **Check Prometheus targets:**
```bash
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'
```

2. **Verify data source in Grafana:**
- URL should be: `http://prometheus:9090`
- Click "Save & Test" - should show green "Data source is working"

3. **Test simple queries in Grafana:**
- Start with: `opsai_requests_total`
- Set time range to "Last 15 minutes"
- Enable auto-refresh (5s)

4. **Generate traffic if needed:**
```bash
./generate-traffic.sh
```

### **๐Ÿ“ˆ Custom Dashboard Creation**

**Manual dashboard setup:**
1. Go to Grafana โ†’ "+" โ†’ Dashboard โ†’ Add new panel
2. Enter query: `opsai_requests_total`
3. Set visualization type (Time series, Stat, etc.)
4. Configure time range and refresh interval
5. Save dashboard

### **๐Ÿ”ง Monitoring Best Practices**

- โœ… **Start simple**: Use instant metrics first (`opsai_requests_total`)
- โœ… **Generate traffic**: Use `./generate-traffic.sh` for rate metrics
- โœ… **Check time ranges**: Use "Last 15 minutes" for recent data
- โœ… **Verify targets**: Ensure Prometheus is scraping successfully
- โœ… **Test queries**: Use Prometheus UI to validate queries before Grafana

## ๐Ÿ“š **Step-by-Step Setup Walkthrough**

### **๐Ÿš€ Complete Setup from Scratch**

#### **1. Repository Setup**
```bash
git clone https://github.com/pheonix-19/OpsAI.git
cd OpsAI
```

#### **2. Security Configuration**
```bash
# Copy environment template
cp .env.example .env

# Edit .env with your actual credentials (optional)
nano .env

# Verify .env is in .gitignore
grep -q "^\.env$" .gitignore && echo "โœ… .env properly ignored"
```

#### **3. Docker Build (with retry logic)**
```bash
# Method 1: Automated retry script
chmod +x docker-build.sh
./docker-build.sh

# Method 2: Manual build
docker-compose up --build

# Method 3: Minimal build (if having issues)
cp requirements-minimal.txt requirements.txt
docker-compose up --build
```

#### **4. Verify Services**
```bash
# Check all services are running
docker-compose ps

# Test API
curl http://localhost:8000/

# Test metrics endpoint
curl http://localhost:8000/metrics | head -10

# Check Prometheus targets
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].health'
```

#### **5. Setup Monitoring**
```bash
# Generate test traffic
./generate-traffic.sh &

# Open Grafana (admin/admin)
open http://localhost:3000

# Open Prometheus
open http://localhost:9090
```

#### **6. Test AI Features**
```bash
# Test classification
curl -X POST "http://localhost:8000/classify" \
-H "Content-Type: application/json" \
-d '{"text": "Cannot login to email account"}'

# Test resolution suggestions
curl -X POST "http://localhost:8000/resolve" \
-H "Content-Type: application/json" \
-d '{"text": "Database connection timeout error"}'
```

### **๐Ÿ”ง CI/CD Setup**

#### **GitHub Actions Configuration**

Your repository includes automated CI/CD with these workflows:

**`.github/workflows/ci.yml`** - Tests and builds on every push:
```yaml
# Automatically runs:
- Python linting with flake8
- Test suite with pytest
- Docker image build
- Deployment to Docker Hub (if secrets configured)
```

**`.github/workflows/retrain.yml`** - Scheduled model retraining:
```yaml
# Runs weekly to:
- Retrain AI models with new data
- Update LoRA adapters
- Upload new model artifacts
```
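
For context, a LoRA fine-tuning step with `transformers` and `peft` typically looks like the sketch below. Hyperparameters, the toy dataset, and the GPT-Neo target-module names are illustrative assumptions, not the repository's exact retraining script:

```python
# Hedged sketch of a LoRA fine-tuning step with peft (not the repo's exact retraining script).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections (module names assumed for GPT-Neo).
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
                      target_modules=["q_proj", "k_proj", "v_proj", "out_proj"])
model = get_peft_model(model, lora_cfg)

# Toy training corpus of "ticket -> resolution" text; real data would come from data/processed.
examples = ["Ticket: VPN keeps disconnecting\nResolution: Reinstall the VPN client and renew the certificate."]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = Dataset.from_dict({"text": examples}).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="models/lora-adapter", num_train_epochs=1,
                           per_device_train_batch_size=1, logging_steps=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("models/lora-adapter")   # writes only the LoRA adapter weights
```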

#### **Required GitHub Secrets for CI/CD**

**Minimal setup (for basic CI/CD):**
```
DOCKERHUB_USER=your_dockerhub_username
DOCKERHUB_TOKEN=your_dockerhub_access_token
```

**Full setup (for all integrations):**
```
JIRA_API_TOKEN=your_jira_token
SLACK_BOT_TOKEN=your_slack_token
FRESHDESK_API_KEY=your_freshdesk_key
```

### **๐Ÿ“ Configuration Files Reference**

| **File** | **Purpose** | **When to Edit** |
|----------|-------------|------------------|
| `.env.example` | Template for environment variables | Never (contains placeholders) |
| `.env` | Your actual secrets (not in git) | Add your real credentials |
| `src/config.py` | Configuration management | Customize app settings |
| `requirements.txt` | Python dependencies | Add new packages |
| `docker-compose.yml` | Service orchestration | Modify ports/volumes |
| `infra/prometheus/prometheus.yml` | Metrics collection | Add monitoring targets |

### **๐ŸŽฏ Quick Validation Checklist**

- [ ] Services start: `docker-compose ps` shows all running
- [ ] API responds: `curl http://localhost:8000/` returns JSON
- [ ] Metrics work: `curl http://localhost:8000/metrics` shows data
- [ ] Prometheus scraping: Targets page shows "UP" status
- [ ] Grafana connected: Data source test succeeds
- [ ] Dashboards show data: Generate traffic and verify graphs
- [ ] AI features work: Classification and resolution endpoints respond
- [ ] Security configured: No real secrets in git, `.env` properly ignored

This comprehensive setup ensures your OpsAI deployment is secure, monitored, and production-ready! ๐ŸŽ‰

## ๐Ÿ”— **Enterprise Integrations**

### **๐Ÿ“‹ Jira Integration**
```bash
# Environment variables for Jira
JIRA_URL=https://your-domain.atlassian.net
JIRA_USER=your-email@company.com
JIRA_API_TOKEN=your-api-token

# Auto-process tickets from Jira webhooks
# POST /jira/webhook - Receives ticket updates
```
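
As a hedged sketch of what the webhook receiver can look like on the FastAPI side (payload fields follow Jira's webhook format; the actual handler in `src/integrations` may differ):

```python
# Illustrative /jira/webhook receiver (the repo's src/integrations handler may differ).
import requests
from fastapi import FastAPI

app = FastAPI()

@app.post("/jira/webhook")
def jira_webhook(payload: dict):
    # Jira webhooks carry the issue under payload["issue"]["fields"].
    fields = payload.get("issue", {}).get("fields", {})
    text = f"{fields.get('summary', '')} {fields.get('description') or ''}".strip()

    # Re-use the running OpsAI /classify endpoint to tag and route the issue.
    result = requests.post("http://localhost:8000/classify",
                           json={"text": text}, timeout=30).json()
    return {"issue_key": payload.get("issue", {}).get("key"),
            "tags": result.get("tags"), "teams": result.get("teams")}
```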

### **๐Ÿ’ฌ Slack Bot Integration**
```bash
# Slack bot configuration
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token

# Start the Slack bot
python src/integrations/slack_bot.py
```
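
Conceptually, a Socket Mode bot that forwards mentions to `/resolve` can be as small as the sketch below. It assumes the `slack_bolt` package and the two tokens above, and is not the repository's exact `slack_bot.py`:

```python
# Illustrative Socket Mode Slack bot (assumes slack_bolt; not the repo's exact slack_bot.py).
import os

import requests
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    # Forward the mention text to OpsAI and reply with the suggested resolution.
    text = event.get("text", "")
    result = requests.post("http://localhost:8000/resolve",
                           json={"text": text}, timeout=60).json()
    say(f"Suggested resolution:\n{result.get('suggestion', 'No suggestion available.')}")

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```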

## ๐Ÿงช **Testing & Development**

### **Run Test Suite:**
```bash
# Run all tests
pytest

# Run specific modules
pytest tests/test_api.py # API endpoint tests
pytest tests/test_embeddings.py # Vector search tests
pytest tests/test_ingestion.py # Data processing tests
```
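
New tests can follow the same pattern. For example, a minimal endpoint test with FastAPI's `TestClient` might look like this (illustrative; the existing tests may be structured differently):

```python
# Illustrative API test using FastAPI's TestClient (existing tests/ may differ).
from fastapi.testclient import TestClient

from src.api.main import app

client = TestClient(app)

def test_health_check():
    response = client.get("/")
    assert response.status_code == 200

def test_classify_returns_tags():
    response = client.post("/classify", json={"text": "Cannot access email, authentication errors"})
    assert response.status_code == 200
    body = response.json()
    assert "tags" in body and "teams" in body
```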

### **Development Workflow:**
```bash
# Hot reload during development
PYTHONPATH=. uvicorn src.api.main:app --reload --port 8000

# Process new training data (drop new ticket files into data/raw first)
PYTHONPATH=. python -m src.ingestion.ingest data/raw data/processed

# Rebuild search index
PYTHONPATH=. python -m src.embeddings.build_index --input-dir data/processed --output-dir data/index
```

## ๐Ÿšจ **Troubleshooting**

### **Common Issues & Solutions**

**๐Ÿ”ง Device Mismatch Error:**
```
RuntimeError: Expected all tensors to be on the same device
```
**Solution:** โœ… Fixed in latest version - tensors automatically moved to correct device

**๐Ÿ”ง Import Errors:**
```
ImportError: attempted relative import with no known parent package
```
**Solution:** Use `PYTHONPATH=. python -m src.module.script`

**๐Ÿ”ง Port Already in Use:**
```
OSError: [Errno 98] Address already in use
```
**Solution:** Use different port: `--port 8001` or kill existing process

### **Debugging Commands**
```bash
# Check API health
curl http://localhost:8000/

# View current metrics
curl http://localhost:8000/metrics | grep opsai

# Check Docker services
docker-compose ps
```

## ๐Ÿš€ **Quick Start Guide**

**1. Test Basic Classification:**
```bash
curl -X POST "http://localhost:8000/classify" \
-H "Content-Type: application/json" \
-d '{"text": "Password reset needed for user account"}'
```

**2. Get AI-Powered Solutions:**
```bash
curl -X POST "http://localhost:8000/resolve" \
-H "Content-Type: application/json" \
-d '{"text": "Email server connection timeout"}'
```

**3. Provide Feedback for Learning:**
```bash
curl -X POST "http://localhost:8000/feedback" \
  -H "Content-Type: application/json" \
  -d '{
        "ticket": {"title": "Login issue"},
        "suggestion": "Reset password",
        "rating": 5,
        "comment": "Perfect solution!"
      }'
```

## ๐Ÿ“š **Additional Resources**

### **๐Ÿ”— Useful Links**
- **๐Ÿ“– API Documentation**: http://localhost:8000/docs (when running)
- **๐Ÿ“Š Monitoring**: http://localhost:3000 (Grafana dashboards)
- **๐Ÿ” Metrics**: http://localhost:9090 (Prometheus)
- **๐Ÿ“‹ Query Reference**: See `PROMETHEUS_QUERIES.md` for complete monitoring guide
- **๐Ÿ› Issues**: https://github.com/pheonix-19/OpsAI/issues
- **๐Ÿ’ฌ Discussions**: https://github.com/pheonix-19/OpsAI/discussions

### **๐Ÿ“– Technical Stack**
- **Vector Embeddings**: `sentence-transformers/all-MiniLM-L6-v2`
- **Language Model**: `EleutherAI/gpt-neo-125M` with LoRA fine-tuning
- **Search Index**: FAISS (Facebook AI Similarity Search)
- **Monitoring**: Prometheus + Grafana stack
- **API Framework**: FastAPI with automatic OpenAPI docs

## ๐Ÿค **Contributing**

We welcome contributions! Here's how to get started:

### **๐Ÿ› ๏ธ Development Setup**
```bash
# Fork the repository
git clone https://github.com/your-username/OpsAI.git
cd OpsAI

# Create feature branch
git checkout -b feature/amazing-improvement

# Make changes and test
pytest
pre-commit run --all-files

# Submit pull request
git push origin feature/amazing-improvement
```

### **๐ŸŽฏ Contribution Areas**
- ๐Ÿ› **Bug Fixes**: Fix issues and improve stability
- โœจ **New Features**: Add integrations, UI improvements, ML enhancements
- ๐Ÿ“š **Documentation**: Improve guides, examples, and API docs
- ๐Ÿงช **Testing**: Add test coverage and performance benchmarks

## ๐Ÿ“„ **License & Support**

### **๐Ÿ“œ License**
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.

### **๐Ÿ†˜ Getting Help**

**For Questions:**
1. ๐Ÿ“– Check this README and API documentation first
2. ๐Ÿ” Search existing [GitHub issues](https://github.com/pheonix-19/OpsAI/issues)
3. ๐Ÿ’ฌ Start a [GitHub discussion](https://github.com/pheonix-19/OpsAI/discussions)
4. ๐Ÿ› Create a new issue with detailed information

**For Bugs:**
Include in your issue:
- Python version and OS
- Complete error message and stack trace
- Steps to reproduce the problem
- Expected vs actual behavior

### **๐Ÿ™ Acknowledgments**

- **Hugging Face**: For transformer models and libraries
- **FastAPI**: For the excellent web framework
- **Prometheus & Grafana**: For monitoring and observability
- **FAISS**: For efficient vector similarity search
- **OpenAI/EleutherAI**: For foundation language models

---

## ๐ŸŽ‰ **Ready to Transform Your IT Support?**

OpsAI is production-ready and has been tested with real-world IT scenarios. Start with the sample data, then gradually add your organization's historical tickets to improve accuracy.

**Get started in 5 minutes:**
```bash
git clone https://github.com/pheonix-19/OpsAI.git
cd OpsAI
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
pip install -e .
PYTHONPATH=. uvicorn src.api.main:app --reload
```