Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/srimoyee1212/Kube-GPT-Agent
Use an AI agent to get information about your Kubernetes deployments!
- Host: GitHub
- URL: https://github.com/srimoyee1212/Kube-GPT-Agent
- Owner: srimoyee1212
- Created: 2024-10-28T06:26:43.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2024-10-29T06:12:51.000Z (2 months ago)
- Last Synced: 2024-10-29T06:28:49.676Z (2 months ago)
- Language: Python
- Size: 11.7 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome_ai_agents - Kube-Gpt-Agent - Use an AI agent to get information about your Kubernetes deployments! (Building / Deployment)
README
# Kubernetes AI Agent Using Flask and OpenAI's API
This project implements an AI agent capable of interpreting and answering Kubernetes-related queries using natural language. The agent leverages Flask for the web server, Kubernetes Python client for cluster interactions, and OpenAI’s GPT-4 model for natural language processing.
## Approach
### 1. Flask Web Server and Endpoint Setup
The agent runs on a Flask web server and provides a **POST endpoint** (`/query`) for receiving user queries. Queries are submitted in JSON format, and the server responds with relevant information about the Kubernetes cluster.

- **Why Flask?** Flask is lightweight and suitable for microservices, with a minimal setup that allows for efficient request handling and easy API structuring.
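As a rough sketch (not the project's exact code), the endpoint might look like the following; `handle_query` is a stand-in for the query-processing logic described in the later sections, and the response keys are illustrative.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def handle_query(user_query: str) -> str:
    """Placeholder for the GPT-4 + Kubernetes logic described in the later sections."""
    return f"received: {user_query}"

@app.route("/query", methods=["POST"])
def query():
    # Expect a JSON body such as {"query": "List all pods in the default namespace"}.
    payload = request.get_json(force=True) or {}
    user_query = payload.get("query", "")
    return jsonify({"query": user_query, "answer": handle_query(user_query)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```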
### 2. Kubernetes Client Initialization and Error Handling
The agent uses the Kubernetes Python client API (`client.CoreV1Api` and `client.AppsV1Api`) to connect to the Kubernetes cluster, loading the `.kube/config` file for secure configuration access.

- **Error Management:** A custom context manager (`k8s_error_handling`) is implemented to manage API exceptions. This logs both Kubernetes-specific errors and unexpected issues, allowing the application to handle errors gracefully and providing clear logging for debugging.
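A minimal sketch of this setup, assuming the standard `kubernetes` Python client; the context manager below mirrors the spirit of `k8s_error_handling` rather than reproducing the project's exact code:

```python
import logging
from contextlib import contextmanager

from kubernetes import client, config
from kubernetes.client.rest import ApiException

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Load credentials from ~/.kube/config (works for Minikube and most local setups).
config.load_kube_config()
core_v1 = client.CoreV1Api()
apps_v1 = client.AppsV1Api()

@contextmanager
def k8s_error_handling():
    """Log Kubernetes API errors and unexpected exceptions, then re-raise."""
    try:
        yield
    except ApiException as exc:
        logger.error("Kubernetes API error: %s %s", exc.status, exc.reason)
        raise
    except Exception:
        logger.exception("Unexpected error while talking to the cluster")
        raise

# Example usage: list pod names in the default namespace.
with k8s_error_handling():
    pods = core_v1.list_namespaced_pod(namespace="default")
    print([p.metadata.name for p in pods.items])
```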
### 3. Agent Design: Kubernetes Query Processor
The main logic resides within the `KubernetesQueryProcessor` class, which encapsulates Kubernetes-related functions needed to fulfill common queries. The agent maps user queries to functions that can:

- List pods, services, nodes, deployments, and namespaces
- Retrieve status, logs, or pods generated by specific deployments
- Count the number of running pods and nodes

Each function utilizes the Kubernetes API to return data in the expected format, keeping identifiers minimal. For example, `get_pods_by_deployment` strips the generated suffixes from pod names, ensuring only base names are returned, as required.
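For illustration, a trimmed-down processor along these lines could back those queries; the suffix handling shown is one plausible way to reduce generated pod names (e.g. `nginx-7d9c5f6b4-abcde`) to their base name, not necessarily the project's exact approach.

```python
from kubernetes import client, config

class KubernetesQueryProcessor:
    """Trimmed-down sketch of a query processor wrapping common cluster lookups."""

    def __init__(self):
        config.load_kube_config()
        self.core_v1 = client.CoreV1Api()
        self.apps_v1 = client.AppsV1Api()

    def list_pods(self, namespace="default"):
        pods = self.core_v1.list_namespaced_pod(namespace=namespace)
        return [p.metadata.name for p in pods.items]

    def count_running_pods(self, namespace="default"):
        pods = self.core_v1.list_namespaced_pod(namespace=namespace)
        return sum(1 for p in pods.items if p.status.phase == "Running")

    def get_pods_by_deployment(self, deployment, namespace="default"):
        # Pods created by a deployment are named "<deployment>-<replicaset-hash>-<pod-hash>";
        # dropping the last two generated segments leaves only the base name.
        pods = self.core_v1.list_namespaced_pod(namespace=namespace)
        return [
            p.metadata.name.rsplit("-", 2)[0]
            for p in pods.items
            if p.metadata.name.startswith(f"{deployment}-")
        ]
```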
### 4. Natural Language Understanding with GPT-4
OpenAI’s GPT-4 model is used to interpret natural language queries. It translates each query into a structured JSON format containing:

- **operation**: The name of the Kubernetes function to execute
- **parameters**: The parameters needed for the operation

- **Prompting Technique:** A detailed system prompt provides GPT-4 with a dictionary of valid operations and their parameters. This helps map user queries accurately to specific functions within `KubernetesQueryProcessor`.
- **Function Invocation:** Once the query is parsed, the agent dynamically calls the required function, passing in parameters as needed.
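A condensed sketch of this parse-and-dispatch step, assuming the `openai` Python SDK (v1-style client) and a processor like the one sketched above; the system prompt and operation names here are illustrative, not the project's verbatim prompt:

```python
import json
import os

from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

SYSTEM_PROMPT = """You translate Kubernetes questions into JSON.
Valid operations: list_pods(namespace), count_running_pods(namespace),
get_pods_by_deployment(deployment, namespace).
Respond only with JSON of the form {"operation": ..., "parameters": {...}}."""

def parse_query(user_query: str) -> dict:
    """Ask GPT-4 to map a natural-language query to an operation and its parameters."""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_query},
        ],
    )
    return json.loads(response.choices[0].message.content)

def dispatch(processor, user_query: str):
    """Dynamically invoke the requested processor method with the parsed parameters."""
    parsed = parse_query(user_query)
    operation = getattr(processor, parsed["operation"])
    return operation(**parsed.get("parameters", {}))
```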
### 5. Query Processing Flow
Upon receiving a query:
1. It is parsed with GPT-4, which determines the required operation and parameters.
2. The relevant Kubernetes function is invoked with the specified parameters.
3. The response is formatted for clarity and returned to the user.

- **Output Formatting:** For list-based queries, results are presented as comma-separated strings. Counts or single values are returned as plain text.
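A small helper along these lines (an illustrative sketch, not necessarily the project's exact code) captures the two output cases:

```python
def format_result(result) -> str:
    """Render list results as comma-separated strings and scalars as plain text."""
    if isinstance(result, (list, tuple, set)):
        return ", ".join(str(item) for item in result)
    return str(result)

# Examples:
# format_result(["nginx", "redis"]) -> "nginx, redis"
# format_result(3)                  -> "3"
```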
### 6. Deployment and Testing Recommendations
To ensure robustness, the agent can be tested on Minikube with test applications deployed to confirm responses match expectations. Testing on Minikube provides a realistic Kubernetes environment, allowing potential issues to be identified and resolved before wider deployment.

## Local Setup
### Prerequisites
1. **Python 3.10**: Make sure Python 3.10 is installed.
- Install Python [here](https://www.python.org/downloads/).
2. **Kubernetes Cluster Access**: Ensure you have access to a Kubernetes cluster (e.g., using Minikube or a remote cluster).
- **Minikube**: You can install Minikube by following [these instructions](https://minikube.sigs.k8s.io/docs/start/).
- After starting Minikube, confirm the Kubernetes config file is located at `~/.kube/config`.
3. **OpenAI API Key**: You’ll need an API key from OpenAI to use GPT-4.
- [Sign up for OpenAI](https://platform.openai.com/) and generate an API key.
- Set the API key as an environment variable:
```bash
export OPENAI_API_KEY="your_openai_api_key"
```
4. **Install Dependencies**: Install the required Python libraries.
```bash
pip install -r requirements.txt
```

### Running the AI Agent
**Start the Flask Server**
To start the server, navigate to the directory where `main.py` is located and run:
```bash
python main.py
```

The server should now be running locally on `http://localhost:8000`. If successful, you should see output similar to:
```bash
* Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
```

### Testing the Agent Locally

#### Example Queries Using curl
Once the server is running, you can test the agent by making HTTP POST requests to http://localhost:8000/query. Each query should be submitted in JSON format with a query key, as shown in the examples below.For each example, open a terminal and replace your_query_here with the specific question or command you want to test.
1. Check the Status of a Pod
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "What is the status of the pod named nginx?"}'
```
2. List All Pods in the Default Namespace
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all pods in the default namespace"}'
```
3. Show Logs of a Specific Pod
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "Show me logs for pod nginx"}'
```
4. List All Nodes in the Cluster
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all nodes in the cluster"}'
```
5. List All Services in the Default Namespace
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all services in the default namespace"}'
```
6. Count Running Pods in the Default Namespace
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "How many pods are running in the default namespace?"}'
```
7. List All Deployments in the Default Namespace
```bash
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all deployments in the default namespace"}'
```
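
The same queries can also be sent from Python instead of curl; the small client below uses the `requests` library (not a dependency of the project itself) and prints the raw response body, without assuming a particular response schema.

```python
import requests

def ask_agent(query: str, url: str = "http://localhost:8000/query") -> str:
    """POST a natural-language query to the running agent and return the raw response body."""
    response = requests.post(url, json={"query": query}, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(ask_agent("How many pods are running in the default namespace?"))
```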