https://github.com/feiskyer/kube-copilot
Kubernetes Copilot powered by OpenAI
agent chatgpt copilot kubernetes openai
- Host: GitHub
- URL: https://github.com/feiskyer/kube-copilot
- Owner: feiskyer
- License: apache-2.0
- Created: 2023-03-25T09:19:59.000Z (about 2 years ago)
- Default Branch: master
- Last Pushed: 2024-05-06T11:01:34.000Z (12 months ago)
- Last Synced: 2024-05-08T00:28:31.333Z (12 months ago)
- Topics: agent, chatgpt, copilot, kubernetes, openai
- Language: Go
- Homepage:
- Size: 746 KB
- Stars: 124
- Watchers: 11
- Forks: 19
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-AI-driven-development - kube-copilot - Kubernetes Copilot powered by OpenAI (Uncategorized / Uncategorized)
README
# Kubernetes Copilot
Kubernetes Copilot is powered by LLMs, leveraging advanced language models to streamline and enhance Kubernetes cluster management. This tool integrates with your existing Kubernetes setup, providing intelligent automation, diagnostics, and manifest generation. By harnessing AI, Kubernetes Copilot simplifies complex operations and helps maintain the health and security of your Kubernetes workloads.
## Features
- Automate Kubernetes cluster operations using large language models.
- Bring your own LLM provider: OpenAI, Azure OpenAI, Anthropic Claude, Google Gemini, or any other OpenAI-compatible provider.
- Diagnose and analyze potential issues for Kubernetes workloads.
- Generate Kubernetes manifests based on provided prompt instructions.
- Utilize native `kubectl` and `trivy` commands for Kubernetes cluster access and security vulnerability scanning.
- Access the web and perform Google searches without leaving the terminal.

## Installation
Install the kube-copilot CLI with the following command:
```sh
go install github.com/feiskyer/kube-copilot/cmd/kube-copilot@latest
```

## Quick Start
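`go install` places the binary under `$(go env GOPATH)/bin`, which may not be on your `PATH`. A minimal sketch of the usual fix (the exact paths depend on your Go setup):

```shell
# Ensure Go's install directory is on PATH so the kube-copilot binary resolves:
export PATH="$PATH:$(go env GOPATH)/bin"

# The CLI should now be found, e.g.:
# kube-copilot version
```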
Set up the following prerequisites:
- Ensure [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) is installed on the local machine and the kubeconfig file is configured for Kubernetes cluster access.
- Install [`trivy`](https://github.com/aquasecurity/trivy) to assess container image security issues (only required for the `audit` command).
- Set the OpenAI [API key](https://platform.openai.com/account/api-keys) as the `OPENAI_API_KEY` environment variable to enable LLM functionality (refer below for other LLM providers).

Then run `kube-copilot` directly in the terminal to see the available commands and flags:
```sh
Kubernetes Copilot powered by OpenAI

Usage:
  kube-copilot [command]

Available Commands:
  analyze     Analyze issues for a given resource
  audit       Audit security issues for a Pod
  completion  Generate the autocompletion script for the specified shell
  diagnose    Diagnose problems for a Pod
  execute     Execute operations based on prompt instructions
  generate    Generate Kubernetes manifests
  help        Help about any command
  version     Print the version of kube-copilot

Flags:
  -c, --count-tokens         Print tokens count
  -h, --help                 help for kube-copilot
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4")
  -v, --verbose              Enable verbose output
      --version              version for kube-copilot

Use "kube-copilot [command] --help" for more information about a command.
```

## LLM Integrations
### OpenAI
Set the OpenAI [API key](https://platform.openai.com/account/api-keys) as the `OPENAI_API_KEY` environment variable to enable OpenAI functionality.
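A minimal shell setup, assuming a standard OpenAI account (the key value below is a placeholder):

```shell
# Placeholder key; substitute your real key from platform.openai.com
export OPENAI_API_KEY="sk-your-key-here"

# Optional: point at a different OpenAI-compatible endpoint
# export OPENAI_API_BASE="https://api.openai.com/v1"
```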
### Anthropic Claude

Anthropic Claude provides an [OpenAI-compatible API](https://docs.anthropic.com/en/api/openai-sdk), so it can be used with the following configuration:
- `OPENAI_API_KEY=`
- `OPENAI_API_BASE='https://api.anthropic.com/v1/'`

### Azure OpenAI
For [Azure OpenAI service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=rest-api#retrieve-key-and-endpoint), set the following environment variables:
- `AZURE_OPENAI_API_KEY=`
- `AZURE_OPENAI_API_BASE=https://<your-resource-name>.openai.azure.com/`
- `AZURE_OPENAI_API_VERSION=2025-03-01-preview`

### Google Gemini
Google Gemini provides an OpenAI-compatible API, so it can be used with the following configuration:
- `OPENAI_API_KEY=`
- `OPENAI_API_BASE='https://generativelanguage.googleapis.com/v1beta/openai/'`

### Ollama or other OpenAI-compatible LLMs
For Ollama or other OpenAI-compatible LLMs, set the following environment variables:
- `OPENAI_API_KEY=`
- `OPENAI_API_BASE='http://localhost:11434/v1'` (or your own base URL)

## Key Features
### Analyze issues for a given Kubernetes resource

`kube-copilot analyze [--resource <resource>] --name <name> [--namespace <namespace>]` will analyze potential issues for the given resource object:
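For instance, to analyze a Deployment instead of the default Pod type (the resource names here are illustrative, and the command assumes a reachable cluster plus a configured API key):

```shell
# Analyze a hypothetical "nginx" Deployment in the "web" namespace
kube-copilot analyze --resource deployment --name nginx --namespace web
```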
```sh
Analyze issues for a given resource

Usage:
  kube-copilot analyze [flags]

Flags:
  -h, --help               help for analyze
  -n, --name string        Resource name
  -s, --namespace string   Resource namespace (default "default")
  -r, --resource string    Resource type (default "pod")

Global Flags:
  -c, --count-tokens         Print tokens count
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4o")
  -v, --verbose              Enable verbose output
```

### Audit Security Issues for Pod
`kube-copilot audit --name <pod-name> [--namespace <namespace>]` will audit security issues for a Pod:
```sh
Audit security issues for a Pod

Usage:
  kube-copilot audit [flags]

Flags:
  -h, --help               help for audit
  -n, --name string        Resource name
  -s, --namespace string   Resource namespace (default "default")

Global Flags:
  -c, --count-tokens         Print tokens count
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4o")
  -v, --verbose              Enable verbose output
```

### Diagnose Problems for Pod
`kube-copilot diagnose --name <pod-name> [--namespace <namespace>]` will diagnose problems for a Pod:
```sh
Diagnose problems for a Pod

Usage:
  kube-copilot diagnose [flags]

Flags:
  -h, --help               help for diagnose
  -n, --name string        Resource name
  -s, --namespace string   Resource namespace (default "default")

Global Flags:
  -c, --count-tokens         Print tokens count
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4o")
  -v, --verbose              Enable verbose output
```

### Execute operations based on prompt instructions
`kube-copilot execute --instructions <instructions>` will execute operations based on prompt instructions.
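For example (the instruction text below is illustrative; the command requires cluster access and an LLM API key):

```shell
# Ask the agent to perform a concrete operation against the cluster
kube-copilot execute --instructions "Scale the nginx deployment in the web namespace to 3 replicas"
```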
It can also be used to ask general questions.

```sh
Execute operations based on prompt instructions

Usage:
  kube-copilot execute [flags]

Flags:
  -h, --help                  help for execute
  -i, --instructions string   instructions to execute

Global Flags:
  -c, --count-tokens         Print tokens count
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4o")
  -v, --verbose              Enable verbose output
```

### Generate Kubernetes Manifests
Use the `kube-copilot generate --prompt <prompt>` command to create Kubernetes manifests based on the provided prompt instructions. After generating the manifests, you will be prompted to confirm whether you want to apply them.

```sh
Generate Kubernetes manifests

Usage:
  kube-copilot generate [flags]

Flags:
  -h, --help            help for generate
  -p, --prompt string   Prompts to generate Kubernetes manifests

Global Flags:
  -c, --count-tokens         Print tokens count
  -x, --max-iterations int   Max iterations for the agent running (default 10)
  -t, --max-tokens int       Max tokens for the GPT model (default 2048)
  -m, --model string         OpenAI model to use (default "gpt-4o")
  -v, --verbose              Enable verbose output
```

## Integrations
### Google Search
Large language models are trained on data with a cutoff date, so they may lack the most current information or miss recent developments. This is where Google Search becomes a useful optional tool: by integrating real-time search, the LLM can access the latest data, keeping responses both accurate and up to date.
To enable it, set `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` (obtain API key from [Google Cloud](https://cloud.google.com/docs/authentication/api-keys?visit_id=638154888929258210-4085587461) and CSE ID from [Google CSE](http://www.google.com/cse/)).
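A sketch of the required environment, with placeholder values:

```shell
# Both values are placeholders; obtain real ones from Google Cloud and Google CSE
export GOOGLE_API_KEY="your-google-api-key"
export GOOGLE_CSE_ID="your-custom-search-engine-id"
```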
## Python Version
Please refer [feiskyer/kube-copilot-python](https://github.com/feiskyer/kube-copilot-python) for the Python implementation of the same project.
## Contribution
The project is open source on GitHub at [feiskyer/kube-copilot](https://github.com/feiskyer/kube-copilot) (Go) and [feiskyer/kube-copilot-python](https://github.com/feiskyer/kube-copilot-python) (Python), under the Apache License.
If you would like to contribute to the project, please follow these guidelines:
1. Fork the repository and clone it to your local machine.
2. Create a new branch for your changes.
3. Make your changes and commit them with a descriptive commit message.
4. Push your changes to your forked repository.
5. Open a pull request to the main repository.