{"id":13425051,"url":"https://github.com/derailed/popeye","last_synced_at":"2025-05-13T11:09:50.297Z","repository":{"id":37664630,"uuid":"176379662","full_name":"derailed/popeye","owner":"derailed","description":"👀 A Kubernetes cluster resource sanitizer","archived":false,"fork":false,"pushed_at":"2025-05-12T14:51:09.000Z","size":14381,"stargazers_count":5796,"open_issues_count":44,"forks_count":317,"subscribers_count":48,"default_branch":"master","last_synced_at":"2025-05-13T11:09:37.238Z","etag":null,"topics":["go","golang","k8s","kubernetes-clusters","kubernetes-resources","misconfigurations","popeye","sanitize-resources","sanitizers"],"latest_commit_sha":null,"homepage":"https://popeyecli.io","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/derailed.png","metadata":{"files":{"readme":"README.md","changelog":"change_logs/release_v0.1.0.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":["derailed"]}},"created_at":"2019-03-18T22:30:27.000Z","updated_at":"2025-05-13T06:51:35.000Z","dependencies_parsed_at":"2023-02-16T23:15:53.177Z","dependency_job_id":"fb36f95a-b521-45cd-99ef-6f78ac3ebfc7","html_url":"https://github.com/derailed/popeye","commit_stats":{"total_commits":436,"total_committers":69,"mean_commits":6.318840579710145,"dds":"0.32798165137614677","last_synced_commit":"d09ec25f3834d2c6a171486b9726b0a91793e3f0"},"previous_names":[],"tags_count":71,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/derailed%2Fpopeye","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/derailed%2Fpopeye/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/derailed%2Fpopeye/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/derailed%2Fpopeye/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/derailed","download_url":"https://codeload.github.com/derailed/popeye/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253929367,"owners_count":21985802,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["go","golang","k8s","kubernetes-clusters","kubernetes-resources","misconfigurations","popeye","sanitize-resources","sanitizers"],"created_at":"2024-07-31T00:01:03.336Z","updated_at":"2025-05-13T11:09:49.672Z","avatar_url":"https://github.com/derailed.png","language":"Go","readme":"\u003cimg src=\"https://github.com/derailed/popeye/raw/master/assets/popeye_logo.png\" align=\"right\" width=\"250\" height=\"auto\"\u003e\n\n# Popeye: Kubernetes Live Cluster Linter\n\nPopeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations.\nAs Kubernetes landscapes grow, it becomes a challenge for a human to track the slew of manifests and policies that orchestrate a cluster.\nPopeye scans your cluster based on what's deployed and not what's sitting on disk. 
By linting your cluster, it detects misconfigurations,\nstale resources, and helps you ensure that best practices are in place, thus preventing future headaches.\nIt aims to reduce the cognitive *over*load one faces when operating a Kubernetes cluster in the wild.\nFurthermore, if your cluster employs a metrics-server, it reports potential resource over/under-allocations and attempts to warn you should your cluster run out of capacity.\n\nPopeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!\n\n\u003cbr/\u003e\n\u003cbr/\u003e\n\n---\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/derailed/popeye?)](https://goreportcard.com/report/github.com/derailed/popeye)\n[![codebeat badge](https://codebeat.co/badges/827e5642-3ccc-4ecc-b22b-5707dbc34cf1)](https://codebeat.co/projects/github-com-derailed-popeye-master)\n[![release](https://img.shields.io/github/release-pre/derailed/popeye.svg)](https://github.com/derailed/popeye/releases)\n[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/derailed/popeye/blob/master/LICENSE)\n[![Docker Repository on Quay](https://quay.io/repository/derailed/popeye/status \"Docker Repository on Quay\")](https://quay.io/repository/derailed/popeye)\n![GitHub stars](https://img.shields.io/github/stars/derailed/popeye.svg?label=github%20stars)\n[![Releases](https://img.shields.io/github/downloads/derailed/popeye/total.svg)]()\n\n---\n\n## Screenshots\n\n### Console\n\n\u003cimg src=\"assets/screens/console.png\"/\u003e\n\n### JSON\n\n\u003cimg src=\"assets/screens/json.png\"/\u003e\n\n### HTML\n\nYou can dump the scan report to HTML.\n\n\u003cimg src=\"assets/screens/html.png\"/\u003e\n\n### Grafana Dashboard\n\nPopeye publishes [Prometheus](https://prometheus.io) metrics.\nWe provide a sample Popeye dashboard in this repo to get you started.\n\n\u003cimg src=\"assets/screens/pop-dash.png\"/\u003e\n\n---\n\n## Installation\n\nPopeye is available on 
Linux, macOS and Windows platforms.\n\n* Binaries for Linux, Windows and Mac are available as tarballs on\n  the [release](https://github.com/derailed/popeye/releases) page.\n\n* For macOS/Linux using Homebrew/LinuxBrew\n\n   ```shell\n   brew install derailed/popeye/popeye\n   ```\n\n* Using `go install`\n\n    ```shell\n    go install github.com/derailed/popeye@latest\n    ```\n\n* Building from source\n   Popeye is built using Go 1.21+. In order to build Popeye from source you must:\n   1. Clone the repo\n   2. Add the following replace directive to your go.mod file\n\n      ```text\n      replace (\n        github.com/derailed/popeye =\u003e MY_POPEYE_CLONED_GIT_REPO\n      )\n      ```\n\n   3. Build and run the executable\n\n        ```shell\n        go run main.go\n        ```\n\n   Quick recipe for the impatient:\n\n   ```shell\n   # Clone outside of GOPATH\n   git clone https://github.com/derailed/popeye\n   cd popeye\n   # Build and install\n   make build\n   # Run\n   popeye\n   ```\n\n## PreFlight Checks\n\n* Popeye uses 256-color terminal mode. On *Nix systems, make sure TERM is set accordingly.\n\n    ```shell\n    export TERM=xterm-256color\n    ```\n\n---\n\n## The Command Line\n\nYou can run Popeye wide open or use a spinach YAML config to\ntune your linters. Details about the Popeye configuration file are below.\n\n```shell\n# Dump version info and logs location\npopeye version\n# Popeye a cluster using your current kubeconfig environment.\n# NOTE! This will run Popeye in the context namespace if set or, like kubectl, will use the default namespace\npopeye\n# Run Popeye in the `fred` namespace\npopeye -n fred\n# Run Popeye in all namespaces\npopeye -A\n# Run Popeye using a spinach config file of course! 
aka spinachyaml!\npopeye -f spinach.yaml\n# Popeye a cluster using a kubeconfig context.\npopeye --context olive\n# Run Popeye with specific linters and log to the console\npopeye -n ns1 -s pod,svc --logs none\n# Run Popeye for a given namespace, logging to a given file with debug logs\npopeye -n ns1 --logs /tmp/fred.log -v4\n# Stuck?\npopeye help\n```\n\n---\n\n## Linters\n\nPopeye scans your cluster for best practices and potential issues.\nCurrently, Popeye only looks at a given set of curated Kubernetes resources.\nMore will come soon!\nWe are hoping Kubernetes friends will pitch in to make Popeye even better.\n\nThe aim of the linters is to pick up on misconfigurations, i.e. things\nlike port mismatches, dead or unused resources, metrics utilization,\nprobes, container images, RBAC rules, naked resources, etc...\n\nPopeye is not another static analysis tool. It inspects Kubernetes resources on\nlive clusters and lints resources as they are in the wild!\n\nHere is a list of some of the available linters:\n\n|    | Resource                | Linters                                                                 | Aliases    |\n|----|-------------------------|-------------------------------------------------------------------------|------------|\n| 🛀 | Node                    |                                                                         | no         |\n|    |                         | Conditions, i.e. not ready, out of mem/disk, network, pids, etc         |            |\n|    |                         | Pod tolerations referencing node taints                                 |            |\n|    |                         | CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM) |            |\n| 🛀 | Namespace               |                                                                         | ns         |\n|    |                         | Inactive                                                                |            |\n|    
|                         | Dead namespaces                                                         |            |\n| 🛀 | Pod                     |                                                                         | po         |\n|    |                         | Pod status                                                              |            |\n|    |                         | Container statuses                                                      |            |\n|    |                         | ServiceAccount presence                                                 |            |\n|    |                         | CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)    |            |\n|    |                         | Container image with no tags                                            |            |\n|    |                         | Container image using `latest` tag                                      |            |\n|    |                         | Resources request/limits presence                                       |            |\n|    |                         | Probes liveness/readiness presence                                      |            |\n|    |                         | Named ports and their references                                        |            |\n| 🛀 | Service                 |                                                                         | svc        |\n|    |                         | Endpoints presence                                                      |            |\n|    |                         | Matching pods labels                                                    |            |\n|    |                         | Named ports and their references                                        |            |\n| 🛀 | ServiceAccount          |                                                                         | sa         |\n|    |                         | Unused, detects potentially unused SAs 
                                 |            |\n| 🛀 | Secrets                 |                                                                         | sec        |\n|    |                         | Unused, detects potentially unused secrets or associated keys           |            |\n| 🛀 | ConfigMap               |                                                                         | cm         |\n|    |                         | Unused, detects potentially unused cm or associated keys                |            |\n| 🛀 | Deployment              |                                                                         | dp, deploy |\n|    |                         | Unused, pod template validation, resource utilization                   |            |\n| 🛀 | StatefulSet             |                                                                         | sts        |\n|    |                         | Unused, pod template validation, resource utilization                    |            |\n| 🛀 | DaemonSet               |                                                                         | ds         |\n|    |                         | Unused, pod template validation, resource utilization                    |            |\n| 🛀 | PersistentVolume        |                                                                         | pv         |\n|    |                         | Unused, check volume bound or volume error                              |            |\n| 🛀 | PersistentVolumeClaim   |                                                                         | pvc        |\n|    |                         | Unused, check bounded or volume mount error                             |            |\n| 🛀 | HorizontalPodAutoscaler |                                                                         | hpa        |\n|    |                         | Unused, Utilization, Max burst checks                                   |            |\n| 🛀 | 
PodDisruptionBudget     |                                                                         |            |\n|    |                         | Unused, Check minAvailable configuration                                | pdb        |\n| 🛀 | ClusterRole             |                                                                         |            |\n|    |                         | Unused                                                                  | cr         |\n| 🛀 | ClusterRoleBinding      |                                                                         |            |\n|    |                         | Unused                                                                  | crb        |\n| 🛀 | Role                    |                                                                         |            |\n|    |                         | Unused                                                                  | ro         |\n| 🛀 | RoleBinding             |                                                                         |            |\n|    |                         | Unused                                                                  | rb         |\n| 🛀 | Ingress                 |                                                                         |            |\n|    |                         | Valid                                                                   | ing        |\n| 🛀 | NetworkPolicy           |                                                                         |            |\n|    |                         | Valid, Stale, Guarded                                                   | np         |\n| 🛀 | PodSecurityPolicy       |                                                                         |            |\n|    |                         | Valid                                                                   | psp        |\n| 🛀 | Cronjob                 |                                               
                          |            |\n|    |                         | Valid, Suspended, Runs                                                  | cj         |\n| 🛀 | Job                     |                                                                         |            |\n|    |                         | Pod checks                                                              | job        |\n| 🛀 | GatewayClass            |                                                                         |            |\n|    |                         | Valid, Unused                                                           | gwc        |\n| 🛀 | Gateway                 |                                                                         |            |\n|    |                         | Valid, Unused                                                           | gw         |\n| 🛀 | HTTPRoute               |                                                                         |            |\n|    |                         | Valid, Unused                                                           | gwr        |\n\nYou can also see the [full list of codes](docs/codes.md).\n\n---\n\n## Saving Scans\n\nTo save the Popeye report to a file, pass the `--save` flag to the command.\nBy default, it will create a tmp directory and store your scan report there.\nThe path of the tmp directory will be printed out on STDOUT.\nIf you need to specify the output directory for the report,\nyou can use the `POPEYE_REPORT_DIR` environment variable. The final path will be \u003cPOPEYE_REPORT_DIR\u003e/\u003ccluster\u003e/\u003ccontext\u003e.\nBy default, the name of the output file follows the format `lint_\u003ccluster-name\u003e_\u003ctime-UnixNano\u003e.\u003coutput-extension\u003e` (e.g. 
: \"lint-mycluster-1594019782530851873.html\").\nIf you want to also specify the output file name for the report, you can pass the `--output-file` flag with the filename you want as parameter.\n\nExample to save report in working directory:\n\n```shell\nPOPEYE_REPORT_DIR=$(pwd) popeye --save\n```\n\nExample to save report in working directory in HTML format under the name \"report.html\" :\n\n```shell\nPOPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html\n```\n\n### Save To S3 Object Store\n\nAlternatively, you can push the generated reports to an AWS S3 or Minio object store by providing the flag `--s3-bucket`.\nFor parameters you need to provide the name of the S3 bucket where you want to store the report.\nTo save the report in a bucket subdirectory provide the bucket parameter as `bucket/path/to/report`.\n\nExample to save report to S3:\n\n```shell\n# AWS S3\n# NOTE: You must provide env vars for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY\n# This will create bucket my-popeye if not present and upload a popeye json report to /fred/scan.json\npopeye --s3-bucket s3://my-popeye/fred --s3-region us-west-2 --out json --save --output-file scan.json\n\n# Minio Object Store\n# NOTE: You must provide env vars for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and a minio server URI\n# This will create bucket my-popeye if not present and upload a popeye json report to /fred/scan.json\npopeye --s3-bucket minio://my-popeye/fred --s3-region us-east --s3-endpoint localhost:9000 --out json --save --output-file scan.json\n```\n\n---\n\n## Docker Support\n\nYou can also run Popeye in a container by running it directly from the official docker repo on Quay.\nThe default command when you run the docker container is `popeye`, so you customize the scan by using the supported cli flags.\nTo access your clusters, map your local kubeconfig directory into the container with `-v` :\n\n```shell\ndocker run --rm -it -v $HOME/.kube:/root/.kube quay.io/derailed/popeye 
--context foo -n bar\n```\n\nRunning the above docker command with `--rm` means that the container gets deleted when Popeye exits.\nWhen you use `--save`, the report is written to /tmp inside the container, and the container is deleted when Popeye exits, which means you lose the output ;(\nTo get around this, map your local /tmp to the container's /tmp.\n\n\u003e NOTE: You can override the default output directory location by setting the `POPEYE_REPORT_DIR` env variable.\n\n```shell\ndocker run --rm -it \\\n  -v $HOME/.kube:/root/.kube \\\n  -e POPEYE_REPORT_DIR=/tmp/popeye \\\n  -v /tmp:/tmp \\\n  quay.io/derailed/popeye --context foo -n bar --save --output-file my_report.txt\n\n# Docker has exited, and the container has been deleted, but the file\n# is in your /tmp directory because you mapped it into the container\ncat /tmp/popeye/my_report.txt\n\u003csnip\u003e\n```\n\n---\n\n## Output Formats\n\nPopeye can generate linter reports in a variety of formats. You can use the `-o` CLI option and pick your poison from there.\n\n| Format     | Description                                            | Default | Credits                                      |\n|------------|--------------------------------------------------------|---------|----------------------------------------------|\n| standard   | The full monty output, iconized and colorized          | yes     |                                              |\n| jurassic   | No icons or color, like it's 1979                      |         |                                              |\n| yaml       | As YAML                                                |         |                                              |\n| html       | As HTML                                                |         |                                              |\n| json       | As JSON                                                |         |                                              |\n| junit      | For the Java melancholic                               |   
      |                                              |\n| prometheus | Dumps the report as Prometheus metrics                 |         | [dardanel](https://github.com/eminugurkenar) |\n| score      | Returns a single cluster linter score value (0-100)    |         | [kabute](https://github.com/kabute)          |\n\n---\n\n## The Prom Queen!\n\nPopeye can publish Prometheus metrics directly from a scan. You will need access to a Prometheus pushgateway and its credentials.\n\n\u003e NOTE! These are subject to change based on user feedback and usage!!\n\nIn order to publish metrics, additional CLI args must be present.\n\n```shell\n# Run popeye using console output and push prom metrics.\npopeye --push-gtwy-url http://localhost:9091\n\n# Run popeye using a saved html output and push prom metrics.\n# NOTE! When scans are dumped to disk, the popeye_cluster_score metric below includes\n# an additional label to track the persisted artifact so you can aggregate with the scan.\n# We don't think this is the correct approach, as it changes the metric cardinality on every push.\n# Hence, open for suggestions here!\npopeye -o html --save --push-gtwy-url http://localhost:9091\n```\n\n### PopProm metrics\n\nThe following Popeye prometheus metrics are published:\n\n* `popeye_severity_total` [gauge] tracks various counts based on severity.\n* `popeye_code_total` [gauge] tracks counts by Popeye's linter codes.\n* `popeye_linter_tally_total` [gauge] tracks counts per linter.\n* `popeye_report_errors_total` [gauge] tracks scan error totals.\n* `popeye_cluster_score` [gauge] tracks scan report scores.\n\n\n### PopGraf\n\nA sample [Grafana](https://grafana.com) dashboard can be found in this repo to get you started.\n\n\u003e NOTE! Work in progress, please feel free to contribute if you have UX/grafana/promql chops.\n\n\n---\n\n## SpinachYAML\n\nA spinach YAML configuration file can be specified via the `-f` option to further configure the linters. 
This file may specify\nthe container utilization thresholds and specific linter configurations, as well as resources and codes that will be excluded from linting.\n\n\u003e NOTE! This file will change as Popeye matures!\n\nUnder the `excludes` key you can configure Popeye to skip certain resources or linter codes.\nPopeye's linters are named after the k8s resource names.\nFor example, the PodDisruptionBudget linter is named `poddisruptionbudgets` and scans `policy/v1/poddisruptionbudgets`.\n\n\u003e NOTE! The linter uses the plural resource `kind` form and everything is spelled in lowercase.\n\nA resource fully qualified name aka `FQN` is used in the spinach file to identify a resource, i.e. `namespace/resource_name`.\n\nFor example, the FQN of a pod named `fred-1234` in the namespace `blee` will be `blee/fred-1234`. This differentiates `fred/p1` from `blee/p1`.\nFor cluster-wide resources, the FQN is equivalent to the name.\nExclude rules can be either a straight string match or a regular expression. In the latter case, the regular expression must be specified via the `rx:` prefix.\n\n\u003e NOTE! 
Please be careful with your regex, as more resources than expected may get excluded from the report with a *loose* regex rule.\n\u003e When your cluster resources change, this could lead to sub-optimal scans.\n\u003e Thus we recommend running Popeye `wide open` once in a while to make sure you pick up on any new issues that may have arisen in your clusters…\n\nHere is an example spinach file as it stands in this release.\nThere are fuller EKS- and AKS-based spinach files in this repo under `spinach`.\n(BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster-specific spinach file PRs...)\n\n```yaml\n# spinach.yaml\n\n# A Popeye sample configuration file\npopeye:\n  # Checks resources against reported metrics usage.\n  # If over/under these thresholds, a linter warning will be issued.\n  # Your cluster must run a metrics-server for these to take place!\n  allocations:\n    cpu:\n      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.\n      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.\n    memory:\n      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.\n      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.\n\n  # Excludes certain resources from Popeye scans.\n  excludes:\n    # [NEW!] Exclude resources and codes globally from all linters.\n    global:\n      fqns: [rx:^kube-] # =\u003e excludes all resources in kube-system, kube-public, etc..\n      # [NEW!] Exclude resources for all linters matching these labels\n      labels:\n        app: [bozo, bono] #=\u003e exclude any resources with labels matching either app=bozo or app=bono\n      # [NEW!] 
Exclude resources for all linters matching these annotations\n      annotations:\n        fred: [blee, duh] # =\u003e exclude any resources with annotations matching either fred=blee or fred=duh\n      # [NEW!] Exclude scan codes globally via straight codes or regex!\n      codes: [\"300\", \"206\", \"rx:^41\"] # =\u003e exclude issue codes 300, 206, 410, 415 (Note: regex match!)\n\n    # [NEW!] Configure individual resource linters\n    linters:\n      # Configure the namespaces linter for v1/namespaces\n      namespaces:\n        # [NEW!] Exclude these codes for all namespace resources straight up or via regex.\n        codes: [\"100\", \"rx:^22\"] # =\u003e exclude codes 100, 220, 225, ...\n        # [NEW!] Excludes specific namespaces from the scan\n        instances:\n          - fqns: [kube-public, kube-system] # =\u003e skip ns kube-public and kube-system\n          - fqns: [blee-ns]\n            codes: [106] # =\u003e skip code 106 for namespace blee-ns\n\n      # Skip secrets in namespace bozo.\n      secrets:\n        instances:\n          - fqns: [rx:^bozo]\n\n      # Configure the pods linter for v1/pods.\n      pods:\n        instances:\n          # [NEW!] 
exclude all pods matching these labels.\n          - labels:\n              app: [fred,blee] # Exclude codes 102, 105 for any pods with labels app=fred or app=blee\n            codes: [102, 105]\n\n  resources:\n    # Configure node resources.\n    node:\n      # Limits set a cpu/mem threshold in %, i.e. if cpu|mem \u003e limit, a lint warning is triggered.\n      limits:\n        # CPU checks if current CPU utilization on a node is greater than 90%.\n        cpu:    90\n        # Memory checks if current Memory utilization on a node is greater than 80%.\n        memory: 80\n\n    # Configure pod resources\n    pod:\n      # Restarts checks the restart count and triggers a lint warning if above the threshold.\n      restarts: 3\n      # Check container resource utilization in percent.\n      # Issues a lint warning if above these thresholds.\n      limits:\n        cpu:    80\n        memory: 75\n\n\n  # [NEW!] Overrides code severity\n  overrides:\n    # Code specifies a custom severity level, i.e. critical=3, warn=2, info=1\n    - code: 206\n      severity: 1\n\n  # Configure a list of allowed registries to pull images from.\n  # Any resources not using the following registries will be flagged!\n  registries:\n    - quay.io\n    - docker.io\n```\n\n---\n\n## In Cluster\n\nPopeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.\n\nHere is a sample setup, please modify per your needs/wants. 
The manifests for this are in the k8s\ndirectory in this repo.\n\n```shell\nkubectl apply -f k8s/popeye\n```\n\n```yaml\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name:      popeye\n---\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n  name:      popeye\n  namespace: popeye\nspec:\n  schedule: \"0 * * * *\" # Fire off Popeye once an hour\n  concurrencyPolicy: Forbid\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          serviceAccountName: popeye\n          restartPolicy: Never\n          containers:\n            - name: popeye\n              image: derailed/popeye:vX.Y.Z\n              imagePullPolicy: IfNotPresent\n              args:\n                - -o\n                - yaml\n                - --force-exit-zero\n              resources:\n                limits:\n                  cpu:    500m\n                  memory: 100Mi\n```\n\nThe `--force-exit-zero` flag should be set. Otherwise, the pods will end up in an error state.\n\n\u003e NOTE! Popeye exits with a non-zero error code if any lint errors are detected.\n\n### Popeye Got Your RBAC!\n\nIn order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.\n\nSample Popeye RBAC Rules (please note that these are **subject to change**).\n\n\u003e NOTE! 
Please review and tune per your cluster policies.\n\n```yaml\n---\n# Popeye ServiceAccount.\napiVersion: v1\nkind:       ServiceAccount\nmetadata:\n  name:      popeye\n  namespace: popeye\n\n---\n# Popeye needs get/list access on the following Kubernetes resources.\napiVersion: rbac.authorization.k8s.io/v1\nkind:       ClusterRole\nmetadata:\n  name: popeye\nrules:\n- apiGroups: [\"\"]\n  resources:\n   - configmaps\n   - endpoints\n   - namespaces\n   - nodes\n   - persistentvolumes\n   - persistentvolumeclaims\n   - pods\n   - secrets\n   - serviceaccounts\n   - services\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"apps\"]\n  resources:\n  - daemonsets\n  - deployments\n  - statefulsets\n  - replicasets\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"networking.k8s.io\"]\n  resources:\n  - ingresses\n  - networkpolicies\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"batch\"]\n  resources:\n  - cronjobs\n  - jobs\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"gateway.networking.k8s.io\"]\n  resources:\n  - gatewayclasses\n  - gateways\n  - httproutes\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"autoscaling\"]\n  resources:\n  - horizontalpodautoscalers\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"policy\"]\n  resources:\n  - poddisruptionbudgets\n  - podsecuritypolicies\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n  resources:\n  - clusterroles\n  - clusterrolebindings\n  - roles\n  - rolebindings\n  verbs:     [\"get\", \"list\"]\n- apiGroups: [\"metrics.k8s.io\"]\n  resources:\n  - pods\n  - nodes\n  verbs:     [\"get\", \"list\"]\n\n---\n# Binds Popeye to this ClusterRole.\napiVersion: rbac.authorization.k8s.io/v1\nkind:       ClusterRoleBinding\nmetadata:\n  name: popeye\nsubjects:\n- kind:     ServiceAccount\n  name:     popeye\n  namespace: popeye\nroleRef:\n  kind:     ClusterRole\n  name:     popeye\n  apiGroup: rbac.authorization.k8s.io\n```\n\n---\n\n## Report Morphology\n\nThe 
lint report outputs each resource group scanned and their potential issues.\nThe report is color/emoji coded in terms of linter severity levels:\n\n| Level | Icon | Jurassic | Color     | Description     |\n|-------|------|----------|-----------|-----------------|\n| Ok    | ✅   | OK       | Green     | Happy!          |\n| Info  | 🔊   | I        | BlueGreen | FYI             |\n| Warn  | 😱   | W        | Yellow    | Potential Issue |\n| Error | 💥   | E        | Red       | Action required |\n\nThe heading section for each scanned Kubernetes resource provides a summary count\nfor each of the categories above.\n\nThe Summary section provides a **Popeye Score** based on the linter pass on the given cluster.\n\n---\n\n## Known Issues\n\nThis initial drop is brittle. Popeye will most likely blow up when…\n\n* You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.25.X.\n* You don't have enough RBAC oomph to manage your cluster (see the RBAC section).\n\n---\n\n## Disclaimer\n\nThis is work in progress! If there is enough interest in the Kubernetes\ncommunity, we will enhance per your recommendations/contributions.\nAlso, if you dig this effort, please let us know that too!\n\n---\n\n## ATTA Girls/Boys!\n\nPopeye sits on top of many open source projects and libraries. Our *sincere*\nappreciation to all the OSS contributors who work nights and weekends\nto make this project a reality!\n\n### Contact Info\n\n1. **Email**:   fernand@imhotep.io\n2. 
**Twitter**: [@kitesurfer](https://twitter.com/kitesurfer?lang=en)\n\n---\n\n\u003cimg src=\"https://github.com/derailed/popeye/blob/master/assets/imhotep_logo.png\" width=\"32\" height=\"auto\"/\u003e  \u0026nbsp;© 2025 Imhotep Software LLC.\nAll materials licensed under [Apache v2.0](http://www.apache.org/licenses/LICENSE-2.0)\n","funding_links":["https://github.com/sponsors/derailed"],"categories":["Scanners","Static Analysis","Go","Kubernetes","Security and Compliance","Tools and Libraries","Containers","golang","k8s","Repositories","Configuration Management"],"sub_categories":["[Jenkins](#jenkins)","Kubernetes security posture management","Monitoring, Alerts, and Visualization","Kubernetes","Kubernetes // Dashboards, UI, Reporting and Validation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fderailed%2Fpopeye","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fderailed%2Fpopeye","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fderailed%2Fpopeye/lists"}