{"id":15642909,"url":"https://github.com/developer-guy/policy-as-code-war","last_synced_at":"2026-02-24T02:32:00.765Z","repository":{"id":50484373,"uuid":"342156793","full_name":"developer-guy/policy-as-code-war","owner":"developer-guy","description":"OPA Gatekeeper vs Kyverno","archived":false,"fork":false,"pushed_at":"2021-10-27T14:08:57.000Z","size":248,"stargazers_count":61,"open_issues_count":1,"forks_count":7,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-04-30T10:06:41.005Z","etag":null,"topics":["kubernetes","kyverno","minikube","opa","open-policy-agent","policy-as-code"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/developer-guy.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-02-25T07:18:02.000Z","updated_at":"2025-01-03T12:18:33.000Z","dependencies_parsed_at":"2022-07-30T15:07:59.747Z","dependency_job_id":null,"html_url":"https://github.com/developer-guy/policy-as-code-war","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/developer-guy/policy-as-code-war","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer-guy%2Fpolicy-as-code-war","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer-guy%2Fpolicy-as-code-war/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer-guy%2Fpolicy-as-code-war/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer-guy%2Fpolicy-as-code-war/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitH
ub/owners/developer-guy","download_url":"https://codeload.github.com/developer-guy/policy-as-code-war/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer-guy%2Fpolicy-as-code-war/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29769176,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-24T01:40:24.820Z","status":"online","status_checked_at":"2026-02-24T02:00:07.497Z","response_time":75,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["kubernetes","kyverno","minikube","opa","open-policy-agent","policy-as-code"],"created_at":"2024-10-03T11:58:04.743Z","updated_at":"2026-02-24T02:32:00.744Z","avatar_url":"https://github.com/developer-guy.png","language":null,"readme":"![policy_as_code_war](./assets/policy_as_code_war.png)\n\n# Introduction\nIn this guide, we are going to demonstrate what OPA Gatekeeper and Kyverno are, what the differences between them are, and how we can set up and use them in a Kubernetes cluster through a hands-on demo. 
\n\nSo, if you are interested in one of these topics, please keep reading; there are lots of good details in the following sections 💪.\n\nLet's start by defining what the Policy-as-Code concept is.\n\n\u003c!-- START doctoc generated TOC please keep comment here to allow auto update --\u003e\n\u003c!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --\u003e\n\n- 🧰 [Prerequisites](#prerequisites)\n- 🛡️ [What is Policy-as-Code?](#what-is-policy-as-code)\n- \u003cimg src=\"https://github.com/open-policy-agent/opa/blob/master/logo/logo.svg\" height=\"16\" width=\"16\"/\u003e [What is OPA Gatekeeper ?](#what-is-opa-gatekeeper-)\n- \u003cimg src=\"https://github.com/kyverno/artwork/blob/main/Kyverno.svg\" height=\"16\" width=\"16\"/\u003e [What is Kyverno ?](#what-is-kyverno-)\n- 🎭 [What are differences between OPA Gatekeeper and Kyverno ?](#what-are-differences-between-opa-gatekeeper-and-kyverno-)\n- 🧑‍💻 [Hands On](#hands-on)\n- 👀 [References](#references)\n\n\u003c!-- END doctoc generated TOC please keep comment here to allow auto update --\u003e\n\n# Prerequisites\n\n* \u003cimg src=\"./assets/minikube.svg\" height=\"16\" width=\"16\"/\u003e minikube v1.17.1\n* \u003cimg src=\"https://github.com/cncf/artwork/blob/master/other/illustrations/ashley-mcnamara/kubectl/kubectl.svg\" height=\"16\" width=\"16\"/\u003e kubectl v1.20.2\n\n# What is Policy-as-Code?\nSimilar to the concept of `Infrastructure-as-Code (IaC)` and the benefits you get from codifying your infrastructure setup using software development practices, `Policy-as-Code (PaC)` is the codification of your policies. \n\nPaC is the idea of writing code in a high-level language to manage and automate policies. By representing policies as code in text files, proven software development best practices can be adopted, such as version control, automated testing, and automated deployment. 
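\n\nAs a small, hypothetical illustration of what codifying a policy looks like, here is a minimal Rego rule (the package name and input shape are assumptions made up for this example) that rejects any manifest requesting fewer than two replicas; a file like this lives in version control and runs in CI like any other code:\n```rego\npackage example\n\n# deny produces a message when the incoming manifest requests fewer than 2 replicas\ndeny[msg] {\n    input.spec.replicas \u003c 2\n    msg := \"at least 2 replicas are required\"\n}\n```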
\n\nThe policies you want to enforce come from your organization’s established guidelines, agreed-upon conventions, and best practices within the industry. They could also be derived from tribal knowledge that has accumulated over the years within your operations and development teams.\n\nPaC is very general, so it can be applied to any environment in which you want to manage and enforce policies, but if you want to apply it to the Kubernetes world, two tools have become very important: OPA Gatekeeper and Kyverno.\n\nLet's continue with the description of these tools.\n\n# What is OPA Gatekeeper ?\nBefore moving on to the description of OPA Gatekeeper, we should first explain what OPA (Open Policy Agent) is.\n\n[OPA](https://github.com/open-policy-agent/opa) is an open-source, general-purpose policy engine that can be used to enforce policies on various types of software systems like microservices, CI/CD pipelines, gateways, Kubernetes, etc. OPA was developed by Styra and is currently a part of the CNCF.\n\n[OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is the policy controller for Kubernetes. More technically, it is a customizable Kubernetes admission webhook that helps enforce policies and strengthen governance.\n\nThe important thing to notice is that OPA is not tied to Kubernetes alone. OPA Gatekeeper, on the other hand, is built specifically for the Kubernetes admission control use case of OPA.\n\n# What is Kyverno ?\n[Kyverno](https://github.com/kyverno/kyverno/) is a policy engine designed for Kubernetes. With Kyverno, policies are managed as Kubernetes resources and no new language is required to write policies. This allows using familiar tools such as kubectl, git, and kustomize to manage policies. Kyverno policies can validate, mutate, and generate Kubernetes resources. The Kyverno CLI can be used to test policies and validate resources as part of a CI/CD pipeline. 
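\n\nFor example, testing a policy locally with the Kyverno CLI might look like this sketch (the file names `require-labels.yaml` and `deployment.yaml` are hypothetical):\n```bash\n# hypothetical file names: evaluate a policy against a resource offline, without a cluster\nkyverno apply require-labels.yaml --resource deployment.yaml\n```\n\n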
Kyverno is open source and also a part of the CNCF Sandbox.\n\n# What are differences between OPA Gatekeeper and Kyverno ?\nLet's summarize these differences in a table.\n\n| Features/Capabilities                       \t| Gatekeeper \t| Kyverno \t|\n|---------------------------------------------\t|------------\t|---------\t|\n| Validation                                  \t|      ✓     \t|    ✓    \t|\n| Mutation                                    \t|     ✓*     \t|    ✓    \t|\n| Generation                                  \t|      X     \t|    ✓    \t|\n| Policy as native resources                  \t|      ✓     \t|    ✓    \t|\n| Metrics exposed                             \t|      ✓     \t|    ✓    \t|\n| OpenAPI validation schema (kubectl explain) \t|      X     \t|    ✓    \t|\n| High Availability                           \t|      ✓     \t|    ✓    \t|\n| API object lookup                           \t|      ✓     \t|    ✓*   \t|\n| CLI with test ability                       \t|     ✓**    \t|    ✓    \t|\n| Policy audit ability                        \t|      ✓     \t|    ✓    \t|\n\n`* Alpha status`\n`** Separate CLI`\n\n\u003e Credit: https://neonmirrors.net/post/2021-02/kubernetes-policy-comparison-opa-gatekeeper-vs-kyverno/\n\nIn my opinion, the best advantages of using Kyverno are that there is no need to learn another policy language, plus the OpenAPI validation schema support that we can use via the `kubectl explain` command. On the other hand, OPA Gatekeeper has lots of tools developed around the Rego language to help us write and test our policies, such as [conftest](https://github.com/instrumenta/conftest) and [konstraint](https://github.com/plexsystems/konstraint), and this is a big plus. These are the tools we can use to implement a `Policy-as-Code Pipeline`. 
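\n\nAs an illustration, running conftest against a manifest before it ever reaches a cluster could look like this sketch (the manifest name and the `policy/` directory are hypothetical):\n```bash\n# hypothetical paths: test a manifest against the Rego policies stored under ./policy\nconftest test deployment.yaml --policy policy/\n```\n\n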
Another advantage of using OPA Gatekeeper is that there are lots of libraries that include ready-to-use policies written for us, such as [gatekeeper-library](https://github.com/open-policy-agent/gatekeeper-library), [konstraint-examples](https://github.com/plexsystems/konstraint/tree/main/examples) and [raspbernetes-policies](https://github.com/raspbernetes/k8s-security-policies/tree/master/policies).\n\n# Hands On\nI created two separate folders for the OPA Gatekeeper and Kyverno resources. We are going to start with the OPA Gatekeeper project first.\n\nThere are various ways to install OPA Gatekeeper, but in this section we are going to use a [plain YAML manifest](./opa-gatekeeper/deploy.yaml). In order to do that, we need to start our local Kubernetes cluster using `minikube`. We are going to use two different [Minikube profiles](https://minikube.sigs.k8s.io/docs/commands/profile/) for OPA Gatekeeper and Kyverno, which will result in two separate Kubernetes clusters.\n```bash\n$ minikube start -p opa-gatekeeper\n😄  [opa-gatekeeper] minikube v1.17.1 on Darwin 10.15.7\n✨  Using the hyperkit driver based on user configuration\n👍  Starting control plane node opa-gatekeeper in cluster opa-gatekeeper\n🔥  Creating hyperkit VM (CPUs=3, Memory=8192MB, Disk=20000MB) ...\n🌐  Found network options:\n    ▪ no_proxy=127.0.0.1,localhost\n🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...\n    ▪ env NO_PROXY=127.0.0.1,localhost\n    ▪ Generating certificates and keys ...\n    ▪ Booting up control plane ...\n    ▪ Configuring RBAC rules ...\n🔎  Verifying Kubernetes components...\n🌟  Enabled addons: storage-provisioner, default-storageclass\n🏄  Done! 
kubectl is now configured to use \"opa-gatekeeper\" cluster and \"default\" namespace by default\n```\n\nLet's apply the manifest.\n```bash\n$ kubectl apply -f opa-gatekeeper/deploy.yaml\nnamespace/gatekeeper-system created\nWarning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\ncustomresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created\ncustomresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created\ncustomresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created\ncustomresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created\nserviceaccount/gatekeeper-admin created\npodsecuritypolicy.policy/gatekeeper-admin created\nrole.rbac.authorization.k8s.io/gatekeeper-manager-role created\nclusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created\nrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created\nsecret/gatekeeper-webhook-server-cert created\nservice/gatekeeper-webhook-service created\ndeployment.apps/gatekeeper-audit created\ndeployment.apps/gatekeeper-controller-manager created\nWarning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration\nvalidatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created\n```\n\nYou should notice that a bunch of CRDs were created. Policies are defined and enforced through a `ConstraintTemplate`, which describes both the Rego that enforces the constraint and the schema of the constraint.\n\nIn this section, we are going to enforce a policy that validates required labels on resources: if the required label 
exists, we'll approve the request; if not, we'll reject it.\n\nLet's look at the `ConstraintTemplate` that we are going to apply.\n```yaml\napiVersion: templates.gatekeeper.sh/v1beta1\nkind: ConstraintTemplate\nmetadata:\n  name: k8srequiredlabels\nspec:\n  crd:\n    spec:\n      names:\n        kind: K8sRequiredLabels\n      validation:\n        # Schema for the `parameters` field\n        openAPIV3Schema:\n          properties:\n            labels:\n              type: array\n              items:\n                type: string\n  targets:\n    - target: admission.k8s.gatekeeper.sh\n      rego: |\n        package k8srequiredlabels\n\n        violation[{\"msg\": msg, \"details\": {\"missing_labels\": missing}}] {\n          provided := {label | input.review.object.metadata.labels[label]}\n          required := {label | label := input.parameters.labels[_]}\n          missing := required - provided\n          count(missing) \u003e 0\n          msg := sprintf(\"you must provide labels: %v\", [missing])\n        }\n```\n\nYou should notice that the policy we define in the Rego language is placed under the `.targets[].rego` section. Once we apply this to the cluster, a `K8sRequiredLabels` Custom Resource Definition is going to be created, and by creating instances of this CR we'll define our policy context, meaning which resources we want to apply the policy to.\n\nLet's apply it.\n```bash\n$ kubectl apply -f opa-gatekeeper/k8srequiredlabels-constraint-template.yaml\nconstrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created\n\n$ kubectl get customresourcedefinitions.apiextensions.k8s.io\nFound existing alias for \"kubectl\". 
You should use: \"k\"\nNAME                                                 CREATED AT\nconfigs.config.gatekeeper.sh                         2021-02-25T09:06:10Z\nconstraintpodstatuses.status.gatekeeper.sh           2021-02-25T09:06:10Z\nconstrainttemplatepodstatuses.status.gatekeeper.sh   2021-02-25T09:06:10Z\nconstrainttemplates.templates.gatekeeper.sh          2021-02-25T09:06:10Z\nk8srequiredlabels.constraints.gatekeeper.sh          2021-02-25T09:19:39Z\n```\n\nAs you can see, the `k8srequiredlabels` CRD is created. Let's define and apply a `K8sRequiredLabels` resource too.\n```yaml\napiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sRequiredLabels\nmetadata:\n  name: ns-must-have-gk\nspec:\n  match:\n    kinds:\n      - apiGroups: [\"\"]\n        kinds: [\"Namespace\"]\n  parameters:\n    labels: [\"gatekeeper\"]\n```\n\nYou should notice that we'll enforce the policy on the `Namespace` resource, and the label key that we require on the Namespace is `gatekeeper`.\n```bash\n$ kubectl apply -f opa-gatekeeper/k8srequiredlabels-constraint.yaml\nk8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-gk created\n```\n\nLet's test by creating an invalid namespace, then a valid one.\n```bash\n$ kubectl apply -f opa-gatekeeper/invalid-namespace.yaml\nFound existing alias for \"kubectl apply -f\". You should use: \"kaf\"\nError from server ([denied by ns-must-have-gk] you must provide labels: {\"gatekeeper\"}): error when creating \"opa-gatekeeper/invalid-namespace.yaml\": admission webhook \"validation.gatekeeper.sh\" denied the request: [denied by ns-must-have-gk] you must provide labels: {\"gatekeeper\"}\n```\n\n```bash\n$ kubectl apply -f opa-gatekeeper/valid-namespace.yaml\nFound existing alias for \"kubectl apply -f\". You should use: \"kaf\"\nnamespace/valid-namespace created\n```\n\nTadaaaa, it worked 🎉🎉🎉🎉\n\nLet's move on to Kyverno. Again, there are various ways to install it on Kubernetes; in this case, we are going to use Helm. 
We said that we'll start up another Minikube cluster with a different profile.\nLet's start it.\n```bash\n$ minikube start -p kyverno\n😄  [kyverno] minikube v1.17.1 on Darwin 10.15.7\n✨  Using the hyperkit driver based on user configuration\n👍  Starting control plane node kyverno in cluster kyverno\n🔥  Creating hyperkit VM (CPUs=3, Memory=8192MB, Disk=20000MB) ...\n🌐  Found network options:\n    ▪ no_proxy=127.0.0.1,localhost\n🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...\n    ▪ env NO_PROXY=127.0.0.1,localhost\n    ▪ Generating certificates and keys ...\n    ▪ Booting up control plane ...\n    ▪ Configuring RBAC rules ...\n🔎  Verifying Kubernetes components...\n🌟  Enabled addons: storage-provisioner, default-storageclass\n🏄  Done! kubectl is now configured to use \"kyverno\" cluster and \"default\" namespace by default\n\n$ minikube profile list\n|----------------|-----------|---------|---------------|------|---------|---------|-------|\n|    Profile     | VM Driver | Runtime |      IP       | Port | Version | Status  | Nodes |\n|----------------|-----------|---------|---------------|------|---------|---------|-------|\n| kyverno        | hyperkit  | docker  | 192.168.64.17 | 8443 | v1.20.2 | Running |     1 |\n| minikube       | hyperkit  | docker  | 192.168.64.15 | 8443 | v1.20.2 | Stopped |     1 |\n| opa-gatekeeper | hyperkit  | docker  | 192.168.64.16 | 8443 | v1.20.2 | Running |     1 |\n|----------------|-----------|---------|---------------|------|---------|---------|-------|\n```\n\nLet's install it by using Helm.\n```bash\n$ helm repo add kyverno https://kyverno.github.io/kyverno/\n\"kyverno\" has been added to your repositories\n\n$ helm repo update\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"kyverno\" chart repository\n...Successfully got an update from the \"nats\" chart repository\n...Successfully got an update from the \"falcosecurity\" chart repository\n...Successfully got 
an update from the \"openfaas\" chart repository\n...Successfully got an update from the \"stable\" chart repository\nUpdate Complete. ⎈Happy Helming!⎈\n\n$ helm install kyverno --namespace kyverno kyverno/kyverno --create-namespace\nNAME: kyverno\nLAST DEPLOYED: Thu Feb 25 13:16:21 2021\nNAMESPACE: kyverno\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nThank you for installing kyverno 😀\n\nYour release is named kyverno.\n\nWe have installed the \"default\" profile of Pod Security Standards and set them in audit mode.\n\nVisit https://kyverno.io/policies/ to find more sample policies.\n```\n\nLet's look at the Custom Resource Definitions list.\n```bash\n$ kubectl get customresourcedefinitions.apiextensions.k8s.io\nFound existing alias for \"kubectl\". You should use: \"k\"\nNAME                                     CREATED AT\nclusterpolicies.kyverno.io               2021-02-25T10:16:16Z\nclusterpolicyreports.wgpolicyk8s.io      2021-02-25T10:16:16Z\nclusterreportchangerequests.kyverno.io   2021-02-25T10:16:16Z\ngeneraterequests.kyverno.io              2021-02-25T10:16:16Z\npolicies.kyverno.io                      2021-02-25T10:16:16Z\npolicyreports.wgpolicyk8s.io             2021-02-25T10:16:16Z\nreportchangerequests.kyverno.io          2021-02-25T10:16:16Z\n```\n\nWe can also use the `kubectl explain` command to easily get information about a resource through its OpenAPI schema.\n```bash\n$ kubectl explain policies\nKIND:     Policy\nVERSION:  kyverno.io/v1\n\nDESCRIPTION:\n     Policy declares validation, mutation, and generation behaviors for matching\n     resources. See: https://kyverno.io/docs/writing-policies/ for more\n     information.\n\nFIELDS:\n   apiVersion\t\u003cstring\u003e\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. 
More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\u003cstring\u003e\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\u003cObject\u003e\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\u003cObject\u003e -required-\n     Spec defines policy behaviors and contains one or rules.\n\n   status\t\u003cObject\u003e\n     Status contains policy runtime information.\n```\n\nLet's look at our first policy definition. In this case we are using the validation feature of Kyverno.\n```yaml\napiVersion: kyverno.io/v1\nkind: ClusterPolicy\nmetadata:\n  name: require-labels\nspec:\n  validationFailureAction: enforce\n  rules:\n  - name: check-for-labels\n    match:\n      resources:\n        kinds:\n        - Pod\n    validate:\n      message: \"label `app.kubernetes.io/name` is required\"\n      pattern:\n        metadata:\n          labels:\n            app.kubernetes.io/name: \"?*\"\n```\nYou should notice that we are enforcing a required-label policy on the Pod resource. The `\"?*\"` pattern matches any non-empty string, so the label must be present and have a value. We are defining policies using a native Kyverno Custom Resource called `ClusterPolicy`.\n\nLet's apply it.\n```bash\n$ kubectl apply -f kyverno/validating/requirelabels-clusterpolicy.yaml\nclusterpolicy.kyverno.io/require-labels created\n```\n\nLet's test it by creating a Deployment that violates the policy.\n```bash\n$ kubectl apply -f kyverno/validating/invalid-deployment.yaml\nFound existing alias for \"kubectl apply -f\". 
You should use: \"kaf\"\nError from server: error when creating \"kyverno/validating/invalid-deployment.yaml\": admission webhook \"validate.kyverno.svc\" denied the request:\n\nresource Deployment/default/nginx was blocked due to the following policies\n\nrequire-labels:\n  autogen-check-for-labels: 'validation error: label `app.kubernetes.io/name` is required. Rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'\n```\n\nLet's apply the valid one.\n```bash\n$ kubectl apply -f kyverno/validating/valid-deployment.yaml\npod/nginx created\n\n$ kubectl get pods\nNAME    READY   STATUS              RESTARTS   AGE\nnginx   0/1     ContainerCreating   0          6s\n```\n\nTadaaaa, it worked 🎉🎉🎉🎉\n\n# References\n* https://medium.com/trendyol-tech/enforce-organizational-policies-and-security-best-practices-to-your-kubernetes-clusters-by-using-dfc085528e07\n* https://www.velotio.com/engineering-blog/deploy-opa-on-kubernetes\n* https://engineering.mercari.com/en/blog/entry/20201222-enhance-kubernetes-security-with-opa-gatekeeper/\n* https://docs.hashicorp.com/sentinel/concepts/policy-as-code\n* https://www.magalix.com/blog/policy-as-code-for-kubernetes\n* https://betterprogramming.pub/policy-as-code-on-kubernetes-with-kyverno-b144749f144\n* https://itnext.io/fitness-validation-for-your-kubernetes-apps-policy-as-code-7fad698e7dec\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeveloper-guy%2Fpolicy-as-code-war","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeveloper-guy%2Fpolicy-as-code-war","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeveloper-guy%2Fpolicy-as-code-war/lists"}