{"id":20826733,"url":"https://github.com/cedrickchee/postgres-operator","last_synced_at":"2026-03-17T13:23:23.844Z","repository":{"id":138118492,"uuid":"418126235","full_name":"cedrickchee/postgres-operator","owner":"cedrickchee","description":"Learn how to deploy Zalando Postgres operator to my Kubernetes environment (local k3s cluster)","archived":false,"fork":false,"pushed_at":"2021-10-17T12:52:03.000Z","size":7,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-01-18T17:49:35.980Z","etag":null,"topics":["educational-project","kubernetes-controller","kubernetes-operator","patroni","postgresql-ha-cluster","spilo"],"latest_commit_sha":null,"homepage":"","language":"Makefile","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cedrickchee.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-10-17T12:34:58.000Z","updated_at":"2024-11-01T11:30:58.000Z","dependencies_parsed_at":null,"dependency_job_id":"bfc3b2d7-c8bb-4a2b-ac99-ba084e464f72","html_url":"https://github.com/cedrickchee/postgres-operator","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fpostgres-operator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fpostgres-operator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fpostgres-operator/releases","manifests_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/repositories/cedrickchee%2Fpostgres-operator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cedrickchee","download_url":"https://codeload.github.com/cedrickchee/postgres-operator/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243174004,"owners_count":20248214,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["educational-project","kubernetes-controller","kubernetes-operator","patroni","postgresql-ha-cluster","spilo"],"created_at":"2024-11-17T23:09:53.520Z","updated_at":"2025-12-25T13:57:27.299Z","avatar_url":"https://github.com/cedrickchee.png","language":"Makefile","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Kubernetes Postgres Operator\n\n[Zalando Postgres Operator](https://github.com/zalando/postgres-operator)\ncreates and manages PostgreSQL clusters running in Kubernetes.\n\nIt delivers easy-to-run, highly-available PostgreSQL clusters on Kubernetes,\npowered by proven solutions \"under the hood\", such as:\n\n- [Patroni](https://github.com/zalando/patroni) and\n  [Spilo](https://github.com/zalando/spilo) for management,\n- [WAL-G](https://github.com/wal-g/wal-g) for backups,\n- [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a connection pool.\n\nThis project contains my notes from working through the [Quickstart](https://postgres-operator.readthedocs.io/en/latest/quickstart/).\n\n## Quickstart\n\nThis guide aims to give you a quick look and feel for using the Postgres\nOperator in a local Kubernetes environment.\n\n### Prerequisites\n\nSince the 
Postgres Operator is designed for the Kubernetes (K8s) framework, set\nit up first. For local tests I'm using [k3d](https://k3d.io/), which\nallows creating multi-node K8s clusters running on Docker.\n\nTo interact with the K8s infrastructure, install its CLI runtime\n[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl).\n\nThis quickstart assumes that you have started minikube or created a local kind\ncluster.\n\nI created a new cluster using k3d (a k3s wrapper).\n\n```sh\n$ make k8s/cluster/up\n```\n\n### Configuration Options\n\nConfiguring the Postgres Operator is only possible before deploying a new\nPostgres cluster. This can work in two ways: via a ConfigMap or a custom\n`OperatorConfiguration` object. More details on configuration can be found\n[here](https://postgres-operator.readthedocs.io/en/latest/reference/operator_parameters/).\n\n### Deployment options\n\nThe Postgres Operator can be deployed in the following ways:\n\n- Manual deployment\n- Kustomization\n- Helm chart\n\n#### Manual deployment setup\n\nThe Postgres Operator can be installed simply by applying YAML manifests. Note\nthat the `/manifests` directory is provided as an example only; you should consider\nadjusting the manifests to your K8s environment (e.g. 
namespaces).\n\n```sh\n# Please execute the script only from the root directory of this repo.\n\n# First, clone the repository and change to the directory\n$ git clone https://github.com/zalando/postgres-operator.git pgop\n\n# apply the manifests in the following order\n$ kubectl create -f pgop/manifests/configmap.yaml  # configuration\nconfigmap/postgres-operator created\n\n$ kubectl create -f pgop/manifests/operator-service-account-rbac.yaml  # identity and permissions\nserviceaccount/postgres-operator created\nclusterrole.rbac.authorization.k8s.io/postgres-operator created\nclusterrolebinding.rbac.authorization.k8s.io/postgres-operator created\nclusterrole.rbac.authorization.k8s.io/postgres-pod created\n\n$ kubectl create -f pgop/manifests/postgres-operator.yaml  # deployment\ndeployment.apps/postgres-operator created\n\n$ kubectl create -f pgop/manifests/api-service.yaml  # operator API to be used by UI\nservice/postgres-operator created\n```\n\n\u003e There is a Kustomization manifest that combines the mentioned resources\n\u003e (except for the CRD) - it can be used with kubectl 1.14 or newer as easy as:\n\u003e `kubectl apply -k github.com/zalando/postgres-operator/manifests`\n\nFor convenience, they have automated starting the operator with minikube using\nthe `run_operator_locally` script. It applies the\n[acid-minimal-cluster](pgop/manifests/minimal-postgres-manifest.yaml) manifest.\n\n`$ ./pgop/run_operator_locally.sh`\n\n#### Helm chart\n\nAlternatively, the operator can be installed by using the provided\n[Helm](https://helm.sh/) chart, which saves you the manual steps. Clone this repo\nand change directory to the repo root. With Helm v3 installed, you should be able\nto run:\n\n```sh\n$ helm install postgres-operator ./pgop/charts/postgres-operator\n```\n\nSee the quickstart doc for more.\n\n### Check if Postgres Operator is running\n\nStarting the operator may take a few seconds. 
Check if the operator pod is\nrunning before applying a Postgres cluster manifest.\n\n```sh\n# if you've created the operator using yaml manifests\n$ kubectl get pod -l name=postgres-operator\n\n# if you've created the operator using helm chart\n$ kubectl get pod -l app.kubernetes.io/name=postgres-operator\n```\n\nIf the operator doesn't get into `Running` state, either check the latest K8s\nevents of the deployment or pod with `kubectl describe` or inspect the operator\nlogs:\n\n```sh\n$ kubectl logs \"$(kubectl get pod -l name=postgres-operator --output='name')\"\n```\n\n### Deploy the operator UI\n\nIn the following paragraphs we describe how to access and manage PostgreSQL\nclusters from the command line with kubectl. But it can also be done from the\nbrowser-based [Postgres Operator UI](https://postgres-operator.readthedocs.io/en/latest/operator-ui/).\nBefore deploying the UI make sure the operator is running and its REST API is\nreachable through a [K8s service](pgop/manifests/api-service.yaml). 
The URL to this API must be configured in the\n[deployment manifest](https://postgres-operator.readthedocs.io/en/ui/manifests/deployment.yaml#L43)\nof the UI.\n\nTo deploy the UI, simply apply all its manifest files or use the UI Helm chart:\n\n```sh\n# manual deployment\n$ kubectl apply -f pgop/ui/manifests/\ndeployment.apps/postgres-operator-ui created\ningress.networking.k8s.io/postgres-operator-ui created\nservice/postgres-operator-ui created\nserviceaccount/postgres-operator-ui created\nclusterrole.rbac.authorization.k8s.io/postgres-operator-ui created\nclusterrolebinding.rbac.authorization.k8s.io/postgres-operator-ui created\nerror: unable to recognize \"pgop/ui/manifests/kustomization.yaml\": no matches for kind \"Kustomization\" in version \"kustomize.config.k8s.io/v1beta1\"\n\n# or kustomization\n$ kubectl apply -k github.com/zalando/postgres-operator/ui/manifests\n\n# or helm chart\n$ helm install postgres-operator-ui ./charts/postgres-operator-ui\n```\n\nLike with the operator, check if the UI pod gets into `Running` state:\n\n```sh\n# if you've created the operator using yaml manifests\n$ kubectl get pod -l name=postgres-operator-ui\n\n# if you've created the operator using helm chart\n$ kubectl get pod -l app.kubernetes.io/name=postgres-operator-ui\n```\n\nYou can now access the web interface by port forwarding the UI pod (mind the\nlabel selector) and entering `localhost:8081` in your browser:\n\n```sh\n$ kubectl port-forward svc/postgres-operator-ui 8081:80\nForwarding from 127.0.0.1:8081 -\u003e 8081\nForwarding from [::1]:8081 -\u003e 8081\nHandling connection for 8081\n```\n\n```sh\n$ curl -i localhost:8081\nHTTP/1.1 200 OK\nContent-Type: text/html; charset=utf-8\nContent-Length: 4699\nDate: Fri, 15 Oct 2021 15:10:37 GMT\n\n\u003c!doctype html\u003e\n\u003chtml lang=\"en\"\u003e\n  \u003chead\u003e\n    \u003cmeta charset=\"utf-8\"\u003e\n    \u003ctitle\u003ePostgreSQL Operator UI\u003c/title\u003e\n...\n\u003c/html\u003e\n```\n\nAvailable options 
are explained in detail in the [UI docs](https://postgres-operator.readthedocs.io/en/latest/operator-ui/).\n\n### Create a Postgres cluster\n\nIf the operator pod is running, it listens for new events regarding `postgresql`\nresources. Now, it's time to submit your first Postgres cluster manifest.\n\n```sh\n# create a Postgres cluster\n$ kubectl create -f pgop/manifests/minimal-postgres-manifest.yaml\nThe postgresql \"acid-minimal-cluster\" is invalid: spec.postgresql.version: Unsupported value: \"14\": supported values: \"9.3\", \"9.4\", \"9.5\", \"9.6\", \"10\", \"11\", \"12\", \"13\"\n# fix and retry\n\n$ kubectl create -f pgop/manifests/minimal-postgres-manifest.yaml\npostgresql.acid.zalan.do/acid-minimal-cluster created\n```\n\nAfter the cluster manifest is submitted and has passed validation, the operator\nwill create Service and Endpoint resources and a StatefulSet, which spins up new\nPod(s) given the number of instances specified in the manifest. All resources\nare named after the cluster. The database pods can be identified by their number\nsuffix, starting from `-0`. They run the\n[Spilo](https://github.com/zalando/spilo) container image by Zalando. As for the\nservices and endpoints, there will be one for the master pod and another one for\nall the replicas (`-repl` suffix). Check if all components are coming up. 
Use\nthe label `application=spilo` to filter, and the label `spilo-role` to see\nwhich pod is currently the master.\n\n```sh\n# check the deployed cluster\n$ kubectl get postgresql\nNAME                   TEAM   VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE     STATUS\nacid-minimal-cluster   acid   13        2      1Gi                                     8m19s   Running\n\n# check created database pods\n$ kubectl get pods -l application=spilo -L spilo-role\nNAME                     READY   STATUS    RESTARTS   AGE   SPILO-ROLE\nacid-minimal-cluster-0   1/1     Running   0          21m   master\nacid-minimal-cluster-1   1/1     Running   0          16m   replica\n\n# check created service resources\n$ kubectl get svc -l application=spilo -L spilo-role\nNAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SPILO-ROLE\nacid-minimal-cluster          ClusterIP   10.43.66.169    \u003cnone\u003e        5432/TCP   21m   master\nacid-minimal-cluster-repl     ClusterIP   10.43.252.226   \u003cnone\u003e        5432/TCP   21m   replica\nacid-minimal-cluster-config   ClusterIP   None            \u003cnone\u003e        \u003cnone\u003e     17m\n```\n\n### Connect to the Postgres cluster via psql\n\nYou can create a port-forward on a database pod to connect to Postgres. See the\n[user guide](https://postgres-operator.readthedocs.io/en/latest/user/#connect-to-postgresql)\nfor instructions. With minikube, it's also easy to retrieve the connection\nstring from the K8s service that is pointing to the master pod:\n\n```sh\n$ export HOST_PORT=$(minikube service acid-minimal-cluster --url | sed 's,.*/,,')\n$ export PGHOST=$(echo $HOST_PORT | cut -d: -f 1)\n$ export PGPORT=$(echo $HOST_PORT | cut -d: -f 2)\n```\n\nNote: as I'm not using minikube, I follow the instructions mentioned in the user\nguide (copied here):\n\n\u003e With a `port-forward` on one of the database pods (e.g. 
the master) you can\n\u003e connect to the PostgreSQL database from your machine. Use labels to filter for\n\u003e the master pod of our test cluster.\n\n```sh\n# get name of master pod of acid-minimal-cluster\n$ export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-minimal-cluster,spilo-role=master -n default)\n\n# set up port forward\n$ kubectl port-forward $PGMASTER 6432:5432 -n default\nForwarding from 127.0.0.1:6432 -\u003e 5432\nForwarding from [::1]:6432 -\u003e 5432\nHandling connection for 6432\n```\n\n**Sidenote:**\n\nThere is another way to connect from your host to a K8s service running in a k3s pod.\nSee the k3d docs: [exposing services](https://k3d.io/usage/guides/exposing_services/).\n\n\u003e Open another CLI and connect to the database using e.g. the psql client. When\n\u003e connecting with the `postgres` user read its password from the K8s secret which\n\u003e was generated when creating the `acid-minimal-cluster`. As non-encrypted\n\u003e connections are rejected by default set the SSL mode to `require`:\n\n```sh\n$ export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)\n$ export PGSSLMODE=require\n$ psql -U postgres -h localhost -p 6432\npsql (14.0 (Ubuntu 14.0-1.pgdg20.04+1), server 13.4 (Ubuntu 13.4-4.pgdg18.04+1))\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)\n...\npostgres=#\n```\n\n**Sidenote:**\n\nCheck the _master_ database replication:\n\n```sh\npostgres=# select * from pg_stat_replication\\\x\\\g\\\x\nExpanded display is on.\n-[ RECORD 1 ]----+------------------------------\npid              | 2727\nusesysid         | 16661\nusename          | standby\napplication_name | acid-minimal-cluster-1\nclient_addr      | 10.42.0.14\nclient_hostname  | \nclient_port      | 37126\nbackend_start    | 2021-10-16 09:40:29.651345+00\nbackend_xmin     | \nstate            
| streaming\nsent_lsn         | 0/E0017B0\nwrite_lsn        | 0/E0017B0\nflush_lsn        | 0/E0017B0\nreplay_lsn       | 0/E0017B0\nwrite_lag        | \nflush_lag        | \nreplay_lag       | \nsync_priority    | 0\nsync_state       | async\nreply_time       | 2021-10-16 11:11:47.517085+00\n\nExpanded display is off.\n```\n\nCheck the _replica_ database replication:\n\n```sh\npostgres=# select * from pg_stat_wal_receiver\\x\\g\\x\nExpanded display is on.\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid                   | 1262\nstatus                | streaming\nreceive_start_lsn     | 0/C000000\nreceive_start_tli     | 3\nwritten_lsn           | 0/F000110\nflushed_lsn           | 0/F000110\nreceived_tli          | 3\nlast_msg_send_time    | 2021-10-16 11:21:09.534223+00\nlast_msg_receipt_time | 2021-10-16 11:21:09.534364+00\nlatest_end_lsn        | 0/F000110\nlatest_end_time       | 2021-10-16 11:12:08.207732+00\nslot_name             | acid_minimal_cluster_1\nsender_host           | 10.42.0.12\nsender_port           | 5432\nconninfo              | user=standby passfile=/run/postgresql/pgpass host=10.42.0.12 port=5432 sslmode=prefer application_name=acid-minimal-cluster-1 gssencmode=prefer channel_binding=prefer\n\nExpanded display is off.\n```\n\nSee this SO thread for more: https://stackoverflow.com/a/54164409/206570\n\n### Delete a Postgres cluster\n\nTo delete a Postgres cluster simply delete the `postgresql` custom resource.\n\n```sh\n$ kubectl delete postgresql acid-minimal-cluster\npostgresql.acid.zalan.do \"acid-minimal-cluster\" deleted\n```\n\nThis should remove the associated StatefulSet, database Pods, Services and\nEndpoints. The PersistentVolumes are released and the PodDisruptionBudget is\ndeleted. 
Secrets, however, are not deleted, and backups will remain in place.\n\nIf you delete a cluster while it is still starting up, or it got stuck during that\nphase, it can [happen](https://github.com/zalando/postgres-operator/issues/551)\nthat the `postgresql` resource is deleted, leaving orphaned components behind. This\ncan cause trouble when creating a new Postgres cluster. For a fresh setup, you\ncan delete your local minikube or kind cluster and start again.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcedrickchee%2Fpostgres-operator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcedrickchee%2Fpostgres-operator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcedrickchee%2Fpostgres-operator/lists"}