{"id":13742614,"url":"https://github.com/ucloud/redis-operator","last_synced_at":"2025-09-10T08:09:13.188Z","repository":{"id":37664645,"uuid":"202650123","full_name":"ucloud/redis-operator","owner":"ucloud","description":"Redis operator build a Highly Available Redis cluster with Sentinel atop Kubernetes","archived":false,"fork":false,"pushed_at":"2022-12-27T03:33:08.000Z","size":8231,"stargazers_count":215,"open_issues_count":20,"forks_count":66,"subscribers_count":11,"default_branch":"master","last_synced_at":"2025-05-26T19:05:47.165Z","etag":null,"topics":["kubernetes","operator","redis-sentinel"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ucloud.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2019-08-16T03:14:17.000Z","updated_at":"2025-05-12T20:18:08.000Z","dependencies_parsed_at":"2023-01-31T02:31:17.772Z","dependency_job_id":null,"html_url":"https://github.com/ucloud/redis-operator","commit_stats":null,"previous_names":[],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/ucloud/redis-operator","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ucloud%2Fredis-operator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ucloud%2Fredis-operator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ucloud%2Fredis-operator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ucloud%2Fredis-operator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ucloud","download_url":"https://codeload.github.co
m/ucloud/redis-operator/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ucloud%2Fredis-operator/sbom","scorecard":{"id":906703,"data":{"date":"2025-08-11","repo":{"name":"github.com/ucloud/redis-operator","commit":"7e1236998e418e6e3c59a719596b09a46d0bc62d"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":2.5,"checks":[{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Code-Review","score":6,"reason":"Found 4/6 approved changesets -- score normalized to 6","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows 
follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: Apache License 2.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a 
license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Pinned-Dependencies","score":2,"reason":"dependency not pinned by hash detected -- score normalized to 2","details":["Warn: containerImage not pinned by hash: Dockerfile:1","Warn: containerImage not pinned by hash: Dockerfile:26","Warn: containerImage not pinned by hash: Dockerfile-withvendor:1","Warn: containerImage not pinned by hash: Dockerfile-withvendor:22","Warn: containerImage not pinned by hash: build/Dockerfile:1: pin your Docker image by updating registry.access.redhat.com/ubi7/ubi-minimal:latest to registry.access.redhat.com/ubi7/ubi-minimal:latest@sha256:244564fff3841542a17abe5d7daf5c803676dc13af48ea5302218b31ee1c2db7","Warn: containerImage not pinned by hash: test/e2e/Dockerfile:2","Warn: goCommand not pinned by hash: test/e2e/Dockerfile:8","Warn: goCommand not pinned by hash: vendor/github.com/json-iterator/go/build.sh:10","Info:   0 out of   6 containerImage dependencies pinned","Info:   2 out of   4 goCommand dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build 
process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 30 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Vulnerabilities","score":0,"reason":"43 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: GO-2025-3852 / GHSA-856v-8qm2-9wjv","Warn: Project is vulnerable to: GO-2022-0322 / GHSA-cg3q-j54f-5p7p","Warn: Project is vulnerable to: GO-2024-2748 / GHSA-33c5-9fx5-fvjm","Warn: Project is vulnerable to: GO-2021-0065 / GHSA-jmrx-5g74-6v2f","Warn: Project is vulnerable to: GO-2021-0064 / GHSA-8cfg-vx93-jvxw","Warn: Project is vulnerable to: GO-2020-0017 / GHSA-w73w-5m7g-f7qc","Warn: Project is vulnerable to: GO-2022-0209 / GHSA-r5c5-pr8j-pfp7","Warn: Project is vulnerable to: GO-2023-1992 / GHSA-x3jr-pf6g-c48f","Warn: Project is vulnerable to: GO-2022-0229 / GHSA-cjjc-xp8v-855w","Warn: Project is vulnerable to: GO-2020-0012 / GHSA-ffhg-7mh4-33c4","Warn: Project is vulnerable to: GO-2021-0227 / GHSA-3vm4-22fp-5rfm","Warn: Project is vulnerable to: GO-2022-0968 / GHSA-gwc9-m7rh-j2ww","Warn: Project is vulnerable to: GO-2021-0356 / GHSA-8c26-wmh5-6g9v","Warn: Project is vulnerable to: GO-2024-2961","Warn: Project is vulnerable to: GO-2023-2402 / GHSA-45x7-px36-x8w8","Warn: Project is vulnerable to: GO-2024-3321 / GHSA-v778-237x-gjrc","Warn: Project is vulnerable to: GO-2025-3487 / GHSA-hcg3-q754-cr77","Warn: Project is vulnerable to: GO-2022-0536 / GHSA-39qc-96h7-956f / GHSA-hgr8-6h9x-f7q9","Warn: Project is vulnerable to: GO-2022-0236 / GHSA-h86h-8ppg-mxmh","Warn: Project is vulnerable to: GO-2021-0238 / GHSA-83g2-8m93-v3w7","Warn: Project is vulnerable to: 
GO-2022-0288","Warn: Project is vulnerable to: GO-2022-0969 / GHSA-69cg-p879-7622","Warn: Project is vulnerable to: GO-2022-1144 / GHSA-xrjj-mj9h-534m","Warn: Project is vulnerable to: GO-2023-1571 / GHSA-vvpx-j8f3-3w6h","Warn: Project is vulnerable to: GO-2023-1988 / GHSA-2wrh-6pvc-2jm9","Warn: Project is vulnerable to: GO-2023-2102 / GHSA-4374-p667-p6c8","Warn: Project is vulnerable to: GHSA-qppj-fm5r-hxr3","Warn: Project is vulnerable to: GO-2024-2687 / GHSA-4v7x-pqxf-cx7m","Warn: Project is vulnerable to: GO-2024-3333","Warn: Project is vulnerable to: GO-2025-3503 / GHSA-qxp5-gwg8-xv66","Warn: Project is vulnerable to: GO-2025-3595 / GHSA-vvgc-356p-c3xw","Warn: Project is vulnerable to: GO-2020-0015 / GHSA-5rcv-m4m3-hfh7","Warn: Project is vulnerable to: GO-2021-0113 / GHSA-ppp9-7jff-5vj2","Warn: Project is vulnerable to: GO-2022-1059 / GHSA-69ch-w2m2-3vjp","Warn: Project is vulnerable to: GO-2022-0493 / GHSA-p782-xgp4-8hr8","Warn: Project is vulnerable to: GO-2021-0061 / GHSA-r88r-gmrh-7j83","Warn: Project is vulnerable to: GO-2022-0956 / GHSA-6q6q-88xp-6f2r","Warn: Project is vulnerable to: GO-2020-0036 / GHSA-wxc4-f4m6-wwqv","Warn: Project is vulnerable to: GO-2022-0193 / GHSA-fcf9-6fv2-fc5v","Warn: Project is vulnerable to: GO-2022-0192 / GHSA-2wp2-chmh-r934","Warn: Project is vulnerable to: GO-2022-0197 / GHSA-4r78-hx75-jjj2 / GHSA-mv93-wvcp-7m7r","Warn: Project is vulnerable to: GO-2020-0014 / GHSA-vfw5-hrgq-h5wf","Warn: Project is vulnerable to: GO-2021-0078 / GHSA-5p4h-3377-7w67"],"documentation":{"short":"Determines if the project has open, known unfixed 
vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}}]},"last_synced_at":"2025-08-24T17:37:26.865Z","repository_id":37664645,"created_at":"2025-08-24T17:37:26.865Z","updated_at":"2025-08-24T17:37:26.865Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274429155,"owners_count":25283322,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-10T02:00:12.551Z","response_time":83,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["kubernetes","operator","redis-sentinel"],"created_at":"2024-08-03T05:00:34.096Z","updated_at":"2025-09-10T08:09:13.131Z","avatar_url":"https://github.com/ucloud.png","language":"Go","readme":"# redis-operator\n\n## Overview\n\nThe Redis operator builds a **Highly Available Redis cluster with Sentinel** atop Kubernetes.\nUsing this operator you can create a Redis deployment that survives certain kinds of failures without human intervention.\n\nThe operator itself is built with the [Operator framework](https://github.com/operator-framework/operator-sdk).\n\nIt is inspired by [spotahome/redis-operator](https://github.com/spotahome/redis-operator).\n\n![Redis Cluster atop Kubernetes](/static/redis-sentinel-readme.png)\n\n* Create a statefulset to manage Redis instances (masters and replicas); each Redis instance has a default PreStop script that can perform a failover if the master 
is down.\n* Create a statefulset to manage Sentinel instances that control the Redis nodes; each Sentinel instance has a default ReadinessProbe script to detect whether the current Sentinel's status is OK. When a Sentinel pod is not ready, it is removed from Service load balancers.\n* Create a Service and a headless Service for the Sentinel statefulset.\n* Create a headless Service for the Redis statefulset.\n\nTable of Contents\n=================\n\n   * [redis-operator](#redis-operator)\n      * [Overview](#overview)\n      * [Prerequisites](#prerequisites)\n      * [Features](#features)\n      * [Quick Start](#quick-start)\n         * [Deploy redis operator](#deploy-redis-operator)\n         * [Deploy a sample redis cluster](#deploy-a-sample-redis-cluster)\n            * [Resize an Redis Cluster](#resize-an-redis-cluster)\n            * [Create redis cluster with password](#create-redis-cluster-with-password)\n            * [Dynamically changing redis config](#dynamically-changing-redis-config)\n            * [Persistence](#persistence)\n            * [Custom SecurityContext](#custom-securitycontext)\n         * [Cleanup](#cleanup)\n      * [Automatic failover details](#automatic-failover-details)\n\n## Prerequisites\n\n* go version v1.13+.\n* Access to a Kubernetes v1.13.10+ cluster.\n\n## Features\nIn addition to Sentinel's own capabilities, redis-operator can:\n\n* Push events and status updates to Kubernetes when resources change state\n* Watch and manage resources in a single namespace or cluster-wide\n* Create a Redis cluster with a password\n* Dynamically change the Redis config\n* Automatically recover from accidental deletion\n* Add persistence\n* Set a custom SecurityContext\n\n## Quick Start\n\n### Deploy redis operator\n\nRegister the RedisCluster custom resource definition (CRD):\n```\n$ kubectl create -f deploy/crds/redis_v1beta1_rediscluster_crd.yaml\n```\n\nA namespace-scoped operator watches and manages resources in a single namespace, whereas 
a cluster-scoped operator watches and manages resources cluster-wide.\nYou can choose to run your operator as namespace-scoped or cluster-scoped.\n```\n// cluster-scoped\n$ kubectl create -f deploy/service_account.yaml\n$ kubectl create -f deploy/cluster/cluster_role.yaml\n$ kubectl create -f deploy/cluster/cluster_role_binding.yaml\n$ kubectl create -f deploy/cluster/operator.yaml\n\n// namespace-scoped\n$ kubectl create -f deploy/service_account.yaml\n$ kubectl create -f deploy/namespace/role.yaml\n$ kubectl create -f deploy/namespace/role_binding.yaml\n$ kubectl create -f deploy/namespace/operator.yaml\n```\n\nVerify that the redis-operator is up and running:\n```\n$ kubectl get deployment\nNAME             READY   UP-TO-DATE   AVAILABLE   AGE\nredis-operator   1/1     1            1           65d\n```\n\n### Deploy a sample redis cluster\n```\n$ cat deploy/cluster/redis_v1beta1_rediscluster_cr.yaml\napiVersion: redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  annotations:\n    # if your operator runs as cluster-scoped, add this annotation\n    redis.kun/scope: cluster-scoped\n  name: test\nspec:\n  # Add fields here\n  size: 3\n\nkubectl apply -f deploy/cluster/redis_v1beta1_rediscluster_cr.yaml\nif you run the operator as namespace-scoped, run:\nkubectl apply -f deploy/namespace/redis_v1beta1_rediscluster_cr.yaml\n```\n\nVerify that the cluster instances and their components are running:\n```\n$ kubectl get rediscluster\nNAME   SIZE   STATUS    AGE\ntest   3      Healthy   4m9s\n\n$ kubectl get all -l app.kubernetes.io/managed-by=redis-operator\nNAME                        READY   STATUS    RESTARTS   AGE\npod/redis-cluster-test-0    1/1     Running   0          4m16s\npod/redis-cluster-test-1    1/1     Running   0          3m22s\npod/redis-cluster-test-2    1/1     Running   0          2m40s\npod/redis-sentinel-test-0   1/1     Running   0          4m16s\npod/redis-sentinel-test-1   1/1     Running   0          81s\npod/redis-sentinel-test-2   1/1     Running   0    
      18s\n\nNAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE\nservice/redis-cluster-test             ClusterIP   None            \u003cnone\u003e        6379/TCP    4m16s\nservice/redis-sentinel-headless-test   ClusterIP   None            \u003cnone\u003e        26379/TCP   4m16s\nservice/redis-sentinel-test            ClusterIP   10.22.22.34     \u003cnone\u003e        26379/TCP   4m16s\n\nNAME                                   READY   AGE\nstatefulset.apps/redis-cluster-test    3/3     4m16s\nstatefulset.apps/redis-sentinel-test   3/3     4m16s\n```\n\n* redis-cluster-\u003cNAME\u003e: Redis statefulset\n* redis-sentinel-\u003cNAME\u003e: Sentinel statefulset\n* redis-sentinel-\u003cNAME\u003e: Sentinel service\n* redis-sentinel-headless-\u003cNAME\u003e: Sentinel headless service\n* redis-cluster-\u003cNAME\u003e: Redis headless service\n\nDescribe the RedisCluster to view its events and status:\n```\n$ kubectl describe redisclusters test\n\nName:         test\nNamespace:    default\nLabels:       \u003cnone\u003e\nAnnotations:  redis.kun/scope: cluster-scoped\nAPI Version:  redis.kun/v1beta1\nKind:         RedisCluster\nMetadata:\n  Generation:          1\n  UID:                 ec0c3be4-b9c5-11e9-8191-6c92bfb35d2e\nSpec:\n  Image:              redis:5.0.4-alpine\n  Resources:\n    Limits:\n      Cpu:     400m\n      Memory:  300Mi\n    Requests:\n      Cpu:     100m\n      Memory:  50Mi\n  Size:        3\nStatus:\n  Conditions:\n    Last Transition Time:  2019-08-08T10:21:14Z\n    Last Update Time:      2019-08-08T10:22:14Z\n    Message:               Cluster ok\n    Reason:                Cluster available\n    Status:                True\n    Type:                  Healthy\n    Last Transition Time:  2019-08-08T10:18:53Z\n    Last Update Time:      2019-08-08T10:18:53Z\n    Message:               Bootstrap redis cluster\n    Reason:                Creating\n    Status:                True\n    Type:               
   Creating\nEvents:\n  Type    Reason        Age                    From            Message\n  ----    ------        ----                   ----            -------\n  Normal  Creating      3m22s                  redis-operator  Bootstrap redis cluster\n  Normal  Ensure        2m12s (x8 over 3m22s)  redis-operator  Makes sure of redis cluster ready\n  Normal  CheckAndHeal  2m12s (x8 over 3m22s)  redis-operator  Check and heal the redis cluster problems\n  Normal  Updating      2m12s (x8 over 3m22s)  redis-operator  wait for all redis server start\n```\n\n#### Resize an Redis Cluster\nThe initial cluster size is 3. Modify the file to change the size from 3 to 5.\n```\n$ cat deploy/crds/redis_v1beta1_rediscluster_cr.yaml\napiVersion: redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  annotations:\n    # if your operator runs as cluster-scoped, add this annotation\n    redis.kun/scope: cluster-scoped\n  name: test\nspec:\n  # Add fields here\n  size: 5\n\nkubectl apply -f deploy/crds/redis_v1beta1_rediscluster_cr.yaml\n```\n\nThe Redis Cluster will scale to 5 members (1 master with 4 slaves).\n\n#### Create redis cluster with password\n\nYou can set up Redis with authentication by setting `spec.password`.\n\n```\napiVersion: redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  annotations:\n    # if your operator runs as cluster-scoped, add this annotation\n    redis.kun/scope: cluster-scoped\n  name: test\n  namespace: default\nspec:\n  # custom password (null to disable)\n  password: asdfsdf\n  # custom configurations\n  config:\n    hz: \"10\"\n    loglevel: verbose\n    maxclients: \"10000\"\n  image: redis:5.0.4-alpine\n  resources:\n    limits:\n      cpu: 400m\n      memory: 300Mi\n    requests:\n      cpu: 100m\n      memory: 50Mi\n  size: 3\n```\n\n#### Dynamically changing redis config\n\nIf the custom configuration is changed, the operator uses the `config set` command to apply the changes to the Redis nodes without needing to reload them.\n\n```\napiVersion: 
redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  annotations:\n    # if your operator runs as cluster-scoped, add this annotation\n    redis.kun/scope: cluster-scoped\n  name: test\n  namespace: default\nspec:\n  # custom password (null to disable)\n  password: asdfsdf\n  # change the configurations\n  config:\n    hz: \"12\"\n    loglevel: debug\n    maxclients: \"10000\"\n  image: redis:5.0.4-alpine\n  resources:\n    limits:\n      cpu: 400m\n      memory: 300Mi\n    requests:\n      cpu: 100m\n      memory: 50Mi\n  size: 3\n```\n\n#### Persistence\n\nThe operator can add persistence for Redis data. By default an emptyDir volume is used, so the data is not saved.\n\nTo enable persistence, a PersistentVolumeClaim can be used.\n\nSetting the `spec.disablePersistence: false` flag automatically configures the persistence parameters.\n\n```\napiVersion: redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  name: test\n  # if your operator runs as cluster-scoped, add this annotation\n  annotations:\n    redis.kun/scope: \"cluster-scoped\"\n  namespace: default\nspec:\n  image: redis:5.0.4-alpine\n  resources:\n    limits:\n      cpu: 400m\n      memory: 300Mi\n    requests:\n      cpu: 50m\n      memory: 30Mi\n  size: 3\n\n  # when disablePersistence is set to false, the following configurations will be set automatically:\n\n  # disablePersistence: false\n  # config[\"save\"] = \"900 1 300 10\"\n  # config[\"appendonly\"] = \"yes\"\n  # config[\"auto-aof-rewrite-min-size\"] = \"536870912\"\n  # config[\"repl-diskless-sync\"] = \"yes\"\n  # config[\"repl-backlog-size\"] = \"62914560\"\n  # config[\"aof-load-truncated\"] = \"yes\"\n  # config[\"stop-writes-on-bgsave-error\"] = \"no\"\n\n  # when disablePersistence is set to true, the following configurations will be set automatically:\n\n  # disablePersistence: true\n  # config[\"save\"] = \"\"\n  # config[\"appendonly\"] = \"no\"\n  storage:\n    # By default, the persistent volume claims will be 
deleted when the RedisCluster is deleted.\n    # If this is not the desired behavior, a keepAfterDeletion flag can be added under the storage section\n    keepAfterDeletion: true\n    persistentVolumeClaim:\n      metadata:\n        name: test\n      spec:\n        accessModes:\n        - ReadWriteOnce\n        resources:\n          requests:\n            storage: 1Gi\n        storageClassName: sc-rbd-x5\n        volumeMode: Filesystem\n```\n\n#### Custom SecurityContext\n\nYou can change `net.core.somaxconn` (default 128) by using the pod securityContext to set the unsafe sysctl `net.core.somaxconn`.\n\n```\napiVersion: redis.kun/v1beta1\nkind: RedisCluster\nmetadata:\n  annotations:\n    # if your operator runs as cluster-scoped, add this annotation\n    redis.kun/scope: cluster-scoped\n  name: test\nspec:\n  # Add fields here\n  size: 3\n  securityContext:\n      sysctls:\n      - name: net.core.somaxconn\n        value: \"1024\"\n```\n\n### Cleanup\n\n```\n$ kubectl delete -f deploy/cluster/redis_v1beta1_rediscluster_cr.yaml\n$ kubectl delete -f deploy/cluster/operator.yaml\n$ kubectl delete -f deploy/cluster/cluster_role.yaml\n$ kubectl delete -f deploy/cluster/cluster_role_binding.yaml\n$ kubectl delete -f deploy/service_account.yaml\n$ kubectl delete -f deploy/crds/redis_v1beta1_rediscluster_crd.yaml\n\nor:\n$ kubectl delete -f deploy/namespace/redis_v1beta1_rediscluster_cr.yaml\n$ kubectl delete -f deploy/namespace/operator.yaml\n$ kubectl delete -f deploy/namespace/role.yaml\n$ kubectl delete -f deploy/namespace/role_binding.yaml\n$ kubectl delete -f deploy/service_account.yaml\n$ kubectl delete -f deploy/crds/redis_v1beta1_rediscluster_crd.yaml\n```\n\n## Automatic failover details\n\nRedis-operator builds a **Highly Available Redis cluster with Sentinel**. Sentinel continuously checks the MASTER and SLAVE\ninstances in the Redis cluster, verifying that they are working as expected. 
If Sentinel detects a failure in the\nMASTER node in a given cluster, Sentinel starts a failover process. As a result, Sentinel picks a SLAVE\ninstance and promotes it to MASTER. Finally, the other remaining SLAVE instances are automatically reconfigured\nto use the new MASTER instance.\n\nThe operator guarantees the following:\n* Only one Redis instance acts as master in a cluster\n* The number of Redis instances (masters and replicas) matches the size set in the RedisCluster specification\n* The number of Sentinels matches the size set in the RedisCluster specification\n* All Redis slaves have the same master\n* All Sentinels point to the same Redis master\n* Sentinel has no dead nodes\n\nBut Kubernetes pods are volatile: they can be deleted and recreated, a pod's IP changes when the pod is recreated,\nand the IP may be recycled and reassigned to other pods.\nUnfortunately, Sentinel cannot prune the Sentinel list or Redis list in its memory when pod IPs change.\nThis is because a Sentinel node has no way to deregister itself from the Sentinel cluster before dying,\nso the Sentinel node list grows without any control.\n\nTo ensure that Sentinel is working properly, the operator sends a **RESET (`SENTINEL RESET *`)** command to the Sentinel nodes\none by one (if no failover is running at that moment).\nAfter a `SENTINEL RESET mastername` command, Sentinels refresh their list of replicas within the next 10 seconds, adding only the\nones listed as correctly replicating in the current master's INFO output.\nDuring this refresh window, the `SENTINEL slaves \u003cmaster name\u003e` command cannot get any result from Sentinel, so the operator sends the\nRESET command to the Sentinels one by one and waits until each Sentinel's status is OK (monitoring the correct master and seeing its slaves).\nAdditionally, each Sentinel instance has a default ReadinessProbe script to detect whether the current Sentinel's status is OK.\nWhen a Sentinel pod is not ready, it is removed from Service load 
balancers.\nThe operator also creates a headless Service for the Sentinel statefulset; if you cannot get results from the `SENTINEL slaves \u003cmaster name\u003e` command,\nyou can try polling the headless domain.\n","funding_links":[],"categories":["Repository is obsolete"],"sub_categories":["Awesome Operators in the Wild"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fucloud%2Fredis-operator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fucloud%2Fredis-operator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fucloud%2Fredis-operator/lists"}