{"id":13720181,"url":"https://github.com/hellofresh/eks-rolling-update","last_synced_at":"2026-01-22T08:04:09.459Z","repository":{"id":39459207,"uuid":"182057444","full_name":"hellofresh/eks-rolling-update","owner":"hellofresh","description":"EKS Rolling Update is a utility for updating the launch configuration of worker nodes in an EKS cluster.","archived":false,"fork":false,"pushed_at":"2025-10-23T12:37:32.000Z","size":180,"stargazers_count":364,"open_issues_count":29,"forks_count":86,"subscribers_count":185,"default_branch":"master","last_synced_at":"2025-12-01T06:49:46.952Z","etag":null,"topics":["open-source","wiz-reliability-platform-cloud-runtime"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hellofresh.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-04-18T09:23:33.000Z","updated_at":"2025-11-10T16:46:13.000Z","dependencies_parsed_at":"2024-01-10T07:03:06.223Z","dependency_job_id":"4e4f8a00-60ee-4f66-b75e-15cac894104c","html_url":"https://github.com/hellofresh/eks-rolling-update","commit_stats":{"total_commits":117,"total_committers":30,"mean_commits":3.9,"dds":0.7863247863247863,"last_synced_commit":"acd2e285bff2eecb64d20938a38d517d8c2ed31c"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/hellofresh/eks-rolling-update","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hellofresh%2Feks-rolling-update","tags_url":"https://repos.ecosyste.ms/api/v1/ho
sts/GitHub/repositories/hellofresh%2Feks-rolling-update/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hellofresh%2Feks-rolling-update/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hellofresh%2Feks-rolling-update/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hellofresh","download_url":"https://codeload.github.com/hellofresh/eks-rolling-update/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hellofresh%2Feks-rolling-update/sbom","scorecard":{"id":460348,"data":{"date":"2025-08-11","repo":{"name":"github.com/hellofresh/eks-rolling-update","commit":"eaf14913648616c29845ccb1cc96484ce10648bb"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4.6,"checks":[{"name":"Code-Review","score":2,"reason":"Found 4/19 approved changesets -- score normalized to 2","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and 
uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Maintained","score":2,"reason":"3 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 2","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Info: jobLevel 'contents' permission set to 'read': .github/workflows/hf_pr-dependency-review.yml:15","Warn: no topLevel permission defined: .github/workflows/codeql-analysis.yml:1","Warn: no topLevel permission defined: .github/workflows/deploy.yaml:1","Warn: no topLevel permission defined: .github/workflows/hf_pr-dependency-review.yml:1","Warn: no topLevel permission defined: .github/workflows/pull-request.yaml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices 
Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: Apache License 2.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql-analysis.yml:33: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/codeql-analysis.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: 
.github/workflows/codeql-analysis.yml:46: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/codeql-analysis.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql-analysis.yml:57: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/codeql-analysis.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codeql-analysis.yml:71: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/codeql-analysis.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/deploy.yaml:11: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/deploy.yaml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/deploy.yaml:14: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/deploy.yaml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/hf_pr-dependency-review.yml:18: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/hf_pr-dependency-review.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/pull-request.yaml:12: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/pull-request.yaml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/pull-request.yaml:15: update your workflow using https://app.stepsecurity.io/secureworkflow/hellofresh/eks-rolling-update/pull-request.yaml/master?enable=pin","Warn: containerImage not pinned by hash: Dockerfile:1","Warn: containerImage not pinned by hash: Dockerfile:12: pin your Docker image by updating python:3-alpine3.10 to 
python:3-alpine3.10@sha256:152b1952d4b42e360f2efd3037df9b645328c0cc6fbe9c63decbffbff407b96a","Warn: pipCommand not pinned by hash: Dockerfile:18-22","Warn: pipCommand not pinned by hash: .github/workflows/deploy.yaml:20","Warn: pipCommand not pinned by hash: .github/workflows/deploy.yaml:21","Warn: pipCommand not pinned by hash: .github/workflows/deploy.yaml:22","Warn: pipCommand not pinned by hash: .github/workflows/pull-request.yaml:21","Warn: pipCommand not pinned by hash: .github/workflows/pull-request.yaml:22","Warn: pipCommand not pinned by hash: .github/workflows/pull-request.yaml:23","Info:   0 out of   8 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   1 third-party GitHubAction dependencies pinned","Info:   0 out of   2 containerImage dependencies pinned","Info:   0 out of   7 pipCommand dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"Branch-Protection","score":-1,"reason":"internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration","details":null,"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Vulnerabilities","score":9,"reason":"1 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: PYSEC-2022-43017 / GHSA-qwmp-2cf2-g9g6"],"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"SAST","score":7,"reason":"SAST tool detected but not run on all 
commits","details":["Info: SAST configuration detected: CodeQL","Warn: 0 commits out of 19 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-19T10:57:49.188Z","repository_id":39459207,"created_at":"2025-08-19T10:57:49.188Z","updated_at":"2025-08-19T10:57:49.188Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28658972,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-22T01:17:37.254Z","status":"online","status_checked_at":"2026-01-22T02:00:07.137Z","response_time":144,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["open-source","wiz-reliability-platform-cloud-runtime"],"created_at":"2024-08-03T01:01:00.621Z","updated_at":"2026-01-22T08:04:09.425Z","avatar_url":"https://github.com/hellofresh.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg height=\"150px\" src=\"./logo.png\"  alt=\"EKS Rolling Update\" title=\"EKS Rolling Update\"\u003e\n\u003c/p\u003e\n\n# EKS Rolling Update\n\nEKS Rolling Update is a utility for updating the launch configuration or template of worker nodes in an EKS cluster.\n\n[![Build Status](https://travis-ci.org/hellofresh/eks-rolling-update.svg?branch=master)](https://travis-ci.org/hellofresh/eks-rolling-update)\n\n\n- [Intro](#intro)\n- 
[Requirements](#requirements)\n- [Installation](#installation)\n- [Usage](#usage)\n- [Configuration](#configuration)\n- [Contributing](#contributing)\n- [License](#license)\n\n\n\u003ca name=\"intro\"\u003e\u003c/a\u003e\n# Intro\n\nEKS Rolling Update is a utility for updating the launch configuration or template of worker nodes in an EKS cluster. It\nupdates worker nodes in a rolling fashion and performs health checks of your EKS cluster to ensure no disruption to service.\nTo achieve this, it performs the following actions:\n\n* Pauses Kubernetes Autoscaler (Optional)\n* Finds a list of worker nodes that do not have a launch config or template that matches their ASG\n* Scales up the desired capacity\n* Ensures the ASGs are healthy and that the new nodes have joined the EKS cluster\n* Cordons the outdated worker nodes\n* Suspends AWS Autoscaling actions while the update is in progress\n* Drains outdated EKS worker nodes one by one\n* Terminates EC2 instances of the worker nodes one by one\n* Detaches EC2 instances from the ASG one by one\n* Scales down the ASG to original count (in case of failure)\n* Resumes AWS Autoscaling actions\n* Resumes Kubernetes Autoscaler (Optional)\n\n\u003ca name=\"requirements\"\u003e\u003c/a\u003e\n## Requirements\n\n* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed\n* `KUBECONFIG` environment variable set, or config available in `${HOME}/.kube/config` by default\n* AWS credentials [configured](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#guide-configuration)\n\n### IAM Requirements\n\nThe following IAM permissions are required:\n\n```\nautoscaling:DescribeAutoScalingGroups\nautoscaling:TerminateInstanceInAutoScalingGroup\nautoscaling:SuspendProcesses\nautoscaling:ResumeProcesses\nautoscaling:UpdateAutoScalingGroup\nautoscaling:CreateOrUpdateTags\nautoscaling:DeleteTags\nec2:DescribeLaunchTemplates\nec2:DescribeInstances\n```\n\n\u003ca 
name=\"installation\"\u003e\u003c/a\u003e\n## Installation\n\n### From PyPI\n\n```\npip3 install eks-rolling-update\n```\n\n### From source\n\n```\nvirtualenv -p python3 venv\nsource venv/bin/activate\npip3 install -r requirements.txt\n```\n\n\u003ca name=\"usage\"\u003e\u003c/a\u003e\n## Usage\n\n```\nusage: eks_rolling_update.py [-h] --cluster_name CLUSTER_NAME [--plan]\n\nRolling update on cluster\n\noptional arguments:\n  -h, --help            show this help message and exit\n  --cluster_name CLUSTER_NAME, -c CLUSTER_NAME\n                        the cluster name to perform rolling update on\n  --plan, -p            perform a dry run to see which instances are out of\n                        date\n```\n\nExample:\n\n```\neks_rolling_update.py -c my-eks-cluster\n```\n\n## Configuration\n\n### Core Configuration\n\n| Environment Variable       | Description                                                                                                           | Default |\n|----------------------------|-----------------------------------------------------------------------------------------------------------------------|---------|\n| RUN_MODE                   | Overall strategy for handling multiple ASGs \u0026 identifying nodes to roll. 
See [Run Modes](#run-modes) section below    | 1       |\n| DRY_RUN                    | If True, only a query will be run to determine which worker nodes are outdated without running an update operation    | False   |\n| CLUSTER_HEALTH_WAIT        | Number of seconds to wait after ASG has been scaled up before checking health of nodes within the cluster             | 90      |\n| CLUSTER_HEALTH_RETRY       | Number of attempts to validate the health of the cluster after ASG has been scaled                                    | 1       |\n| GLOBAL_MAX_RETRY           | Number of attempts of a node health or instance termination check                                                     | 12      |\n| GLOBAL_HEALTH_WAIT         | Number of seconds to wait before retrying a node health or instance termination check                                 | 20      |\n| BETWEEN_NODES_WAIT         | Number of seconds to wait after removing a node before continuing on                                                  | 0       |\n\n### ASG \u0026 Node-Related Controls\n\n| Environment Variable       | Description                                                                                                                 | Default                                  |\n|----------------------------|-----------------------------------------------------------------------------------------------------------------------------|------------------------------------------|\n| ASG_DESIRED_STATE_TAG      | Temporary tag which will be saved to the ASG to store the state of the EKS cluster prior to update                          | eks-rolling-update:desired_capacity      |\n| ASG_ORIG_CAPACITY_TAG      | Temporary tag which will be saved to the ASG to store the state of the EKS cluster prior to update                          | eks-rolling-update:original_capacity     |\n| ASG_ORIG_MAX_CAPACITY_TAG  | Temporary tag which will be saved to the ASG to store the state of the EKS cluster prior to 
update                          | eks-rolling-update:original_max_capacity |\n| ASG_NAMES                  | List of space-delimited ASG names. Out of ASGs attached to the cluster, only these will be processed for rolling update. If this is left empty, all ASGs of the cluster will be processed. | \"\" |\n| BATCH_SIZE                 | # of instances to scale the ASG by at a time. When set to 0, batching is disabled. See [Batching](#batching) section        | 0                                        |\n| MAX_ALLOWABLE_NODE_AGE     | The max age (in days) each node is allowed to be. This works with `RUN_MODE` 4, which rolls nodes based on age              | 6                                        |\n| EXCLUDE_NODE_LABEL_KEYS    | List of space-delimited keys for node labels. Nodes with a label using one of these keys will be excluded from the node count when scaling the cluster. | spotinst.io/node-lifecycle |\n| ASG_USE_TERMINATION_POLICY | Prefer ASG termination policy (instance terminate/detach handled by ASG according to configured termination policy)         | False                                    |\n| INSTANCE_WAIT_FOR_STOPPING | Only wait for terminated instances to be in `stopping` or `shutting-down` state, instead of fully `terminated` or `stopped` | False                                    |\n\n### K8S Node \u0026 Pod Controls\n\n| Environment Variable       | Description                                                                                                                | Default                                  |\n|----------------------------|----------------------------------------------------------------------------------------------------------------------------|------------------------------------------|\n| K8S_AUTOSCALER_ENABLED     | If True, the Kubernetes Autoscaler will be paused before running the update                                                | False                                    |\n| K8S_AUTOSCALER_NAMESPACE   | 
Namespace where Kubernetes Autoscaler is deployed                                                                          | default                                  |\n| K8S_AUTOSCALER_DEPLOYMENT  | Deployment name of Kubernetes Autoscaler                                                                                   | cluster-autoscaler                       |\n| K8S_AUTOSCALER_REPLICAS    | Number of replicas to scale back up to after the Kubernetes Autoscaler was paused                                          | 2                                        |\n| K8S_CONTEXT                | Context from the Kubernetes config to use. If this is left undefined, the `current-context` is used                        | None                                     |\n| K8S_PROXY_BYPASS           | Set to `true` to ignore `HTTPS_PROXY` and `HTTP_PROXY` and disable use of any configured proxy when talking to the K8S API | False                                    |\n| TAINT_NODES                | Replace the default **cordon**-before-drain strategy with `NoSchedule` **taint**ing, as a workaround for K8S \u003c `1.19` [prematurely removing cordoned nodes](https://github.com/kubernetes/kubernetes/issues/65013) from `Service`-managed `LoadBalancer`s | False |\n| EXTRA_DRAIN_ARGS           | Additional space-delimited args to supply to the `kubectl drain` function, e.g `--force=true`. See `kubectl drain -h`      | \"\"                                       |\n| ENFORCED_DRAINING          | If draining fails for a node due to corrupted `PodDisruptionBudget`s or failing pods, retry draining with `--disable-eviction=true` and `--force=true` for this node to prevent aborting the script. 
This is useful to get the rolling update done in development and testing environments and **should not be used in production environments** since this will bypass checking `PodDisruptionBudget`s | False |\n\n## Run Modes\n\nThere are a number of different values which can be set for the `RUN_MODE` environment variable.\n\n`1` is the default.\n\n| Mode Number   | Description                                                                                                           |\n|---------------|-----------------------------------------------------------------------------------------------------------------------|\n| 1             | Scale up and cordon/taint the outdated nodes of each ASG one-by-one, just before we drain them.                       |\n| 2             | Scale up and cordon/taint the outdated nodes of all ASGs all at once at the beginning of the run.                     |\n| 3             | Cordon/taint the outdated nodes of all ASGs at the beginning of the run but scale each ASG one-by-one.                |\n| 4             | Roll EKS nodes based on age instead of launch config (works with `MAX_ALLOWABLE_NODE_AGE`, which defaults to 6 days). |\n\nEach of these modes has different advantages and disadvantages.\n* Scaling up all ASGs at once may cause AWS EC2 instance limits to be exceeded\n* Only cordoning the nodes on a per-ASG basis will mean that pods are likely to be moved more than once\n* Cordoning the nodes for all ASGs at once could cause issues if new pods need to start during the process\n\n## Batching\n\nEKS Rolling Update can batch scale-out the ASG to progressively reach the desired instance count before it begins\ndraining the nodes.\n\nThis is intended for use in cases where a large ASG scale-out may result in instances failing to register with\nEKS. Such a scenario is more likely to occur with larger ASGs where (for example) a 100 instance ASG may be asked\nto scale to 200 (temporarily). 
Users may find that some instances never register, and this causes EKS Rolling\nUpdate to hang indefinitely waiting for the registered EKS node count to match the instance count.\n\nIf this happens, you may want to consider batching.\n\nFor example, if the ASG will be scaled from 100 instances to 200 instances, specifying a batch size of 10 will\nresult in the ASG first scaling to 110, then 120, 130, etc instances until 200 is reached. Once the desired\ncount is reached, the tool will proceed with the normal draining/scale-in operations.\n\n## Examples\n\n* Plan\n\n```\n$ python eks_rolling_update.py --cluster_name YOUR_EKS_CLUSTER_NAME --plan\n```\n\n* Apply Changes\n\n```\n$ python eks_rolling_update.py --cluster_name YOUR_EKS_CLUSTER_NAME\n```\n\n* Cluster Autoscaler\n\nIf using `cluster-autoscaler`, you must let `eks-rolling-update` know that cluster-autoscaler is running in your cluster by exporting the following environment variables:\n\n```\n$ export  K8S_AUTOSCALER_ENABLED=true \\\n          K8S_AUTOSCALER_NAMESPACE=\"${CA_NAMESPACE}\" \\\n          K8S_AUTOSCALER_DEPLOYMENT=\"${CA_DEPLOYMENT_NAME}\"\n```\n\n* Disable operations on `cluster-autoscaler`\n\n```\n$ unset K8S_AUTOSCALER_ENABLED\n```\n\n* Configure tool via `.env` file\n\nRather than using environment variables, you can use a `.env` file within your working directory to load\nupdater settings. 
e.g:\n\n```\n$ cat .env\nDRY_RUN=1\n```\n\n\u003ca name=\"docker\"\u003e\u003c/a\u003e\n## Docker\n\nAlthough no public Docker image is currently published for this project, feel free to use the included [Dockerfile](Dockerfile) to build your own image.\n\n```bash\nmake docker-dist version=1.0.DEV\n```\n\nAfter building the image, run it using the command:\n```bash\ndocker run -ti --rm \\\n  -e AWS_DEFAULT_REGION \\\n  -v \"${HOME}/.aws:/root/.aws\" \\\n  -v \"${HOME}/.kube/config:/root/.kube/config\" \\\n  eks-rolling-update:latest \\\n  -c my-cluster\n```\n\nPass in any additional environment variables and options as described elsewhere in this file.\n\n\u003ca name=\"contributing\"\u003e\u003c/a\u003e\n## Contributing\n\nPlease read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.\n\n\u003ca name=\"license\"\u003e\u003c/a\u003e\n## License\n\nThis project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.\n","funding_links":[],"categories":["Data plane management"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhellofresh%2Feks-rolling-update","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhellofresh%2Feks-rolling-update","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhellofresh%2Feks-rolling-update/lists"}