{"id":15779878,"url":"https://github.com/rgl/terramate-aws-eks-example","last_synced_at":"2025-04-14T20:45:08.981Z","repository":{"id":225790710,"uuid":"766867867","full_name":"rgl/terramate-aws-eks-example","owner":"rgl","description":"an example kubernetes cluster hosted in the AWS Elastic Kubernetes Service (EKS) using terramate with terraform","archived":false,"fork":false,"pushed_at":"2024-07-09T22:07:30.000Z","size":853,"stargazers_count":3,"open_issues_count":0,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-12-31T11:06:17.124Z","etag":null,"topics":["adot","aws","aws-docdb","aws-documentdb","cert-manager","certificate","container-registry","ecr","eks","external-dns","kubernetes","opentelemetry","terraform","terramate","tls","trust-manager"],"latest_commit_sha":null,"homepage":"","language":"HCL","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rgl.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-04T09:26:11.000Z","updated_at":"2024-10-11T13:53:43.000Z","dependencies_parsed_at":"2024-04-05T09:24:35.365Z","dependency_job_id":"ded73419-38ce-4841-a8d2-40e79dcede17","html_url":"https://github.com/rgl/terramate-aws-eks-example","commit_stats":null,"previous_names":["rgl/terramate-aws-eks-example"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rgl%2Fterramate-aws-eks-example","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rgl%2Fterramate-aws-eks-example/tags","releases_url":"https://
repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rgl%2Fterramate-aws-eks-example/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rgl%2Fterramate-aws-eks-example/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rgl","download_url":"https://codeload.github.com/rgl/terramate-aws-eks-example/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248960436,"owners_count":21189984,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adot","aws","aws-docdb","aws-documentdb","cert-manager","certificate","container-registry","ecr","eks","external-dns","kubernetes","opentelemetry","terraform","terramate","tls","trust-manager"],"created_at":"2024-10-04T18:21:48.747Z","updated_at":"2025-04-14T20:45:08.958Z","avatar_url":"https://github.com/rgl.png","language":"HCL","readme":"# About\n\n[![Lint](https://github.com/rgl/terramate-aws-eks-example/actions/workflows/lint.yml/badge.svg)](https://github.com/rgl/terramate-aws-eks-example/actions/workflows/lint.yml)\n\nThis creates an example kubernetes cluster hosted in the [AWS Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/) using a Terramate project with Terraform.\n\nThis will:\n\n* Create an Elastic Kubernetes Service (EKS)-based Kubernetes cluster.\n  * Use the [Bottlerocket OS](https://aws.amazon.com/bottlerocket/).\n  * Enable the [VPC CNI cluster add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html).\n  * Enable the [EBS CSI cluster 
add-on](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html).\n  * Enable the [AWS Distro for OpenTelemetry (ADOT) Operator add-on](https://docs.aws.amazon.com/eks/latest/userguide/opentelemetry.html).\n  * Create the [AWS Distro for OpenTelemetry (ADOT) Collector Deployment and `adot-collector` Service](https://aws-otel.github.io).\n    * Forwarding OpenTelemetry telemetry signals to [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/).\n  * Install [trust-manager](https://github.com/cert-manager/trust-manager).\n    * Manages TLS CA certificate bundles.\n  * Install [reloader](https://github.com/stakater/reloader).\n    * Reloads (restarts) pods when their configmaps or secrets change.\n* Create the Elastic Container Registry (ECR) repositories declared on the\n  [`source_images` global variable](config.tm.hcl), and upload the corresponding container\n  images.\n* Create a public DNS Zone using [Amazon Route 53](https://aws.amazon.com/route53/).\n  * Note that you need to configure the parent DNS Zone to delegate to this DNS Zone name servers.\n  * Use [external-dns](https://github.com/kubernetes-sigs/external-dns) to create the Ingress DNS Resource Records in the DNS Zone.\n* Create an [example AWS DocumentDB](stacks/eks/docdb.tf).\n* Demonstrate how to automatically deploy the [`kubernetes-hello` workload](stacks/eks-workloads/kubernetes-hello.tf).\n  * Show its environment variables.\n  * Show its tokens, secrets, and configs (config maps).\n  * Show its pod name and namespace.\n  * Show the containers running inside its pod.\n  * Show its memory limits.\n  * Show its cgroups.\n  * Expose as a Kubernetes `Ingress`.\n    * Use a sub-domain in the DNS Zone.\n    * Use a public Certificate managed by [Amazon Certificate Manager](https://aws.amazon.com/certificate-manager/) and issued by the public [Amazon Root CA](https://www.amazontrust.com/repository/).\n    * Note that this results in the creation of an [EC2 Application Load Balancer 
(ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).\n  * Use [Role and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).\n  * Use [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/).\n  * Use [Secret](https://kubernetes.io/docs/concepts/configuration/secret/).\n  * Use [ServiceAccount](https://kubernetes.io/docs/concepts/security/service-accounts/).\n  * Use [Service Account token volume projection (a JSON Web Token and OpenID Connect (OIDC) ID Token)](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection) for the `https://example.com` audience.\n* Demonstrate how to automatically deploy the [`otel-example` workload](stacks/eks-workloads/otel-example.tf).\n  * Expose as a Kubernetes `Ingress` `Service`.\n    * Use a sub-domain in the DNS Zone.\n    * Use a public Certificate managed by [Amazon Certificate Manager](https://aws.amazon.com/certificate-manager/) and issued by the public [Amazon Root CA](https://www.amazontrust.com/repository/).\n    * Note that this results in the creation of an [EC2 Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).\n  * Send OpenTelemetry telemetry signals to the [`adot-collector` service](stacks/eks/adot-collector/main.tf).\n    * Send the logs telemetry signal to the Amazon CloudWatch Logs service.\n* Demonstrate how to manually deploy a stateful application.\n  * Deploy the [etcd key-value store](https://etcd.io).\n    * Use a [`StatefulSet` Workload](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).\n    * Use a [`PersistentVolumeClaim` Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).\n  * Deploy the [hello-etcd example application](https://github.com/rgl/hello-etcd).\n    * Use the etcd key-value store.\n* Demonstrate how to automatically 
deploy the [`docdb-example` workload](stacks/eks-workloads/docdb-example.tf).\n  * Use [the deployed example AWS DocumentDB](stacks/eks/docdb.tf).\n  * Use a `trust-manager` managed CA certificates volume that includes the [Amazon RDS CA certificates (i.e. `global-bundle.pem`)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.CertificatesAllRegions).\n\nThe main components are:\n\n![components](components.png)\n\nFor an equivalent example, see:\n\n* [terraform-aws-eks-example](https://github.com/rgl/terraform-aws-eks-example)\n\n# Usage (on an Ubuntu Desktop)\n\nInstall the dependencies:\n\n* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).\n* [Terraform](https://www.terraform.io/downloads.html).\n* [Terramate](https://terramate.io/docs/cli/installation).\n* [Crane](https://github.com/google/go-containerregistry/releases).\n* [jq](https://github.com/jqlang/jq/releases).\n* [Docker](https://docs.docker.com/engine/install/).\n\nSet the AWS Account credentials using SSO, e.g.:\n\n```bash\n# set the account credentials.\n# NB the aws cli stores these at ~/.aws/config.\n# NB this is equivalent to manually configuring SSO using aws configure sso.\n# see https://docs.aws.amazon.com/cli/latest/userguide/sso-configure-profile-token.html#sso-configure-profile-token-manual\n# see https://docs.aws.amazon.com/cli/latest/userguide/sso-configure-profile-token.html#sso-configure-profile-token-auto-sso\ncat \u003esecrets-example.sh \u003c\u003c'EOF'\n# set the environment variables to use a specific profile.\n# NB use aws configure sso to configure these manually.\n# e.g. 
use the pattern \u003caws-sso-session\u003e-\u003caws-account-id\u003e-\u003caws-role-name\u003e\nexport aws_sso_session='example'\nexport aws_sso_start_url='https://example.awsapps.com/start'\nexport aws_sso_region='eu-west-1'\nexport aws_sso_account_id='123456'\nexport aws_sso_role_name='AdministratorAccess'\nexport AWS_PROFILE=\"$aws_sso_session-$aws_sso_account_id-$aws_sso_role_name\"\nunset AWS_ACCESS_KEY_ID\nunset AWS_SECRET_ACCESS_KEY\nunset AWS_DEFAULT_REGION\n# configure the ~/.aws/config file.\n# NB unfortunately, I did not find a way to create the [sso-session] section\n#    inside the ~/.aws/config file using the aws cli. so, instead, manage that\n#    file using python.\npython3 \u003c\u003c'PY_EOF'\nimport configparser\nimport os\naws_sso_session = os.getenv('aws_sso_session')\naws_sso_start_url = os.getenv('aws_sso_start_url')\naws_sso_region = os.getenv('aws_sso_region')\naws_sso_account_id = os.getenv('aws_sso_account_id')\naws_sso_role_name = os.getenv('aws_sso_role_name')\naws_profile = os.getenv('AWS_PROFILE')\nconfig = configparser.ConfigParser()\naws_config_directory_path = os.path.expanduser('~/.aws')\naws_config_path = os.path.join(aws_config_directory_path, 'config')\nif os.path.exists(aws_config_path):\n  config.read(aws_config_path)\nconfig[f'sso-session {aws_sso_session}'] = {\n  'sso_start_url': aws_sso_start_url,\n  'sso_region': aws_sso_region,\n  'sso_registration_scopes': 'sso:account:access',\n}\nconfig[f'profile {aws_profile}'] = {\n  'sso_session': aws_sso_session,\n  'sso_account_id': aws_sso_account_id,\n  'sso_role_name': aws_sso_role_name,\n  'region': aws_sso_region,\n}\nos.makedirs(aws_config_directory_path, mode=0o700, exist_ok=True)\nwith open(aws_config_path, 'w') as f:\n  config.write(f)\nPY_EOF\nunset aws_sso_start_url\nunset aws_sso_region\nunset aws_sso_session\nunset aws_sso_account_id\nunset aws_sso_role_name\n# show the user, user amazon resource name (arn), and the account id, of the\n# profile set in the 
AWS_PROFILE environment variable.\nif ! aws sts get-caller-identity \u003e/dev/null 2\u003e\u00261; then\n  aws sso login\nfi\naws sts get-caller-identity\nEOF\n```\n\nOr, set the AWS Account credentials using an Access Key, e.g.:\n\n```bash\n# set the account credentials.\n# NB get these from your aws account iam console.\n#    see Managing access keys (console) at\n#        https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey\ncat \u003esecrets-example.sh \u003c\u003c'EOF'\nexport AWS_ACCESS_KEY_ID='TODO'\nexport AWS_SECRET_ACCESS_KEY='TODO'\nunset AWS_PROFILE\n# set the default region.\nexport AWS_DEFAULT_REGION='eu-west-1'\n# show the user, user amazon resource name (arn), and the account id.\naws sts get-caller-identity\nEOF\n```\n\nLoad the secrets:\n\n```bash\nsource secrets-example.sh\n```\n\nReview the [`config.tm.hcl`](config.tm.hcl) file.\n\nAt least, modify the `ingress_domain` global to a DNS Zone that is a child of a\nDNS Zone that you control. The `ingress_domain` DNS Zone will be created by\nthis example. The DNS Zone will be hosted in the Amazon Route 53 DNS name\nservers.\n\nGenerate the project configuration:\n\n```bash\nterramate generate\n```\n\nCommit the changes to the git repository.\n\nInitialize the project:\n\n```bash\nterramate run terraform init -lockfile=readonly\nterramate run terraform validate\n```\n\nLaunch the example:\n\n```bash\nterramate run terraform apply\n```\n\nThe first launch will fail while trying to create the `aws_acm_certificate`\nresource. 
You must delegate the DNS Zone, as described below, and then launch\nthe example again to finish the provisioning.\n\nShow the ingress domain and the ingress DNS Zone name servers:\n\n```bash\ningress_domain=\"$(terramate run -C stacks/eks-aws-load-balancer-controller \\\n  terraform output -raw ingress_domain)\"\ningress_domain_name_servers=\"$(terramate run -C stacks/eks-aws-load-balancer-controller \\\n  terraform output -json ingress_domain_name_servers \\\n  | jq -r '.[]')\"\nprintf \"ingress_domain:\\n\\n$ingress_domain\\n\\n\"\nprintf \"ingress_domain_name_servers:\\n\\n$ingress_domain_name_servers\\n\\n\"\n```\n\nUsing your parent ingress domain DNS Registrar or DNS Hosting provider, delegate the `ingress_domain` DNS Zone to the returned `ingress_domain_name_servers` DNS name servers. For example, at the parent DNS Zone, add:\n\n```plain\nexample NS ns-123.awsdns-11.com.\nexample NS ns-321.awsdns-34.net.\nexample NS ns-456.awsdns-56.org.\nexample NS ns-948.awsdns-65.co.uk.\n```\n\nVerify the delegation:\n\n```bash\ningress_domain=\"$(terramate run -C stacks/eks-aws-load-balancer-controller \\\n  terraform output -raw ingress_domain)\"\ningress_domain_name_server=\"$(terramate run -C stacks/eks-aws-load-balancer-controller \\\n  terraform output -json ingress_domain_name_servers | jq -r '.[0]')\"\ndig ns \"$ingress_domain\" \"@$ingress_domain_name_server\" # verify with amazon route 53 dns.\ndig ns \"$ingress_domain\"                                # verify with your local resolver.\n```\n\nLaunch the example again; this time, no error is expected:\n\n```bash\nterramate run terraform apply\n```\n\nShow the terraform state:\n\n```bash\nterramate run terraform state list\nterramate run terraform show\n```\n\nShow the [OpenID Connect Discovery Document](https://openid.net/specs/openid-connect-discovery-1_0.html) (aka OpenID Connect Configuration):\n\n```bash\nwget -qO- \"$(\n  terramate run -C stacks/eks-workloads \\\n    terraform output -raw 
cluster_oidc_configuration_url)\" \\\n  | jq\n```\n\n**NB** The Kubernetes Service Account tokens are JSON Web Tokens (JWT) signed\nby the cluster OIDC provider. They can be validated using the metadata at the\n`cluster_oidc_configuration_url` endpoint. You can view a Service Account token\nat the installed `kubernetes-hello` service endpoint.\n\nGet the cluster `kubeconfig.yml` configuration file:\n\n```bash\nexport KUBECONFIG=\"$PWD/kubeconfig.yml\"\nrm -f \"$KUBECONFIG\"\naws eks update-kubeconfig \\\n  --region \"$(terramate run -C stacks/eks-workloads terraform output -raw region)\" \\\n  --name \"$(terramate run -C stacks/eks-workloads terraform output -raw cluster_name)\"\n```\n\nAccess the EKS cluster:\n\n```bash\nexport KUBECONFIG=\"$PWD/kubeconfig.yml\"\nkubectl cluster-info\nkubectl get nodes -o wide\nkubectl get ingressclass\nkubectl get storageclass\n# NB notice that the ReclaimPolicy is Delete. this means that, when we delete a\n#    PersistentVolumeClaim or PersistentVolume, the volume will be deleted from\n#    the AWS account.\nkubectl describe storageclass/gp2\n```\n\nList the installed Helm chart releases:\n\n```bash\nhelm list --all-namespaces\n```\n\nShow a helm release status, the user supplied values, all the values, and the\nchart managed kubernetes resources:\n\n```bash\nhelm -n external-dns status external-dns\nhelm -n external-dns get values external-dns\nhelm -n external-dns get values external-dns --all\nhelm -n external-dns get manifest external-dns\n```\n\nShow the `adot` OpenTelemetryCollector instance:\n\n```bash\nkubectl get -n opentelemetry-operator-system opentelemetrycollector/adot -o yaml\n```\n\nAccess the `otel-example` ClusterIP Service from a [kubectl port-forward local port](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/):\n\n```bash\nkubectl port-forward service/otel-example 6789:80 \u0026\nsleep 3 \u0026\u0026 printf '\\n\\n'\nwget -qO- http://localhost:6789/quote | 
jq\nkill %1 \u0026\u0026 sleep 3\n```\n\nWait for the `otel-example` Ingress to be available:\n\n```bash\notel_example_host=\"$(kubectl get ingress/otel-example -o jsonpath='{.spec.rules[0].host}')\"\notel_example_url=\"https://$otel_example_host\"\necho \"otel-example ingress url: $otel_example_url\"\n# wait for the host to resolve at the first route 53 name server.\ningress_domain_name_server=\"$(terramate run -C stacks/eks-aws-load-balancer-controller terraform output -json ingress_domain_name_servers | jq -r '.[0]')\"\nwhile [ -z \"$(dig +short \"$otel_example_host\" \"@$ingress_domain_name_server\")\" ]; do sleep 5; done \u0026\u0026 dig \"$otel_example_host\" \"@$ingress_domain_name_server\"\n# wait for the host to resolve at the public internet (from the viewpoint\n# of our local dns resolver).\nwhile [ -z \"$(dig +short \"$otel_example_host\")\" ]; do sleep 5; done \u0026\u0026 dig \"$otel_example_host\"\n```\n\nAccess the `otel-example` Ingress from the Internet:\n\n```bash\nwget -qO- \"$otel_example_url/quote\" | jq\nwget -qO- \"$otel_example_url/quotetext\"\n```\n\nAudit the `otel-example` Ingress TLS implementation:\n\n```bash\notel_example_host=\"$(kubectl get ingress/otel-example -o jsonpath='{.spec.rules[0].host}')\"\necho \"otel-example ingress host: $otel_example_host\"\nxdg-open https://www.ssllabs.com/ssltest/\n```\n\nAccess the `kubernetes-hello` ClusterIP Service from a [kubectl port-forward local port](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/):\n\n```bash\nkubectl port-forward service/kubernetes-hello 6789:80 \u0026\nsleep 3 \u0026\u0026 printf '\\n\\n'\nwget -qO- http://localhost:6789\nkill %1 \u0026\u0026 sleep 3\n```\n\nAccess the `kubernetes-hello` Ingress from the Internet:\n\n```bash\nkubernetes_hello_host=\"$(kubectl get ingress/kubernetes-hello -o jsonpath='{.spec.rules[0].host}')\"\nkubernetes_hello_url=\"https://$kubernetes_hello_host\"\necho \"kubernetes-hello ingress url: 
$kubernetes_hello_url\"\n# wait for the host to resolve at the first route 53 name server.\ningress_domain_name_server=\"$(terramate run -C stacks/eks-aws-load-balancer-controller terraform output -json ingress_domain_name_servers | jq -r '.[0]')\"\nwhile [ -z \"$(dig +short \"$kubernetes_hello_host\" \"@$ingress_domain_name_server\")\" ]; do sleep 5; done \u0026\u0026 dig \"$kubernetes_hello_host\" \"@$ingress_domain_name_server\"\n# wait for the host to resolve at the public internet (from the viewpoint\n# of our local dns resolver).\nwhile [ -z \"$(dig +short \"$kubernetes_hello_host\")\" ]; do sleep 5; done \u0026\u0026 dig \"$kubernetes_hello_host\"\n# finally, access the service.\nwget -qO- \"$kubernetes_hello_url\"\nxdg-open \"$kubernetes_hello_url\"\n```\n\nAudit the `kubernetes-hello` Ingress TLS implementation:\n\n```bash\nkubernetes_hello_host=\"$(kubectl get ingress/kubernetes-hello -o jsonpath='{.spec.rules[0].host}')\"\necho \"kubernetes-hello ingress host: $kubernetes_hello_host\"\nxdg-open https://www.ssllabs.com/ssltest/\n```\n\nDeploy the [example hello-etcd stateful application](https://github.com/rgl/hello-etcd):\n\n```bash\nrm -rf tmp/hello-etcd\ninstall -d tmp/hello-etcd\npushd tmp/hello-etcd\n# renovate: datasource=docker depName=rgl/hello-etcd registryUrl=https://ghcr.io\nhello_etcd_version='0.0.3'\nwget -qO- \"https://raw.githubusercontent.com/rgl/hello-etcd/v${hello_etcd_version}/manifest.yml\" \\\n  | perl -pe 's,(storageClassName:).+,$1 gp2,g' \\\n  | perl -pe 's,(storage:).+,$1 100Mi,g' \\\n  \u003e manifest.yml\nkubectl apply -f manifest.yml\nkubectl rollout status deployment hello-etcd\nkubectl rollout status statefulset hello-etcd-etcd\nkubectl get service,statefulset,pod,pvc,pv,sc\n```\n\nAccess the `hello-etcd` service from a [kubectl port-forward local port](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/):\n\n```bash\nkubectl port-forward service/hello-etcd 6789:web 
\u0026\nsleep 3 \u0026\u0026 printf '\\n\\n'\nwget -qO- http://localhost:6789 # Hello World #1!\nwget -qO- http://localhost:6789 # Hello World #2!\nwget -qO- http://localhost:6789 # Hello World #3!\n```\n\nDelete the etcd pod:\n\n```bash\n# NB the used gp2 StorageClass is configured with ReclaimPolicy set to Delete.\n#    this means that, when we delete the application PersistentVolumeClaim, the\n#    volume will be deleted from the AWS account. this also means that, to play\n#    with this, we cannot delete all the application resources. we have to keep\n#    the persistent volume around by only deleting the etcd pod.\n# NB although we delete the pod, the StatefulSet will create a fresh pod to\n#    replace it, using the same persistent volume as the old one.\nkubectl delete pod/hello-etcd-etcd-0\nkubectl get pod/hello-etcd-etcd-0 # NB its age should be in the seconds range.\nkubectl get pvc,pv\n```\n\nAccess the application, and notice that the counter continues after the previously returned value, which means that although the etcd instance is different, it picked up the same persistent volume:\n\n```bash\nwget -qO- http://localhost:6789 # Hello World #4!\nwget -qO- http://localhost:6789 # Hello World #5!\nwget -qO- http://localhost:6789 # Hello World #6!\n```\n\nDelete everything:\n\n```bash\nkubectl delete -f manifest.yml\nkill %1 # kill the kubectl port-forward background command execution.\n# NB the persistent volume will linger for a bit, until it is eventually\n#    reclaimed and deleted (because the StorageClass is configured with\n#    ReclaimPolicy set to Delete).\nkubectl get pvc,pv\n# force the persistent volume deletion.\n# NB if you do not do this (or wait until the persistent volume is actually\n#    deleted), the associated AWS EBS volume will be left in your AWS\n#    account, and you have to manually delete it from there.\nkubectl delete pvc/etcd-data-hello-etcd-etcd-0\n# NB you should wait until it's actually deleted.\nkubectl get 
pvc,pv\npopd\n```\n\nAccess the `docdb-example` ClusterIP Service from a [kubectl port-forward local port](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/):\n\n```bash\nkubectl port-forward service/docdb-example 6789:80 \u0026\nsleep 3 \u0026\u0026 printf '\\n\\n'\nwget -qO- http://localhost:6789 \u0026\u0026 printf '\\n\\n'\nkill %1 \u0026\u0026 sleep 3\n```\n\nAccess the `docdb-example` Ingress from the Internet:\n\n```bash\ndocdb_example_host=\"$(kubectl get ingress/docdb-example -o jsonpath='{.spec.rules[0].host}')\"\ndocdb_example_url=\"https://$docdb_example_host\"\necho \"docdb-example ingress url: $docdb_example_url\"\n# wait for the host to resolve at the first route 53 name server.\ningress_domain_name_server=\"$(terramate run -C stacks/eks-aws-load-balancer-controller terraform output -json ingress_domain_name_servers | jq -r '.[0]')\"\nwhile [ -z \"$(dig +short \"$docdb_example_host\" \"@$ingress_domain_name_server\")\" ]; do sleep 5; done \u0026\u0026 dig \"$docdb_example_host\" \"@$ingress_domain_name_server\"\n# wait for the host to resolve at the public internet (from the viewpoint\n# of our local dns resolver).\nwhile [ -z \"$(dig +short \"$docdb_example_host\")\" ]; do sleep 5; done \u0026\u0026 dig \"$docdb_example_host\"\n# finally, access the service.\nwget -qO- \"$docdb_example_url\"\nxdg-open \"$docdb_example_url\"\n```\n\nVerify the trusted CA certificates; these should include the Amazon RDS CA\ncertificates (e.g. 
`Amazon RDS eu-west-1 Root CA RSA2048 G1`):\n\n```bash\nkubectl exec --stdin deployment/docdb-example -- bash \u003c\u003c'EOF'\nopenssl crl2pkcs7 -nocrl -certfile /etc/ssl/certs/ca-certificates.crt \\\n  | openssl pkcs7 -print_certs -text -noout\nEOF\n```\n\nList all the used container images:\n\n```bash\n# see https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/\nkubectl get pods --all-namespaces \\\n  -o jsonpath=\"{.items[*].spec['initContainers','containers'][*].image}\" \\\n  | tr -s '[[:space:]]' '\\n' \\\n  | sort --unique\n```\n\nIt should be something like:\n\n```bash\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon/aws-network-policy-agent:v1.1.2-eksbuild.1\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni-init:v1.18.2-eksbuild.1\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.18.2-eksbuild.1\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/aws-ebs-csi-driver:v1.32.0\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.11.1-eksbuild.4\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/csi-attacher:v4.6.1-eks-1-30-8\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/csi-node-driver-registrar:v2.11.0-eks-1-30-8\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/csi-provisioner:v5.0.1-eks-1-30-8\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/csi-resizer:v1.11.1-eks-1-30-8\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/csi-snapshotter:v8.0.1-eks-1-30-8\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/kube-proxy:v1.29.0-minimal-eksbuild.1\n602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/livenessprobe:v2.13.0-eks-1-30-8\n960774936715.dkr.ecr.eu-west-1.amazonaws.com/aws-eks-example-dev/docdb-example:0.0.1\n960774936715.dkr.ecr.eu-west-1.amazonaws.com/aws-eks-example-dev/kubernetes-hello:v0.0.202406151349\n960774936715.dkr.ecr.eu-west-1.amazonaws.com/aws-eks-example-dev/otel-example:0.0.8\nghcr.io/stakater/reloader:v1.0.116\npublic.ecr.aws/aws-observability/adot-operator:0.94.1
\npublic.ecr.aws/aws-observability/aws-otel-collector:v0.38.1\npublic.ecr.aws/aws-observability/mirror-kube-rbac-proxy:v0.15.0\npublic.ecr.aws/eks/aws-load-balancer-controller:v2.7.1\nquay.io/jetstack/cert-manager-cainjector:v1.14.3\nquay.io/jetstack/cert-manager-controller:v1.14.3\nquay.io/jetstack/cert-manager-package-debian:20210119.0\nquay.io/jetstack/cert-manager-webhook:v1.14.3\nquay.io/jetstack/trust-manager:v0.11.0\nregistry.k8s.io/external-dns/external-dns:v0.14.0\n```\n\nLog in to the container registry:\n\n**NB** You are logging in at the registry level. You are not logging in at the\nrepository level.\n\n```bash\naws ecr get-login-password \\\n  --region \"$(terramate run -C stacks/ecr terraform output -raw registry_region)\" \\\n  | docker login \\\n      --username AWS \\\n      --password-stdin \\\n      \"$(terramate run -C stacks/ecr terraform output -raw registry_domain)\"\n```\n\n**NB** This saves the credentials in the `~/.docker/config.json` local file.\n\nInspect the created example container image:\n\n```bash\nimage=\"$(terramate run -C stacks/ecr terraform output -json images | jq -r '.\"otel-example\"')\"\necho \"image: $image\"\ncrane manifest \"$image\" | jq .\n```\n\nDownload the created example container image from the created container image\nrepository, and execute it locally:\n\n```bash\ndocker run --rm \"$image\"\n```\n\nDelete the local copy of the created container image:\n\n```bash\ndocker rmi \"$image\"\n```\n\nLog out of the container registry:\n\n```bash\ndocker logout \\\n  \"$(terramate run -C stacks/ecr terraform output -raw registry_domain)\"\n```\n\nDelete the example image resource:\n\n```bash\nterramate run -C stacks/ecr \\\n  terraform destroy -target='terraform_data.ecr_image[\"otel-example\"]'\n```\n\nAt the ECR AWS Management Console, verify that the example image no longer\nexists (actually, it's the image index/tag that no longer exists).\n\nDo a `terraform apply` to verify that it recreates the example 
image:\n\n```bash\nterramate run terraform apply\n```\n\nDestroy the example:\n\n```bash\nterramate run --reverse terraform destroy\n```\n\n**NB** For some unknown reason, terraform shows the following Warning message. If you know how to fix it, please let me know!\n\n```\n╷\n│ Warning: EC2 Default Network ACL (acl-004fd900909c20039) not deleted, removing from state\n│\n│\n╵\n```\n\nList this repository's dependencies (and which have newer versions):\n\n```bash\nGITHUB_COM_TOKEN='YOUR_GITHUB_PERSONAL_TOKEN' ./renovate.sh\n```\n\n# Caveats\n\n* After `terraform destroy`, the following resources will still remain in AWS:\n  * KMS Kubernetes cluster encryption key.\n    * It will be automatically deleted after 30 days (the default value\n      of the `kms_key_deletion_window_in_days` eks module property).\n  * CloudWatch log groups.\n    * These will be automatically deleted after 90 days (the default value\n      of the `cloudwatch_log_group_retention_in_days` eks module property).\n* When running `terraform destroy`, the current user (aka the cluster creator)\n  is eagerly removed from the cluster, which means that, when there are problems, we\n  are not able to continue or troubleshoot without manually granting our role\n  the `AmazonEKSClusterAdminPolicy` access policy. 
For example, when using SSO\n  roles, we need to add an IAM access entry like:\n\n  | Property          | Value                                                                                                                   |\n  |-------------------|-------------------------------------------------------------------------------------------------------------------------|\n  | IAM principal ARN | `arn:aws:iam::123456:role/aws-reserved/sso.amazonaws.com/eu-west-1/AWSReservedSSO_AdministratorAccess_0000000000000000` |\n  | Type              | `Standard`                                                                                                              |\n  | Username          | `arn:aws:sts::123456:assumed-role/AWSReservedSSO_AdministratorAccess_0000000000000000/{{SessionName}}`                  |\n  | Access policies   | `AmazonEKSClusterAdminPolicy`                                                                                           |\n\n  You can list the current access entries with:\n\n  ```bash\n  aws eks list-access-entries \\\n    --cluster-name \"$(\n      terramate run -C stacks/eks-workloads \\\n        terraform output -raw cluster_name)\"\n  ```\n\n  The output should include the above `IAM principal ARN` value.\n\n# Notes\n\n* It's not possible to create multiple container image registries.\n  * A single registry is automatically created when the AWS Account is created.\n  * You have to create a separate repository for each of your container images.\n    * A repository name can include several path segments (e.g. `hello/world`).\n* Terramate does not support flowing Terraform outputs into other Terraform\n  program input variables. Instead, Terraform programs should use Terraform\n  data sources to find the resources that are already created. Those resources\n  should be found by their metadata (e.g. 
name) defined in a Terramate global.\n  * See https://github.com/terramate-io/terramate/discussions/525\n  * See https://github.com/terramate-io/terramate/discussions/571#discussioncomment-3542867\n  * See https://github.com/terramate-io/terramate/discussions/1090#discussioncomment-6659130\n* OpenID Connect Provider for EKS (aka [Enable IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html)) is enabled.\n  * An [aws_iam_openid_connect_provider resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider) is created.\n\n# References\n\n* [Environment variables to configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html)\n* [Token provider configuration with automatic authentication refresh for AWS IAM Identity Center](https://docs.aws.amazon.com/cli/latest/userguide/sso-configure-profile-token.html) (SSO)\n* [Managing access keys (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey)\n* [AWS General Reference](https://docs.aws.amazon.com/general/latest/gr/Welcome.html)\n  * [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)\n* [Amazon ECR private registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html)\n  * [Private registry authentication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html)\n* [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)\n* [Amazon EKS add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html)\n* [Amazon EKS VPC-CNI](https://github.com/aws/amazon-vpc-cni-k8s)\n* [EKS Workshop](https://www.eksworkshop.com)\n  * [Using Terraform](https://www.eksworkshop.com/docs/introduction/setup/your-account/using-terraform)\n    * 
[aws-samples/eks-workshop-v2 example repository](https://github.com/aws-samples/eks-workshop-v2/tree/main/cluster/terraform)\n* [Official Amazon EKS AMI awslabs/amazon-eks-ami repository](https://github.com/awslabs/amazon-eks-ami)\n* [terramate-quickstart-aws](https://github.com/terramate-io/terramate-quickstart-aws)\n* [aws-ia/terraform-aws-eks-blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints)\n* [aws-ia/terraform-aws-eks-blueprints-addons](https://github.com/aws-ia/terraform-aws-eks-blueprints-addons)\n* [aws-ia/terraform-aws-eks-blueprints-addon](https://github.com/aws-ia/terraform-aws-eks-blueprints-addon)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frgl%2Fterramate-aws-eks-example","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frgl%2Fterramate-aws-eks-example","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frgl%2Fterramate-aws-eks-example/lists"}