# terraform-aws-eks

All terraform modules that are related or supporting EKS setup.

https://github.com/dasmeta/terraform-aws-eks

Topics: aws, cluster, eks, kubernetes, module, terraform, terraform-module
# Why

To spin up a complete EKS cluster with all necessary components.
Those include:
- vpc (NOTE: the vpc submodule moved into separate repo https://github.com/dasmeta/terraform-aws-vpc)
- eks cluster
- alb ingress controller
- fluentbit
- external secrets
- metrics to cloudwatch

## How to run

Basic usage, creating a new VPC:

```hcl
data "aws_availability_zones" "available" {}

locals {
  cluster_endpoint_public_access = true
  cluster_enabled_log_types      = ["audit"]

  vpc = {
    create = {
      name               = "dev"
      availability_zones = data.aws_availability_zones.available.names
      private_subnets    = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
      public_subnets     = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
      cidr               = "172.16.0.0/16"
      public_subnet_tags = {
        "kubernetes.io/cluster/dev" = "shared"
        "kubernetes.io/role/elb"    = "1"
      }
      private_subnet_tags = {
        "kubernetes.io/cluster/dev"       = "shared"
        "kubernetes.io/role/internal-elb" = "1"
      }
    }
  }

  cluster_name        = "your-cluster-name-goes-here"
  alb_log_bucket_name = "your-log-bucket-name-goes-here"

  fluent_bit_name = "fluent-bit"
  log_group_name  = "fluent-bit-cloudwatch-env"
}
```

Basic usage with an already created VPC:

```hcl
data "aws_availability_zones" "available" {}

locals {
  cluster_endpoint_public_access = true
  cluster_enabled_log_types      = ["audit"]

  vpc = {
    link = {
      id                 = "vpc-1234"
      private_subnet_ids = ["subnet-1", "subnet-2"]
    }
  }

  cluster_name        = "your-cluster-name-goes-here"
  alb_log_bucket_name = "your-log-bucket-name-goes-here"

  fluent_bit_name = "fluent-bit"
  log_group_name  = "fluent-bit-cloudwatch-env"
}
```

Minimum configuration:

```hcl
module "cluster_min" {
  source  = "dasmeta/eks/aws"
  version = "0.1.1"

  cluster_name = local.cluster_name
  users        = local.users

  vpc = {
    link = {
      id                 = "vpc-1234"
      private_subnet_ids = ["subnet-1", "subnet-2"]
    }
  }
}
```

Maximum configuration (@TODO: the max param passing setup needs to be checked/fixed):

```hcl
module "cluster_max" {
  source  = "dasmeta/eks/aws"
  version = "0.1.1"

  ### VPC
  vpc = {
    create = {
      name               = "dev"
      availability_zones = data.aws_availability_zones.available.names
      private_subnets    = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
      public_subnets     = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
      cidr               = "172.16.0.0/16"
      public_subnet_tags = {
        "kubernetes.io/cluster/dev" = "shared"
        "kubernetes.io/role/elb"    = "1"
      }
      private_subnet_tags = {
        "kubernetes.io/cluster/dev"       = "shared"
        "kubernetes.io/role/internal-elb" = "1"
      }
    }
  }

  cluster_enabled_log_types      = local.cluster_enabled_log_types
  cluster_endpoint_public_access = local.cluster_endpoint_public_access

  ### EKS
  cluster_name    = local.cluster_name
  manage_aws_auth = true

  # IAM users with username and group. The group defaults to ["system:masters"].
  users = [
    {
      username = "devops1"
      group    = ["system:masters"]
    },
    {
      username = "devops2"
      group    = ["system:kube-scheduler"]
    },
    {
      username = "devops3"
    }
  ]

  # Use node_groups to create nodes in a specific subnet/zone (note: in this case the EC2 instances don't get a specific name).
  # Otherwise you can use the worker_groups variable.
  node_groups = {
    example = {
      name        = "nodegroup"
      name-prefix = "nodegroup"
      additional_tags = {
        "Name"     = "node"
        "ExtraTag" = "ExtraTag"
      }

      instance_type          = "t3.xlarge"
      max_capacity           = 1
      disk_size              = 50
      create_launch_template = false
      subnet                 = ["subnet_id"]
    }
  }

  node_groups_default = {
    disk_size      = 50
    instance_types = ["t3.medium"]
  }

  worker_groups = {
    default = {
      name             = "nodes"
      instance_type    = "t3.xlarge"
      asg_max_size     = 3
      root_volume_size = 50
    }
  }

  workers_group_defaults = {
    launch_template_use_name_prefix = true
    launch_template_name            = "default"
    root_volume_type                = "gp2"
    root_volume_size                = 50
  }

  ### ALB-INGRESS-CONTROLLER
  alb_log_bucket_name = local.alb_log_bucket_name

  ### FLUENT-BIT
  fluent_bit_name = local.fluent_bit_name
  log_group_name  = local.log_group_name

  ### METRICS-SERVER
  # Should be refactored to be installed from the cluster: for prod it is done from metrics-server.tf
  # enable_metrics_server = false
  metrics_server_name = "metrics-server"
}
```

## Requirements

| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | ~> 1.3 |
| [aws](#requirement\_aws) | >= 3.31, < 5.0.0 |
| [helm](#requirement\_helm) | >= 2.4.1 |
| [kubectl](#requirement\_kubectl) | ~>1.14 |
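
The constraints above can be pinned in a `terraform` block. A minimal sketch; the `gavinbunney/kubectl` provider source is an assumption (the requirements table names only the provider, not its source):

```hcl
terraform {
  required_version = "~> 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.31, < 5.0.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    kubectl = {
      # Assumed source; adjust if your setup uses a different kubectl provider.
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
  }
}
```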

## Providers

| Name | Version |
|------|---------|
| [aws](#provider\_aws) | >= 3.31, < 5.0.0 |
| [helm](#provider\_helm) | >= 2.4.1 |
| [kubernetes](#provider\_kubernetes) | n/a |

## Modules

| Name | Source | Version |
|------|--------|---------|
| [adot](#module\_adot) | ./modules/adot | n/a |
| [alb-ingress-controller](#module\_alb-ingress-controller) | ./modules/aws-load-balancer-controller | n/a |
| [api-gw-controller](#module\_api-gw-controller) | ./modules/api-gw | n/a |
| [autoscaler](#module\_autoscaler) | ./modules/autoscaler | n/a |
| [cloudwatch-metrics](#module\_cloudwatch-metrics) | ./modules/cloudwatch-metrics | n/a |
| [cw\_alerts](#module\_cw\_alerts) | dasmeta/monitoring/aws//modules/alerts | 1.3.5 |
| [ebs-csi](#module\_ebs-csi) | ./modules/ebs-csi | n/a |
| [efs-csi-driver](#module\_efs-csi-driver) | ./modules/efs-csi | n/a |
| [eks-cluster](#module\_eks-cluster) | ./modules/eks | n/a |
| [external-dns](#module\_external-dns) | ./modules/external-dns | n/a |
| [external-secrets](#module\_external-secrets) | ./modules/external-secrets | n/a |
| [flagger](#module\_flagger) | ./modules/flagger | n/a |
| [fluent-bit](#module\_fluent-bit) | ./modules/fluent-bit | n/a |
| [metrics-server](#module\_metrics-server) | ./modules/metrics-server | n/a |
| [nginx-ingress-controller](#module\_nginx-ingress-controller) | ./modules/nginx-ingress-controller/ | n/a |
| [node-problem-detector](#module\_node-problem-detector) | ./modules/node-problem-detector | n/a |
| [olm](#module\_olm) | ./modules/olm | n/a |
| [portainer](#module\_portainer) | ./modules/portainer | n/a |
| [priority\_class](#module\_priority\_class) | ./modules/priority-class/ | n/a |
| [sso-rbac](#module\_sso-rbac) | ./modules/sso-rbac | n/a |
| [vpc](#module\_vpc) | dasmeta/vpc/aws | 1.0.1 |
| [weave-scope](#module\_weave-scope) | ./modules/weave-scope | n/a |

## Resources

| Name | Type |
|------|------|
| [helm_release.cert-manager](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.kube-state-metrics](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [kubernetes_namespace.meta-system](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
| [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [account\_id](#input\_account\_id) | AWS Account Id to apply changes into | `string` | `null` | no |
| [additional\_priority\_classes](#input\_additional\_priority\_classes) | Defines Priority Classes in Kubernetes, used to assign different levels of priority to pods. By default, this module creates three Priority Classes: 'high' (1000000), 'medium' (500000) and 'low' (250000). You can also provide a custom list of Priority Classes if needed. | <pre>list(object({<br>  name  = string<br>  value = string # number in string form<br>}))</pre> | `[]` | no |
| [adot\_config](#input\_adot\_config) | accept\_namespace\_regex defines the list of namespaces from which metrics will be exported, and additional\_metrics defines additional metrics to export. | <pre>object({<br>  accept_namespace_regex = optional(string, "(default\|kube-system)")<br>  additional_metrics     = optional(list(string), [])<br>  log_group_name         = optional(string, "adot")<br>  log_retention          = optional(number, 14)<br>  helm_values            = optional(any, null)<br>  logging_enable         = optional(bool, false)<br>  resources = optional(object({<br>    limit = object({<br>      cpu    = optional(string, "200m")<br>      memory = optional(string, "200Mi")<br>    })<br>    requests = object({<br>      cpu    = optional(string, "200m")<br>      memory = optional(string, "200Mi")<br>    })<br>  }), {<br>    limit = {<br>      cpu    = "200m"<br>      memory = "200Mi"<br>    }<br>    requests = {<br>      cpu    = "200m"<br>      memory = "200Mi"<br>    }<br>  })<br>})</pre> | <pre>{<br>  "accept_namespace_regex": "(default\|kube-system)",<br>  "additional_metrics": [],<br>  "helm_values": null,<br>  "log_group_name": "adot",<br>  "log_retention": 14,<br>  "logging_enable": false,<br>  "resources": {<br>    "limit": {<br>      "cpu": "200m",<br>      "memory": "200Mi"<br>    },<br>    "requests": {<br>      "cpu": "200m",<br>      "memory": "200Mi"<br>    }<br>  }<br>}</pre> | no |
| [adot\_version](#input\_adot\_version) | The version of the AWS Distro for OpenTelemetry addon to use. | `string` | `"v0.78.0-eksbuild.1"` | no |
| [alarms](#input\_alarms) | Alarms are enabled by default; set the SNS topic name to which alarms are sent. To customize alarm thresholds use custom\_values. | <pre>object({<br>  enabled       = optional(bool, true)<br>  sns_topic     = string<br>  custom_values = optional(any, {})<br>})</pre> | n/a | yes |
| [alb\_log\_bucket\_name](#input\_alb\_log\_bucket\_name) | n/a | `string` | `""` | no |
| [alb\_log\_bucket\_path](#input\_alb\_log\_bucket\_path) | ALB-INGRESS-CONTROLLER | `string` | `""` | no |
| [api\_gateway\_resources](#input\_api\_gateway\_resources) | Nested map containing API, Stage, and VPC Link resources | <pre>list(object({<br>  namespace = string<br>  api = object({<br>    name         = string<br>    protocolType = string<br>  })<br>  stages = optional(list(object({<br>    name        = string<br>    namespace   = string<br>    apiRef_name = string<br>    stageName   = string<br>    autoDeploy  = bool<br>    description = string<br>  })))<br>  vpc_links = optional(list(object({<br>    name      = string<br>    namespace = string<br>  })))<br>}))</pre> | `[]` | no |
| [api\_gw\_deploy\_region](#input\_api\_gw\_deploy\_region) | Region in which the API gateway will be configured | `string` | `""` | no |
| [autoscaler\_image\_patch](#input\_autoscaler\_image\_patch) | The patch number of autoscaler image | `number` | `0` | no |
| [autoscaler\_limits](#input\_autoscaler\_limits) | n/a | <pre>object({<br>  cpu    = string<br>  memory = string<br>})</pre> | <pre>{<br>  "cpu": "100m",<br>  "memory": "600Mi"<br>}</pre> | no |
| [autoscaler\_requests](#input\_autoscaler\_requests) | n/a | <pre>object({<br>  cpu    = string<br>  memory = string<br>})</pre> | <pre>{<br>  "cpu": "100m",<br>  "memory": "600Mi"<br>}</pre> | no |
| [autoscaling](#input\_autoscaling) | Whether to enable autoscaling in EKS | `bool` | `true` | no |
| [bindings](#input\_bindings) | Variable which describes group and role binding | <pre>list(object({<br>  group     = string<br>  namespace = string<br>  roles     = list(string)<br>}))</pre> | `[]` | no |
| [cluster\_enabled\_log\_types](#input\_cluster\_enabled\_log\_types) | A list of the desired control plane logs to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | `[]` | no |
| [cluster\_endpoint\_public\_access](#input\_cluster\_endpoint\_public\_access) | n/a | `bool` | `true` | no |
| [cluster\_name](#input\_cluster\_name) | Name of the eks cluster to create. | `string` | n/a | yes |
| [cluster\_version](#input\_cluster\_version) | Allows to set/change the kubernetes cluster version; the kubernetes version needs to be updated at least once a year. Please check here for available versions https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html | `string` | `"1.27"` | no |
| [create](#input\_create) | Whether to create cluster and other resources or not | `bool` | `true` | no |
| [create\_cert\_manager](#input\_create\_cert\_manager) | If enabled it always gets deployed to the cert-manager namespace. | `bool` | `false` | no |
| [ebs\_csi\_version](#input\_ebs\_csi\_version) | EBS CSI driver addon version | `string` | `"v1.15.0-eksbuild.1"` | no |
| [efs\_id](#input\_efs\_id) | EFS filesystem id in AWS | `string` | `null` | no |
| [efs\_storage\_classes](#input\_efs\_storage\_classes) | Additional storage class configurations: by default, 2 storage classes are created - efs-sc and efs-sc-root, which has uid 0. One can add other storage classes besides these 2. | <pre>list(object({<br>  name : string<br>  provisioning_mode : optional(string, "efs-ap")<br>  file_system_id : string<br>  directory_perms : optional(string, "755")<br>  base_path : optional(string, "/")<br>  uid : optional(number)<br>}))</pre> | `[]` | no |
| [enable\_alb\_ingress\_controller](#input\_enable\_alb\_ingress\_controller) | Whether alb ingress controller enabled. | `bool` | `true` | no |
| [enable\_api\_gw\_controller](#input\_enable\_api\_gw\_controller) | Whether to enable the API-GW controller | `bool` | `false` | no |
| [enable\_ebs\_driver](#input\_enable\_ebs\_driver) | Whether to enable the EBS-CSI driver | `bool` | `true` | no |
| [enable\_efs\_driver](#input\_enable\_efs\_driver) | Whether to install the EFS driver in EKS | `bool` | `false` | no |
| [enable\_external\_secrets](#input\_enable\_external\_secrets) | Whether to enable external-secrets operator | `bool` | `true` | no |
| [enable\_kube\_state\_metrics](#input\_enable\_kube\_state\_metrics) | Enable kube-state-metrics | `bool` | `false` | no |
| [enable\_metrics\_server](#input\_enable\_metrics\_server) | METRICS-SERVER | `bool` | `false` | no |
| [enable\_node\_problem\_detector](#input\_enable\_node\_problem\_detector) | n/a | `bool` | `true` | no |
| [enable\_olm](#input\_enable\_olm) | To install OLM controller (experimental). | `bool` | `false` | no |
| [enable\_portainer](#input\_enable\_portainer) | Enable Portainer provisioning or not | `bool` | `false` | no |
| [enable\_sso\_rbac](#input\_enable\_sso\_rbac) | Enable SSO RBAC integration or not | `bool` | `false` | no |
| [enable\_waf\_for\_alb](#input\_enable\_waf\_for\_alb) | Enables WAF and WAF V2 addons for ALB | `bool` | `false` | no |
| [external\_dns](#input\_external\_dns) | Allows to install the external-dns helm chart and related roles, which allows to automatically create R53 records based on ingress/service domain/host configs | <pre>object({<br>  enabled = optional(bool, false)<br>  configs = optional(any, {})<br>})</pre> | <pre>{<br>  "enabled": false<br>}</pre> | no |
| [external\_secrets\_namespace](#input\_external\_secrets\_namespace) | The namespace of external-secret operator | `string` | `"kube-system"` | no |
| [flagger](#input\_flagger) | Allows to create/deploy the flagger operator to have custom rollout strategies like canary/blue-green, and also allows to create custom flagger metric templates | <pre>object({<br>  enabled                 = optional(bool, false)<br>  namespace               = optional(string, "ingress-nginx") # the flagger operator helm chart is installed in the same namespace as the mesh/ingress provider, so set this based on which ingress/mesh is used; more info in https://artifacthub.io/packages/helm/flagger/flagger<br>  configs                 = optional(any, {}) # available options can be found in https://artifacthub.io/packages/helm/flagger/flagger<br>  metric_template_configs = optional(any, {}) # available options can be found in https://github.com/dasmeta/helm/tree/flagger-metric-template-0.1.0/charts/flagger-metric-template<br>  enable_metric_template  = optional(bool, false)<br>  enable_loadtester       = optional(bool, false)<br>})</pre> | <pre>{<br>  "enabled": false<br>}</pre> | no |
| [fluent\_bit\_configs](#input\_fluent\_bit\_configs) | Fluent Bit configs | <pre>object({<br>  enabled               = optional(string, true)<br>  fluent_bit_name       = optional(string, "")<br>  log_group_name        = optional(string, "")<br>  system_log_group_name = optional(string, "")<br>  log_retention_days    = optional(number, 90)<br>  values_yaml           = optional(string, "")<br>  configs = optional(object({<br>    inputs                     = optional(string, "")<br>    filters                    = optional(string, "")<br>    outputs                    = optional(string, "")<br>    cloudwatch_outputs_enabled = optional(bool, true)<br>  }), {})<br>  drop_namespaces        = optional(list(string), [])<br>  log_filters            = optional(list(string), [])<br>  additional_log_filters = optional(list(string), [])<br>  kube_namespaces        = optional(list(string), [])<br>  image_pull_secrets     = optional(list(string), [])<br>})</pre> | <pre>{<br>  "additional_log_filters": [<br>    "ELB-HealthChecker",<br>    "Amazon-Route53-Health-Check-Service"<br>  ],<br>  "configs": {<br>    "cloudwatch_outputs_enabled": true,<br>    "filters": "",<br>    "inputs": "",<br>    "outputs": ""<br>  },<br>  "drop_namespaces": [<br>    "kube-system",<br>    "opentelemetry-operator-system",<br>    "adot",<br>    "cert-manager",<br>    "opentelemetry.*",<br>    "meta.*"<br>  ],<br>  "enabled": true,<br>  "fluent_bit_name": "",<br>  "image_pull_secrets": [],<br>  "kube_namespaces": [<br>    "kube.*",<br>    "meta.*",<br>    "adot.*",<br>    "devops.*",<br>    "cert-manager.*",<br>    "git.*",<br>    "opentelemetry.*",<br>    "stakater.*",<br>    "renovate.*"<br>  ],<br>  "log_filters": [<br>    "kube-probe",<br>    "health",<br>    "prometheus",<br>    "liveness"<br>  ],<br>  "log_group_name": "",<br>  "log_retention_days": 90,<br>  "system_log_group_name": "",<br>  "values_yaml": ""<br>}</pre> | no |
| [manage\_aws\_auth](#input\_manage\_aws\_auth) | n/a | `bool` | `true` | no |
| [map\_roles](#input\_map\_roles) | Additional IAM roles to add to the aws-auth configmap. | <pre>list(object({<br>  rolearn  = string<br>  username = string<br>  groups   = list(string)<br>}))</pre> | `[]` | no |
| [metrics\_exporter](#input\_metrics\_exporter) | Metrics Exporter, can use cloudwatch or adot | `string` | `"adot"` | no |
| [metrics\_server\_name](#input\_metrics\_server\_name) | n/a | `string` | `"metrics-server"` | no |
| [nginx\_ingress\_controller\_config](#input\_nginx\_ingress\_controller\_config) | Nginx ingress controller configs | <pre>object({<br>  enabled          = optional(bool, false)<br>  name             = optional(string, "nginx")<br>  create_namespace = optional(bool, true)<br>  namespace        = optional(string, "ingress-nginx")<br>  replicacount     = optional(number, 3)<br>  metrics_enabled  = optional(bool, true)<br>})</pre> | <pre>{<br>  "create_namespace": true,<br>  "enabled": false,<br>  "metrics_enabled": true,<br>  "name": "nginx",<br>  "namespace": "ingress-nginx",<br>  "replicacount": 3<br>}</pre> | no |
| [node\_groups](#input\_node\_groups) | Map of EKS managed node group definitions to create | `any` | <pre>{<br>  "default": {<br>    "desired_size": 2,<br>    "iam_role_additional_policies": [<br>      "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"<br>    ],<br>    "instance_types": [<br>      "t3.large"<br>    ],<br>    "max_size": 4,<br>    "min_size": 2<br>  }<br>}</pre> | no |
| [node\_groups\_default](#input\_node\_groups\_default) | Map of EKS managed node group default configurations | `any` | <pre>{<br>  "disk_size": 50,<br>  "iam_role_additional_policies": [<br>    "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"<br>  ],<br>  "instance_types": [<br>    "t3.large"<br>  ]<br>}</pre> | no |
| [node\_security\_group\_additional\_rules](#input\_node\_security\_group\_additional\_rules) | n/a | `any` | <pre>{<br>  "ingress_cluster_10250": {<br>    "description": "Metric server to node groups",<br>    "from_port": 10250,<br>    "protocol": "tcp",<br>    "self": true,<br>    "to_port": 10250,<br>    "type": "ingress"<br>  },<br>  "ingress_cluster_8443": {<br>    "description": "Metric server to node groups",<br>    "from_port": 8443,<br>    "protocol": "tcp",<br>    "source_cluster_security_group": true,<br>    "to_port": 8443,<br>    "type": "ingress"<br>  }<br>}</pre> | no |
| [portainer\_config](#input\_portainer\_config) | Portainer hostname and ingress config. | <pre>object({<br>  host           = optional(string, "portainer.dasmeta.com")<br>  enable_ingress = optional(bool, true)<br>})</pre> | `{}` | no |
| [prometheus\_metrics](#input\_prometheus\_metrics) | Prometheus Metrics | `any` | `[]` | no |
| [region](#input\_region) | AWS Region name. | `string` | `null` | no |
| [roles](#input\_roles) | Variable which describes which roles a user will have in K8s | <pre>list(object({<br>  actions   = list(string)<br>  resources = list(string)<br>}))</pre> | `[]` | no |
| [scale\_down\_unneeded\_time](#input\_scale\_down\_unneeded\_time) | Scale down unneeded time, in minutes | `number` | `2` | no |
| [send\_alb\_logs\_to\_cloudwatch](#input\_send\_alb\_logs\_to\_cloudwatch) | Whether send alb logs to CloudWatch or not. | `bool` | `true` | no |
| [users](#input\_users) | List of users to open eks cluster api access | `list(any)` | `[]` | no |
| [vpc](#input\_vpc) | VPC configuration for eks; we support both cases: creating a new vpc (create field) and using an already created one (link field) | <pre>object({<br>  # for linking an existing vpc<br>  link = optional(object({<br>    id                 = string<br>    private_subnet_ids = list(string) # please have the existing vpc public/private subnets (at least 2 needed) tagged with the corresponding tags (look into the create case subnet tag defaults)<br>  }), { id = null, private_subnet_ids = null })<br>  # for creating a new vpc<br>  create = optional(object({<br>    name                = string<br>    availability_zones  = list(string)<br>    cidr                = string<br>    private_subnets     = list(string)<br>    public_subnets      = list(string)<br>    public_subnet_tags  = optional(map(any), {}) # to pass additional tags for the public subnets or override the default ones. The defaults are: {"kubernetes.io/cluster/${var.cluster_name}" = "shared", "kubernetes.io/role/elb" = 1}<br>    private_subnet_tags = optional(map(any), {}) # to pass additional tags for the private subnets or override the default ones. The defaults are: {"kubernetes.io/cluster/${var.cluster_name}" = "shared", "kubernetes.io/role/internal-elb" = 1}<br>  }), { name = null, availability_zones = null, cidr = null, private_subnets = null, public_subnets = null })<br>})</pre> | n/a | yes |
| [weave\_scope\_config](#input\_weave\_scope\_config) | Weave scope namespace configuration variables | <pre>object({<br>  create_namespace        = bool<br>  namespace               = string<br>  annotations             = map(string)<br>  ingress_host            = string<br>  ingress_class           = string<br>  ingress_name            = string<br>  service_type            = string<br>  weave_helm_release_name = string<br>})</pre> | <pre>{<br>  "annotations": {},<br>  "create_namespace": true,<br>  "ingress_class": "",<br>  "ingress_host": "",<br>  "ingress_name": "weave-ingress",<br>  "namespace": "meta-system",<br>  "service_type": "NodePort",<br>  "weave_helm_release_name": "weave"<br>}</pre> | no |
| [weave\_scope\_enabled](#input\_weave\_scope\_enabled) | Whether to enable Weave Scope | `bool` | `false` | no |
| [worker\_groups](#input\_worker\_groups) | Worker groups. | `any` | `{}` | no |
| [workers\_group\_defaults](#input\_workers\_group\_defaults) | Worker group defaults. | `any` | <pre>{<br>  "launch_template_name": "default",<br>  "launch_template_use_name_prefix": true,<br>  "root_volume_size": 50,<br>  "root_volume_type": "gp2"<br>}</pre> | no |
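
As an illustration of the object-typed inputs above, a hedged fragment enabling a couple of optional components. Values are examples only; `alarms` is included because it is the other required input besides `cluster_name` and `vpc`:

```hcl
module "eks" {
  source  = "dasmeta/eks/aws"
  version = "0.1.1"

  cluster_name = "dev"

  # Required: alarms are enabled by default, only the SNS topic must be set.
  alarms = {
    sns_topic = "devops-alerts" # example topic name
  }

  # Required: link an existing VPC (or use the create variant instead).
  vpc = {
    link = {
      id                 = "vpc-1234"
      private_subnet_ids = ["subnet-1", "subnet-2"]
    }
  }

  # Optional components; unset fields fall back to the documented defaults.
  external_dns = {
    enabled = true
  }

  flagger = {
    enabled           = true
    enable_loadtester = true
  }
}
```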

## Outputs

| Name | Description |
|------|-------------|
| [cluster\_certificate](#output\_cluster\_certificate) | EKS cluster certificate used for authentication/access in helm/kubectl/kubernetes providers |
| [cluster\_host](#output\_cluster\_host) | EKS cluster host name used for authentication/access in helm/kubectl/kubernetes providers |
| [cluster\_iam\_role\_name](#output\_cluster\_iam\_role\_name) | n/a |
| [cluster\_id](#output\_cluster\_id) | n/a |
| [cluster\_primary\_security\_group\_id](#output\_cluster\_primary\_security\_group\_id) | n/a |
| [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | n/a |
| [cluster\_token](#output\_cluster\_token) | EKS cluster token used for authentication/access in helm/kubectl/kubernetes providers |
| [eks\_auth\_configmap](#output\_eks\_auth\_configmap) | n/a |
| [eks\_module](#output\_eks\_module) | n/a |
| [eks\_oidc\_root\_ca\_thumbprint](#output\_eks\_oidc\_root\_ca\_thumbprint) | Grab eks\_oidc\_root\_ca\_thumbprint from oidc\_provider\_arn. |
| [map\_user\_data](#output\_map\_user\_data) | n/a |
| [oidc\_provider\_arn](#output\_oidc\_provider\_arn) | ## CLUSTER |
| [role\_arns](#output\_role\_arns) | n/a |
| [role\_arns\_without\_path](#output\_role\_arns\_without\_path) | n/a |
| [vpc\_cidr\_block](#output\_vpc\_cidr\_block) | The cidr block of the vpc |
| [vpc\_default\_security\_group\_id](#output\_vpc\_default\_security\_group\_id) | The ID of default security group created for vpc |
| [vpc\_id](#output\_vpc\_id) | The newly created vpc id |
| [vpc\_nat\_public\_ips](#output\_vpc\_nat\_public\_ips) | The list of elastic public IPs for vpc |
| [vpc\_private\_subnets](#output\_vpc\_private\_subnets) | The newly created vpc private subnets IDs list |
| [vpc\_public\_subnets](#output\_vpc\_public\_subnets) | The newly created vpc public subnets IDs list |
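
The cluster\_host, cluster\_token and cluster\_certificate outputs are intended for wiring up the helm/kubectl/kubernetes providers. A sketch, assuming the module is instantiated as `module "eks"`:

```hcl
# Assumes the module instance is named "eks"; adjust to match your config.
provider "kubernetes" {
  host                   = module.eks.cluster_host
  token                  = module.eks.cluster_token
  cluster_ca_certificate = module.eks.cluster_certificate
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_host
    token                  = module.eks.cluster_token
    cluster_ca_certificate = module.eks.cluster_certificate
  }
}
```

Depending on how the certificate output is encoded, you may need to wrap it in `base64decode()`.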