Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/camunda/camunda-tf-eks-module
Terraform modules targeting AWS EKS Cluster creation for Camunda 8 Self-Managed usage on AWS
c8 camunda camunda-infraex eks team-infrastructure-experience terraform
Last synced: 2 months ago
- Host: GitHub
- URL: https://github.com/camunda/camunda-tf-eks-module
- Owner: camunda
- License: apache-2.0
- Created: 2023-10-25T12:20:30.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-29T08:53:40.000Z (3 months ago)
- Last Synced: 2024-10-29T09:31:31.731Z (3 months ago)
- Topics: c8, camunda, camunda-infraex, eks, team-infrastructure-experience, terraform
- Language: Go
- Homepage: https://docs.camunda.io/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-terraform/
- Size: 537 KB
- Stars: 2
- Watchers: 5
- Forks: 0
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Camunda Terraform EKS Modules
[![Camunda](https://img.shields.io/badge/Camunda-FC5D0D)](https://www.camunda.com/)
[![tests](https://github.com/camunda/camunda-tf-eks-module/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/camunda/camunda-tf-eks-module/actions/workflows/tests.yml)
[![License](https://img.shields.io/github/license/camunda/camunda-tf-eks-module)](LICENSE)

Terraform module which creates AWS EKS (Kubernetes) resources with an opinionated configuration targeting Camunda 8, as well as an AWS Aurora RDS cluster and an OpenSearch domain.
**⚠️ Warning:** This project is not intended for production use but rather for demonstration purposes only. There are no guarantees or warranties provided.
## Documentation
The related [guide](https://docs.camunda.io/docs/next/self-managed/setup/deploy/amazon/amazon-eks/eks-terraform/) describes usage in more detail.
Consider installing Camunda 8 via [this guide](https://docs.camunda.io/docs/next/self-managed/setup/deploy/amazon/amazon-eks/eks-helm/) after deploying the AWS EKS cluster.

## Usage
Below is a simple example configuration for deploying an EKS cluster, an Aurora PostgreSQL database, and an OpenSearch domain.
See [AWS EKS Cluster inputs](./modules/eks-cluster/README.md#inputs), [AWS Aurora RDS inputs](./modules/aurora/README.md#inputs) and [AWS OpenSearch inputs](./modules/opensearch/README.md#inputs) for further configuration options and how they affect the cluster and database creation.
```hcl
module "eks_cluster" {
source = "github.com/camunda/camunda-tf-eks-module/modules/eks-cluster"region = "eu-central-1"
name = "cluster-name"cluster_service_ipv4_cidr = "10.190.0.0/16"
cluster_node_ipv4_cidr = "10.192.0.0/16"
}
``````hcl
module "postgresql" {
source = "github.com/camunda/camunda-tf-eks-module/modules/aurora"
engine_version = "15.4"
auto_minor_version_upgrade = false
cluster_name = "cluster-name-postgresql"username = "username"
password = "password"
vpc_id = module.eks_cluster.vpc_id
subnet_ids = module.eks_cluster.private_subnet_ids
cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)
instance_class = "db.t3.medium"
iam_auth_enabled = truedepends_on = [module.eks_cluster]
}
``````hcl
module "opensearch_domain" {
source = "github.com/camunda/camunda-tf-eks-module/modules/opensearch"domain_name = "my-opensearch-domain"
subnet_ids = module.eks_cluster.private_subnet_ids
vpc_id = module.eks_cluster.vpc_id
cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)instance_type = "t3.small.search"
instance_count = 3
ebs_volume_size = 100advanced_security_enabled = true
advanced_security_internal_user_database_enabled = true
advanced_security_master_user_name = "admin"
advanced_security_master_user_password = "password"depends_on = [module.eks_cluster]
}
```
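Once `terraform apply` has finished, you can point `kubectl` at the new cluster to verify that it is reachable. A minimal sketch, assuming the AWS CLI and `kubectl` are installed and that the region and cluster name match the example above:

```bash
# Add or update the local kubeconfig entry for the new cluster
aws eks update-kubeconfig --region eu-central-1 --name cluster-name

# Verify that the managed nodes have joined the cluster
kubectl get nodes
```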
#### Known Deletion Issues

During the deletion process (`terraform destroy`) of the EKS Cluster, you may encounter an error message related to the `kubernetes_storage_class`:
````
Error: Get "http://localhost/apis/storage.k8s.io/v1/storageclasses/ebs-sc": dial tcp [::1]:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_storage_class_v1.ebs_sc,
│ on .terraform/modules/eks_cluster/modules/eks-cluster/cluster.tf line 156, in resource "kubernetes_storage_class_v1" "ebs_sc":
│ 156: resource "kubernetes_storage_class_v1" "ebs_sc" {
│
╵
````

To resolve this issue, you can set the variable `create_ebs_gp3_default_storage_class` to `false`, which skips the creation of the `kubernetes_storage_class` resource. This helps to avoid dependency issues during deletion. Run the following command:
```bash
terraform destroy -var="create_ebs_gp3_default_storage_class=false"
```

If you still encounter the issue, you may need to manually remove the state for the storage class:
```bash
terraform state rm module.eks_cluster.kubernetes_storage_class_v1.ebs_sc
```

After performing these steps, re-run `terraform destroy` to complete the deletion process without further interruptions.
#### GitHub Actions
You can automate the deployment and deletion of the EKS cluster and Aurora database using GitHub Actions.
**Note:** This is recommended only for development and testing purposes, not for production use.
Below are examples of GitHub Actions workflows for deploying and deleting these resources.
For more details, refer to the corresponding [EKS Actions README](./.github/actions/eks-manage-cluster/README.md), [Aurora Actions README](./.github/actions/aurora-manage-cluster/README.md), [OpenSearch Actions README](./.github/actions/opensearch-manage-cluster/README.md), and [Cleanup Actions README](./.github/actions/eks-cleanup-resources/README.md).
An example workflow can be found [here](./.github/workflows/test-gha-eks.yml).
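With the [GitHub CLI](https://cli.github.com/) installed and a fork of this repository checked out, that workflow can also be triggered from the command line. A minimal sketch; the workflow may define inputs not shown here:

```bash
# Trigger the example workflow on the main branch
gh workflow run test-gha-eks.yml --ref main

# Follow the progress of the most recent run
gh run watch
```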
#### Advanced usage with IRSA
This documentation provides a step-by-step guide to creating an EKS cluster, an Aurora RDS instance, and an OpenSearch domain with IRSA (IAM Roles for Service Accounts) roles using Terraform modules. The modules create the necessary IAM roles and policies for Aurora and OpenSearch. To simplify the configuration, the modules use the outputs of the EKS cluster module to define the IRSA roles and policies.
For further details and more in-depth configuration, refer to the official documentation:
- [Amazon EKS Terraform setup](https://docs.camunda.io/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-terraform/)
- [IRSA roles setup](https://docs.camunda.io/docs/self-managed/setup/deploy/amazon/amazon-eks/irsa/)

### Aurora IRSA Role and Policy
The Aurora module uses the following outputs from the EKS cluster module to define the IRSA role and policy:
- `module.eks_cluster.oidc_provider_arn`: The ARN of the OIDC provider for the EKS cluster.
- `module.eks_cluster.oidc_provider_id`: The ID of the OIDC provider for the EKS cluster.
- `var.account_id`: Your AWS account ID
- `var.aurora_region`: Your Aurora AWS Region
- `var.aurora_irsa_username`: The username used to access AuroraDB. This username is different from the superuser. The user must also be created manually in the database to enable the IRSA connection, as described in [the steps below](#create-irsa-user-on-the-database).
- `var.aurora_namespace`: The Kubernetes namespace to allow access from
- `var.aurora_service_account`: The Kubernetes ServiceAccount to allow access from

You need to define the IAM role trust policy and access policy for Aurora. Here's an example of how to define these policies using the outputs of the EKS cluster module:
```hcl
module "postgresql" {
  # ...

  iam_roles_with_policies = [
    {
      role_name = "AuroraRole-your-cluster" # ensure uniqueness of this one

      # Trust policy following the standard IRSA pattern: the configured
      # ServiceAccount in the configured namespace may assume this role
      # through the cluster's OIDC provider.
      trust_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.${var.aurora_region}.amazonaws.com/id/${module.eks_cluster.oidc_provider_id}:sub": "system:serviceaccount:${var.aurora_namespace}:${var.aurora_service_account}"
        }
      }
    }
  ]
}
EOF

      # Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
      # Since the DbiResourceId may be unknown during the apply process, and because each instance
      # of the RDS cluster contains its own DbiResourceId, we use the wildcard `dbuser:*` to apply
      # to all database instances.
      access_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:${var.aurora_region}:${var.account_id}:dbuser:*/${var.aurora_irsa_username}"
    }
  ]
}
EOF
    }
  ]
}
```

**Aurora Service Account**

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aurora-service-account
  namespace: <namespace>
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/AuroraRole-your-cluster
```
You can retrieve the role ARN from the module output: `aurora_iam_role_arns['AuroraRole-your-cluster']`.
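The database user referenced by `var.aurora_irsa_username` must also exist in the database before IAM authentication can work. A minimal sketch with `psql`, following the standard RDS IAM authentication setup; `<aurora-endpoint>` and `irsa-username` are placeholders, and `username` is the superuser from the module configuration above:

```bash
# Connect as the superuser defined in the module configuration, then create
# the IRSA user and grant it the rds_iam role so it can log in with IAM tokens.
psql -h <aurora-endpoint> -p 5432 -U username -d postgres \
  -c 'CREATE USER "irsa-username" WITH LOGIN;' \
  -c 'GRANT rds_iam TO "irsa-username";'
```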
**OpenSearch Service Account**

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opensearch-service-account
  namespace: <namespace>
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/OpenSearchRole
```
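Assuming the manifest above is saved as `opensearch-service-account.yml` (an illustrative filename) with the placeholders filled in, it can be applied and checked as follows:

```bash
# Create the annotated ServiceAccount in its namespace
kubectl apply -f opensearch-service-account.yml

# Confirm the IRSA role annotation was applied
kubectl get serviceaccount opensearch-service-account -n <namespace> -o yaml
```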
You can retrieve the role ARN from the module output: `opensearch_iam_role_arns['OpenSearch-your-cluster']`.

## Support
Please note that the modules have been tested with **[Terraform](https://github.com/hashicorp/terraform)** using the version described in the [.tool-versions](./.tool-versions) file of this project.