# terraform-aws-lambda-elasticsearch-cleanup

Terraform module to provision a scheduled Lambda function which will
delete old Elasticsearch indexes using [SigV4Auth](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) authentication. The
lambda function can optionally send output to an SNS topic if the
topic ARN is given. This module was largely inspired by
[aws-lambda-es-cleanup](https://github.com/cloudreach/aws-lambda-es-cleanup).
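
For example, a minimal sketch of wiring up that optional notification topic; the topic name and resource label are hypothetical, and the `sns_arn` input is documented in the Inputs table below:

```hcl
# Minimal sketch (not from this repo): an SNS topic whose ARN can be passed
# to the module via the optional `sns_arn` input.
resource "aws_sns_topic" "es_cleanup_notifications" {
  name = "es-cleanup-notifications" # hypothetical topic name
}

# Then, in the module block shown under Usage:
#   sns_arn = aws_sns_topic.es_cleanup_notifications.arn
```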

> [!TIP]
> #### 👽 Use Atmos with Terraform
> Cloud Posse uses [`atmos`](https://atmos.tools) to easily orchestrate multiple environments using Terraform.
> Works with [GitHub Actions](https://atmos.tools/integrations/github-actions/), [Atlantis](https://atmos.tools/integrations/atlantis), or [Spacelift](https://atmos.tools/integrations/spacelift).

## Usage

For a complete example, see [examples/complete](examples/complete).

For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test](test).

```hcl
module "elasticsearch_cleanup" {
  source               = "https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup.git?ref=master"
  es_endpoint          = module.elasticsearch.domain_endpoint
  es_domain_arn        = module.elasticsearch.domain_arn
  es_security_group_id = module.elasticsearch.security_group_id
  vpc_id               = module.vpc.vpc_id
  namespace            = "eg"
  stage                = "dev"
  schedule             = "cron(0 3 * * ? *)"
}
```

Indexes are expected to be in the format `name-date` where `date` is in the format specified by `var.index_format`.
By default, all indexes except for the ones added by Kibana will be deleted based on the date part of the full
index name. The actual creation date of the index is not used.

Index matching is done with unanchored regular expressions, so `bar` matches the index `foobarbaz`.

- If the full index name, including the date part, matches `skip_index_re`, then the index will be skipped (never deleted).
Kibana indexes are skipped by the default `skip_index_re` of `^\.kibana*` so if you specify a value for `skip_index_re`
you must include the Kibana exception in your regex if you want it excepted. (Since Kibana indexes do not have a
date part, this module should not delete them, but will complain about them having malformed dates if they are not excluded.)
- If the index name without the trailing `-date` part matches `index_re`, then it will be cleaned up according to the date part.

Keep in mind that, fundamentally, this module expects indexes to be in the format of `name-date` so it will not work
properly if the regexes end up selecting an index that does not end with `-date`. To avoid edge cases, it is wise not
to include dashes in your index name or date format.
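
As an illustration of how these inputs interact, here is a hedged sketch, reusing the module block from the Usage example (all values are illustrative, and `module.subnets` is a hypothetical subnets module), that cleans up only `logstash-<date>` indexes older than 30 days while keeping the default Kibana exclusion:

```hcl
module "elasticsearch_cleanup" {
  source               = "https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup.git?ref=master"
  es_endpoint          = module.elasticsearch.domain_endpoint
  es_domain_arn        = module.elasticsearch.domain_arn
  es_security_group_id = module.elasticsearch.security_group_id
  vpc_id               = module.vpc.vpc_id
  subnet_ids           = module.subnets.private_subnet_ids # hypothetical subnets module
  namespace            = "eg"
  stage                = "dev"

  # Clean up only indexes whose name (without the trailing -date) starts with "logstash"
  index_re = "^logstash"

  # Explicitly keep the default Kibana exclusion; use "^$" to skip nothing
  skip_index_re = "^\\.kibana*"

  # Layout of the date suffix, e.g. logstash-2024.01.31
  index_format = "%Y.%m.%d"

  # Delete matching indexes older than 30 days
  delete_after = 30
}
```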

## Migration

Prior to version 0.10.0, this module had inputs `index`, which was a comma-separated list of index names or the
special name "all" to indicate all but Kibana indexes, and `index_regex`, which was a regular expression for parsing
index name and date parts. There was no mechanism for specifying a list of indexes to exclude.
Starting with version 0.10.0 this module drops those inputs and instead takes `index_re` and `skip_index_re`,
both of which are regular expressions. (You probably want to anchor your regexes to the beginning of the index name
by starting with `^`).

| If you previously had | Now use |
|----------------------|----------|
|`index = "all"`| Default values for `index_re` and `skip_index_re`|
|`index = "a,xb,c0"` | `index_re = "^(a\|xb\|c0)"` and `skip_index_re = "^$"`|
|`index_regex = "(ipat)-(dpat)"`|`index_re = "ipat"` and be sure `index_format` is correct for your date format|
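
As a concrete sketch of the second row (the index names are illustrative and the other inputs are abbreviated):

```hcl
module "elasticsearch_cleanup" {
  source = "https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup.git?ref=master"
  # ...other inputs as in the Usage example...

  # Before 0.10.0 this was expressed as: index = "a,xb,c0"
  index_re = "^(a|xb|c0)"

  # Skip nothing; the old explicit list had no exclusion mechanism
  skip_index_re = "^$"
}
```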

> [!IMPORTANT]
> In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation
> and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version
> you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic
> approach for updating versions to avoid unexpected changes.
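
For example, a hedged sketch of pinning this module to an exact release via the `ref` query parameter; the tag shown is illustrative, so substitute the release you have tested:

```hcl
module "elasticsearch_cleanup" {
  # Pin to an exact release tag rather than the moving `master` branch.
  # The tag below is illustrative; use the version you have validated.
  source = "https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup.git?ref=0.14.0"

  # ...remaining inputs as in the Usage example above...
}
```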

## Makefile Targets
```text
Available targets:

  build           Build Lambda function zip
  dependencies    Install dependencies
  help            Help screen
  help/all        Display help for all targets
  help/short      This help short screen
  lint            Lint terraform code

```
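
For example, assuming GNU Make and a local checkout of this repository, a typical sequence using the targets listed above might be:

```text
make dependencies   # install build dependencies
make build          # build the Lambda function zip
make lint           # lint the Terraform code
```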

## Module: cloudposse/terraform-aws-lambda-elasticsearch-cleanup

This module creates a scheduled Lambda function which will delete old
Elasticsearch indexes using SigV4Auth authentication. The lambda
function can optionally send output to an SNS topic if the topic ARN
is given.

## Requirements

| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.0.0 |
| [aws](#requirement\_aws) | >= 3.0 |
| [null](#requirement\_null) | >= 3.0 |

## Providers

| Name | Version |
|------|---------|
| [aws](#provider\_aws) | >= 3.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| [artifact](#module\_artifact) | cloudposse/module-artifact/external | 0.8.0 |
| [label](#module\_label) | cloudposse/label/null | 0.25.0 |
| [this](#module\_this) | cloudposse/label/null | 0.25.0 |

## Resources

| Name | Type |
|------|------|
| [aws_cloudwatch_event_rule.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_rule) | resource |
| [aws_cloudwatch_event_target.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) | resource |
| [aws_iam_role.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role_policy.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource |
| [aws_iam_role_policy_attachment.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_lambda_function.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function) | resource |
| [aws_lambda_permission.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_permission) | resource |
| [aws_security_group.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group_rule.egress_from_lambda_to_es_cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.ingress_to_es_cluster_from_lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.tcp_dns_egress_from_lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.udp_dns_egress_from_lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_iam_policy_document.assume_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.es_logs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.sns](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.<br>This is for some rare cases where resources want additional configuration of tags<br>and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
| [artifact\_git\_ref](#input\_artifact\_git\_ref) | Git ref of the lambda artifact to use. Use latest version if null. | `string` | `""` | no |
| [artifact\_url](#input\_artifact\_url) | URL template for the remote artifact | `string` | `"https://artifacts.cloudposse.com/$${module_name}/$${git_ref}/$${filename}"` | no |
| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,<br>in the order they appear in the list. New attributes are appended to the<br>end of the list. The elements of the list are joined by the `delimiter`<br>and treated as a single ID element. | `list(string)` | `[]` | no |
| [context](#input\_context) | Single object for setting entire context at once.<br>See description of individual variables for details.<br>Leave string and numeric variables as `null` to use default value.<br>Individual variable settings (non-null) override settings in context object,<br>except for attributes, tags, and additional\_tag\_map, which are merged. | `any` | <pre>{<br>  "additional_tag_map": {},<br>  "attributes": [],<br>  "delimiter": null,<br>  "descriptor_formats": {},<br>  "enabled": true,<br>  "environment": null,<br>  "id_length_limit": null,<br>  "label_key_case": null,<br>  "label_order": [],<br>  "label_value_case": null,<br>  "labels_as_tags": [<br>    "unset"<br>  ],<br>  "name": null,<br>  "namespace": null,<br>  "regex_replace_chars": null,<br>  "stage": null,<br>  "tags": {},<br>  "tenant": null<br>}</pre> | no |
| [delete\_after](#input\_delete\_after) | Number of days to preserve | `number` | `15` | no |
| [delimiter](#input\_delimiter) | Delimiter to be used between ID elements.<br>Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
| [descriptor\_formats](#input\_descriptor\_formats) | Describe additional descriptors to be output in the `descriptors` output map.<br>Map of maps. Keys are names of descriptors. Values are maps of the form<br>`{<br>  format = string<br>  labels = list(string)<br>}`<br>(Type is `any` so the map values can later be enhanced to provide additional options.)<br>`format` is a Terraform format string to be passed to the `format()` function.<br>`labels` is a list of labels, in order, to pass to `format()` function.<br>Label values will be normalized before being passed to `format()` so they will be<br>identical to how they appear in `id`.<br>Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
| [enabled](#input\_enabled) | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
| [environment](#input\_environment) | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
| [es\_domain\_arn](#input\_es\_domain\_arn) | The Elasticsearch domain ARN | `string` | n/a | yes |
| [es\_endpoint](#input\_es\_endpoint) | The Elasticsearch endpoint for the Lambda function to connect to | `string` | n/a | yes |
| [es\_security\_group\_id](#input\_es\_security\_group\_id) | The Elasticsearch cluster security group ID | `string` | n/a | yes |
| [id\_length\_limit](#input\_id\_length\_limit) | Limit `id` to this many characters (minimum 6).<br>Set to `0` for unlimited length.<br>Set to `null` to keep the existing setting, which defaults to `0`.<br>Does not affect `id_full`. | `number` | `null` | no |
| [index\_format](#input\_index\_format) | Combined with 'index' variable and is used to evaluate the index age | `string` | `"%Y.%m.%d"` | no |
| [index\_re](#input\_index\_re) | Regular Expression that matches the index names to clean up (not including trailing dash and date) | `string` | `".*"` | no |
| [label\_key\_case](#input\_label\_key\_case) | Controls the letter case of the `tags` keys (label names) for tags generated by this module.<br>Does not affect keys of tags passed in via the `tags` input.<br>Possible values: `lower`, `title`, `upper`.<br>Default value: `title`. | `string` | `null` | no |
| [label\_order](#input\_label\_order) | The order in which the labels (ID elements) appear in the `id`.<br>Defaults to ["namespace", "environment", "stage", "name", "attributes"].<br>You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
| [label\_value\_case](#input\_label\_value\_case) | Controls the letter case of ID elements (labels) as included in `id`,<br>set as tag values, and output by this module individually.<br>Does not affect values of tags passed in via the `tags` input.<br>Possible values: `lower`, `title`, `upper` and `none` (no transformation).<br>Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.<br>Default value: `lower`. | `string` | `null` | no |
| [labels\_as\_tags](#input\_labels\_as\_tags) | Set of labels (ID elements) to include as tags in the `tags` output.<br>Default is to include all labels.<br>Tags with empty values will not be included in the `tags` output.<br>Set to `[]` to suppress all generated tags.<br>**Notes:**<br>The value of the `name` tag, if included, will be the `id`, not the `name`.<br>Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be<br>changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | <pre>[<br>  "default"<br>]</pre> | no |
| [name](#input\_name) | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.<br>This is the only ID element not also included as a `tag`.<br>The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
| [namespace](#input\_namespace) | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
| [python\_version](#input\_python\_version) | The Python version to use | `string` | `"3.12"` | no |
| [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string.<br>Characters matching the regex will be removed from the ID elements.<br>If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
| [schedule](#input\_schedule) | CloudWatch Events rule schedule using cron or rate expression | `string` | `"cron(0 3 * * ? *)"` | no |
| [skip\_index\_re](#input\_skip\_index\_re) | Regular Expression that matches the index names to ignore (not clean up). Takes precedence over `index_re`.<br>BY DEFAULT (when value is `null`), a pattern is used to exclude Kibana indexes.<br>Use `"^$"` if you do not want to skip any indexes. Include an exclusion for `kibana` if you<br>want to use a custom value and also exclude the kibana indexes. | `string` | `null` | no |
| [sns\_arn](#input\_sns\_arn) | SNS ARN to publish alerts | `string` | `""` | no |
| [stage](#input\_stage) | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
| [subnet\_ids](#input\_subnet\_ids) | Subnet IDs | `list(string)` | n/a | yes |
| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).<br>Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| [timeout](#input\_timeout) | Timeout for Lambda function in seconds | `number` | `300` | no |
| [vpc\_id](#input\_vpc\_id) | The VPC ID for the Lambda function | `string` | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| [lambda\_function\_arn](#output\_lambda\_function\_arn) | ARN of the Lambda Function |
| [lambda\_function\_source\_code\_size](#output\_lambda\_function\_source\_code\_size) | The size in bytes of the function .zip file |
| [security\_group\_id](#output\_security\_group\_id) | Security Group ID of the Lambda Function |
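
For instance, a small sketch of re-exporting the module's outputs from your root configuration; the output names come from the table above, while the wrapper output names are arbitrary:

```hcl
# Illustrative only: surface the module's outputs from your root module
output "es_cleanup_lambda_arn" {
  description = "ARN of the scheduled Elasticsearch cleanup Lambda"
  value       = module.elasticsearch_cleanup.lambda_function_arn
}

output "es_cleanup_security_group_id" {
  description = "Security group attached to the cleanup Lambda"
  value       = module.elasticsearch_cleanup.security_group_id
}
```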

## Related Projects

Check out these related projects.

- [terraform-aws-vpc](https://github.com/cloudposse/terraform-aws-vpc) - Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways
- [terraform-aws-dynamic-subnets](https://github.com/cloudposse/terraform-aws-dynamic-subnets) - Terraform module for dynamic subnets provisioning.
- [terraform-aws-elasticsearch](https://github.com/cloudposse/terraform-aws-elasticsearch) - Terraform module for AWS Elasticsearch provisioning.

> [!TIP]
> #### Use Terraform Reference Architectures for AWS
>
> Use Cloud Posse's ready-to-go [terraform architecture blueprints](https://cloudposse.com/reference-architecture/) for AWS to get up and running quickly.
>
> ✅ We build it together with your team.
> ✅ Your team owns everything.
> ✅ 100% Open Source and backed by fanatical support.

> Cloud Posse is the leading [**DevOps Accelerator**](https://cpco.io/commercial-support?utm_source=github&utm_medium=readme&utm_campaign=cloudposse/terraform-aws-lambda-elasticsearch-cleanup&utm_content=commercial_support) for funded startups and enterprises.
>
> *Your team can operate like a pro today.*
>
> Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.
>
> #### Day-0: Your Foundation for Success
>
> - **Reference Architecture.** You'll get everything you need from the ground up built using 100% infrastructure as code.
> - **Deployment Strategy.** Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
> - **Site Reliability Engineering.** Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
> - **Security Baseline.** Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
> - **GitOps.** Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.
>
> #### Day-2: Your Operational Mastery
>
> - **Training.** Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
> - **Support.** Benefit from seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
> - **Troubleshooting.** Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
> - **Code Reviews.** Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
> - **Bug Fixes.** Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
> - **Migration Assistance.** Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
> - **Customer Workshops.** Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.
## ✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our [outstanding contributors](https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/graphs/contributors)!

For πŸ› bug reports & feature requests, please use the [issue tracker](https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/issues).

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
1. Review our [Code of Conduct](https://github.com/cloudposse/terraform-aws-lambda-elasticsearch-cleanup/?tab=coc-ov-file#code-of-conduct) and [Contributor Guidelines](https://github.com/cloudposse/.github/blob/main/CONTRIBUTING.md).
2. **Fork** the repo on GitHub
3. **Clone** the project to your own machine
4. **Commit** changes to your own branch
5. **Push** your work back up to your fork
6. Submit a **Pull Request** so that we can review your changes

**NOTE:** Be sure to merge the latest changes from "upstream" before making a pull request!

### 🌎 Slack Community

Join our [Open Source Community](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse/terraform-aws-lambda-elasticsearch-cleanup&utm_content=slack) on Slack. It's **FREE** for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally *sweet* infrastructure.

### 📰 Newsletter

Sign up for [our newsletter](https://cpco.io/newsletter?utm_source=github&utm_medium=readme&utm_campaign=cloudposse/terraform-aws-lambda-elasticsearch-cleanup&utm_content=newsletter) and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know.
Dropped straight into your Inbox every week, and usually a 5-minute read.

### 📆 Office Hours

[Join us every Wednesday via Zoom](https://cloudposse.com/office-hours?utm_source=github&utm_medium=readme&utm_campaign=cloudposse/terraform-aws-lambda-elasticsearch-cleanup&utm_content=office_hours) for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a _live Q&A_ that you can't find anywhere else.
It's **FREE** for everyone!

## License

Complete license is available in the [`LICENSE`](LICENSE) file. The preamble to the Apache License, Version 2.0 follows.

```text
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
```

## Trademarks

All other trademarks referenced herein are the property of their respective owners.

---
Copyright Β© 2017-2024 [Cloud Posse, LLC](https://cpco.io/copyright)
