https://github.com/sourcefuse/terraform-aws-arc-s3
terraform module for provisioning s3 buckets
- Host: GitHub
- URL: https://github.com/sourcefuse/terraform-aws-arc-s3
- Owner: sourcefuse
- License: apache-2.0
- Created: 2024-06-06T11:02:14.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-05-08T10:16:25.000Z (9 months ago)
- Last Synced: 2025-08-01T05:55:58.978Z (6 months ago)
- Language: HCL
- Homepage:
- Size: 4.23 MB
- Stars: 0
- Watchers: 5
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README

# [terraform-aws-arc-s3](https://github.com/sourcefuse/terraform-aws-arc-s3)
 
[SonarCloud](https://sonarcloud.io/summary/new_code?id=sourcefuse_terraform-aws-arc-s3)
[Snyk](https://github.com/sourcefuse/terraform-aws-arc-s3/actions/workflows/snyk.yaml)
## Overview
SourceFuse AWS Reference Architecture (ARC) Terraform module for managing Amazon S3 buckets.
## Features
- Manages S3 Buckets: Handles the creation, deletion, and maintenance of Amazon S3 (Simple Storage Service) buckets, which are containers for storing data in the cloud.
- Supports Lifecycle Rules: Enables the setup and management of lifecycle rules that automate the transition of data between different storage classes and the deletion of objects after a specified period.
- Configurable Bucket Policies and Access Controls: Allows for the configuration of bucket policies and access control lists (ACLs) to define permissions and manage access to the data stored in S3 buckets, ensuring data security and compliance.
- Supports CORS and Website Configurations: Provides support for Cross-Origin Resource Sharing (CORS) configurations to manage cross-origin requests to the bucket's resources, and allows for configuring the bucket to host static websites, including setting index and error documents.
- Cross-Region Replication: Facilitates the automatic, asynchronous copying of objects across different AWS regions to enhance data availability, disaster recovery, and data compliance requirements.
## Introduction
This is SourceFuse's AWS Reference Architecture (ARC) Terraform module for managing Amazon S3 buckets. It simplifies the creation, configuration, and management of S3 buckets by providing a set of predefined settings and options. The module supports advanced features such as bucket policies, access control lists (ACLs), lifecycle rules, and versioning. It also includes support for configuring Cross-Origin Resource Sharing (CORS) and cross-region replication for enhanced data availability and resilience. By leveraging this module, users can ensure consistent, secure, and efficient management of their S3 resources within an infrastructure-as-code (IaC) framework.
## Usage
To see a full example, check out the [main.tf](https://github.com/sourcefuse/terraform-aws-arc-s3/blob/feature/fix-docs/examples/simple/main.tf) file in the example folder.
```hcl
module "s3" {
  source  = "sourcefuse/arc-s3/aws"
  version = "0.0.1"

  name             = var.name
  acl              = var.acl
  lifecycle_config = local.lifecycle_config
  tags             = module.tags.tags
}
```
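The `local.lifecycle_config` referenced above is not defined in this snippet; a minimal sketch matching the `lifecycle_config` input's schema (the rule id and day count here are illustrative assumptions, not values from this repository) might look like:

```hcl
locals {
  lifecycle_config = {
    enabled = true
    rules = [
      {
        id = "expire-old-objects" # hypothetical rule id
        expiration = {
          days = 365 # assumption: expire objects after one year
        }
      }
    ]
  }
}
```

Optional rule attributes such as `transition`, `noncurrent_version_expiration`, and `filter` can be omitted and fall back to their `null` defaults.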
## Requirements
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | ~> 1.3, < 2.0.0 |
| [aws](#requirement\_aws) | ~> 5.0 |
| [random](#requirement\_random) | ~> 3.0 |
## Providers
No providers.
## Modules
| Name | Source | Version |
|------|--------|---------|
| [bucket](#module\_bucket) | ./modules/bucket | n/a |
| [replication](#module\_replication) | ./modules/replication | n/a |
## Resources
No resources.
## Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [acl](#input\_acl) | Please note ACL is deprecated by AWS in favor of bucket policies. Defaults to "private" for backwards compatibility; recommended to set `s3_object_ownership` to "BucketOwnerEnforced" instead. | `string` | `"private"` | no |
| [availability\_zone\_id](#input\_availability\_zone\_id) | The ID of the availability zone. | `string` | `""` | no |
| [bucket\_logging\_data](#input\_bucket\_logging\_data) | (optional) Bucket logging data | <pre>object({<br>  enable        = optional(bool, false)<br>  target_bucket = optional(string, null)<br>  target_prefix = optional(string, null)<br>})</pre> | <pre>{<br>  "enable": false,<br>  "target_bucket": null,<br>  "target_prefix": null<br>}</pre> | no |
| [bucket\_policy\_doc](#input\_bucket\_policy\_doc) | (optional) S3 bucket Policy doc | `string` | `null` | no |
| [cors\_configuration](#input\_cors\_configuration) | List of S3 bucket CORS configurations | <pre>list(object({<br>  id              = optional(string)<br>  allowed_headers = optional(list(string))<br>  allowed_methods = optional(list(string))<br>  allowed_origins = optional(list(string))<br>  expose_headers  = optional(list(string))<br>  max_age_seconds = optional(number)<br>}))</pre> | `[]` | no |
| [create\_bucket](#input\_create\_bucket) | (optional) Whether to create bucket | `bool` | `true` | no |
| [create\_s3\_directory\_bucket](#input\_create\_s3\_directory\_bucket) | Control the creation of the S3 directory bucket. Set to true to create the bucket, false to skip. | `bool` | `false` | no |
| [enable\_versioning](#input\_enable\_versioning) | Whether to enable versioning for the bucket | `bool` | `true` | no |
| [event\_notification\_details](#input\_event\_notification\_details) | (optional) S3 event notification details | <pre>object({<br>  enabled = bool<br>  lambda_list = optional(list(object({<br>    lambda_function_arn = string<br>    events              = optional(list(string), ["s3:ObjectCreated:*"])<br>    filter_prefix       = string<br>    filter_suffix       = string<br>  })), [])<br>  queue_list = optional(list(object({<br>    queue_arn = string<br>    events    = optional(list(string), ["s3:ObjectCreated:*"])<br>  })), [])<br>  topic_list = optional(list(object({<br>    topic_arn = string<br>    events    = optional(list(string), ["s3:ObjectCreated:*"])<br>  })), [])<br>})</pre> | <pre>{<br>  "enabled": false<br>}</pre> | no |
| [force\_destroy](#input\_force\_destroy) | (Optional, Default:false) Boolean that indicates all objects (including any locked objects) should be deleted from the bucket when the bucket is destroyed so that the bucket can be destroyed without error. These objects are not recoverable. This only deletes objects when the bucket is destroyed, not when setting this parameter to true. Once this parameter is set to true, there must be a successful terraform apply run before a destroy is required to update this value in the resource state. Without a successful terraform apply after this parameter is set, this flag will have no effect. If setting this field in the same operation that would require replacing the bucket or destroying the bucket, this flag will not work. Additionally when importing a bucket, a successful terraform apply is required to set this value in state before it will take effect on a destroy operation. | `bool` | `false` | no |
| [lifecycle\_config](#input\_lifecycle\_config) | (optional) S3 Lifecycle configuration | <pre>object({<br>  enabled               = bool<br>  expected_bucket_owner = optional(string, null)<br>  rules = list(object({<br>    id = string<br>    expiration = optional(object({<br>      date                         = optional(string, null)<br>      days                         = optional(string, null)<br>      expired_object_delete_marker = optional(bool, false)<br>    }), null)<br>    transition = optional(object({<br>      date          = string<br>      days          = number<br>      storage_class = string<br>    }), null)<br>    noncurrent_version_expiration = optional(object({<br>      newer_noncurrent_versions = number<br>      noncurrent_days           = number<br>    }), null)<br>    noncurrent_version_transition = optional(object({<br>      newer_noncurrent_versions = number<br>      noncurrent_days           = number<br>      storage_class             = string<br>    }), null)<br>    filter = optional(object({<br>      object_size_greater_than = string<br>      object_size_less_than    = string<br>      prefix                   = string<br>      tags                     = map(string)<br>    }), null)<br>  }))<br>})</pre> | <pre>{<br>  "enabled": false,<br>  "rules": []<br>}</pre> | no |
| [name](#input\_name) | Bucket name. If provided, the bucket will be created with this name instead of generating the name from the context | `string` | n/a | yes |
| [object\_lock\_config](#input\_object\_lock\_config) | (optional) Object Lock configuration | <pre>object({<br>  mode = optional(string, "COMPLIANCE")<br>  days = optional(number, 30)<br>})</pre> | <pre>{<br>  "days": 30,<br>  "mode": "COMPLIANCE"<br>}</pre> | no |
| [object\_lock\_enabled](#input\_object\_lock\_enabled) | (Optional, Forces new resource) Indicates whether this bucket has an Object Lock configuration enabled. Valid values are true or false. This argument is not supported in all regions or partitions. | `string` | `false` | no |
| [object\_ownership](#input\_object\_ownership) | (Optional) Object ownership. Valid values: BucketOwnerPreferred, ObjectWriter, or BucketOwnerEnforced.<br>BucketOwnerPreferred - Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL.<br>ObjectWriter - The uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL.<br>BucketOwnerEnforced - The bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. | `string` | `"BucketOwnerPreferred"` | no |
| [public\_access\_config](#input\_public\_access\_config) | (Optional)<br>block\_public\_acls - Whether Amazon S3 should block public ACLs for this bucket. Enabling this setting does not affect existing policies or ACLs. When set to true, PUT Bucket acl and PUT Object acl calls fail if the specified ACL is public, and PUT Object calls fail if the request includes a public object ACL.<br>block\_public\_policy - Whether Amazon S3 should block public bucket policies for this bucket. Enabling this setting does not affect the existing bucket policy. When set to true, Amazon S3 rejects calls to PUT Bucket policy if the specified bucket policy allows public access.<br>ignore\_public\_acls - Whether Amazon S3 should ignore public ACLs for this bucket. When set to true, Amazon S3 ignores all public ACLs on this bucket and any objects it contains.<br>restrict\_public\_buckets - Whether Amazon S3 should restrict public bucket policies for this bucket. When set to true, access to this bucket is restricted to the bucket owner and AWS services if the bucket has a public policy. | <pre>object({<br>  block_public_acls       = optional(bool, true)<br>  block_public_policy     = optional(bool, true)<br>  ignore_public_acls      = optional(bool, true)<br>  restrict_public_buckets = optional(bool, true)<br>})</pre> | <pre>{<br>  "block_public_acls": true,<br>  "block_public_policy": true,<br>  "ignore_public_acls": true,<br>  "restrict_public_buckets": true<br>}</pre> | no |
| [replication\_config](#input\_replication\_config) | Replication configuration for S3 bucket | <pre>object({<br>  enable    = bool<br>  role_name = optional(string, null) // if null, a new role is created<br>  rules = list(object({<br>    id = optional(string, null) // if null, "${var.source_bucket_name}-rule-index"<br>    filter = optional(list(object({<br>      prefix = optional(string, null)<br>      tags   = optional(map(string), {})<br>    })), [])<br>    delete_marker_replication = optional(string, "Enabled")<br>    source_selection_criteria = optional(object({<br>      replica_modifications = optional(object({<br>        status = optional(string, "Enabled")<br>      }))<br>      kms_key_id = optional(string, null)<br>      sse_kms_encrypted_objects = optional(object({<br>        status = optional(string, "Enabled")<br>      }))<br>    }))<br>    destinations = list(object({<br>      bucket        = string<br>      storage_class = optional(string, "STANDARD")<br>      encryption_configuration = optional(object({<br>        replica_kms_key_id = optional(string, null)<br>      }))<br>    }))<br>  }))<br>})</pre> | <pre>{<br>  "enable": false,<br>  "role_name": null,<br>  "rules": []<br>}</pre> | no |
| [server\_side\_encryption\_config\_data](#input\_server\_side\_encryption\_config\_data) | (optional) S3 encryption details | <pre>object({<br>  bucket_key_enabled = optional(bool, true)<br>  sse_algorithm      = optional(string, "AES256")<br>  kms_master_key_id  = optional(string, null)<br>})</pre> | <pre>{<br>  "bucket_key_enabled": true,<br>  "kms_master_key_id": null,<br>  "sse_algorithm": "AES256"<br>}</pre> | no |
| [tags](#input\_tags) | Tags to assign the resources. | `map(string)` | `{}` | no |
| [transfer\_acceleration\_enabled](#input\_transfer\_acceleration\_enabled) | (optional) Whether to enable Transfer Acceleration | `bool` | `false` | no |
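As an illustration of how the structured inputs above fit together, a bucket with cross-region replication enabled might be configured along these lines (the bucket names and destination ARN are hypothetical, not values from this repository):

```hcl
module "s3" {
  source  = "sourcefuse/arc-s3/aws"
  version = "0.0.1"

  name              = "example-source-bucket" # hypothetical bucket name
  enable_versioning = true                    # versioning is required for replication

  replication_config = {
    enable    = true
    role_name = null # null lets the module create the replication role
    rules = [
      {
        id = "replicate-all" # hypothetical rule id
        destinations = [
          {
            bucket        = "arn:aws:s3:::example-destination-bucket" # hypothetical destination
            storage_class = "STANDARD_IA"
          }
        ]
      }
    ]
  }

  tags = { Environment = "example" }
}
```

Optional rule attributes such as `filter` and `source_selection_criteria` can be omitted and fall back to their defaults.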
## Outputs
| Name | Description |
|------|-------------|
| [bucket\_arn](#output\_bucket\_arn) | Bucket ARN |
| [bucket\_id](#output\_bucket\_id) | Bucket ID or Name |
| [destination\_buckets](#output\_destination\_buckets) | n/a |
| [role\_arn](#output\_role\_arn) | Role used for S3 replication |
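The outputs above can be consumed by other resources in the calling configuration; for instance, `bucket_arn` can feed an IAM policy (the read-only policy shown is a hypothetical illustration, not part of this module):

```hcl
# Hypothetical read-only policy granting access to the module's bucket.
data "aws_iam_policy_document" "read_only" {
  statement {
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      module.s3.bucket_arn,        # the bucket itself (for ListBucket)
      "${module.s3.bucket_arn}/*", # the objects inside it (for GetObject)
    ]
  }
}
```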
## Development
### Prerequisites
- [terraform](https://learn.hashicorp.com/terraform/getting-started/install#installing-terraform)
- [terraform-docs](https://github.com/segmentio/terraform-docs)
- [pre-commit](https://pre-commit.com/#install)
- [golang](https://golang.org/doc/install#install)
- [golint](https://github.com/golang/lint#installation)
### Configurations
- Configure pre-commit hooks
```sh
pre-commit install
```
### Versioning
While contributing or committing, please indicate in your commit message whether the change is major, minor, or patch.
For example:
```sh
git commit -m "your commit message #major"
```
By specifying this, the version will be bumped accordingly; if you don't specify it in your commit message, the change is treated as a patch by default and the patch version is bumped.
### Tests
- Tests are available in the `test` directory
- Configure the dependencies
```sh
cd test/
go mod init github.com/sourcefuse/terraform-aws-refarch-
go get github.com/gruntwork-io/terratest/modules/terraform
```
- Now execute the tests
```sh
go test -timeout 30m
```
## Authors
This project is authored by:
- SourceFuse ARC Team