https://github.com/chrispsheehan/aws-serverless-github-deploy

Terraform + GitHub Actions for AWS serverless: Lambda + API Gateway with CodeDeploy blue/green (all-at-once/canary/linear) and provisioned concurrency (none/fixed/autoscaled), shipped via OIDC + Just recipes.
apigateway aws-serverless cicd codedeploy lambda terraform terragrunt


# aws-serverless-github-deploy

**Terraform + GitHub Actions for AWS serverless deployments.**
Lambda + API Gateway with CodeDeploy rollouts and provisioned concurrency controls — driven by clean module variables and `just` recipes.

---

## 🚀 set up roles for ci

```sh
just tg ci aws/oidc apply
just tg dev aws/oidc apply
just tg prod aws/oidc apply
```

## đŸ› ī¸ local plan some infra

Given a terragrunt file is found at `infra/live/dev/aws/api/terragrunt.hcl`

```sh
just tg dev aws/api plan
```

## âš™ī¸ types of lambda provisioned concurrency

```hcl
module "lambda_example" {
  source = "../lambda"
  # ...
  provisioned_config = var.your_provisioned_config
}
```

#### ✅ [default] No provisioned lambdas
- use case: background processes
- we can tolerate an initial cold-start lag while the lambda boots
```hcl
provisioned_config = {
  fixed                = 0
  reserved_concurrency = 2 # only allow 2 concurrent executions; also serves as a limit to avoid throttling
}
```

#### 🔒 Fixed number of provisioned lambdas
- use case: high, predictable usage
- we never want cold-start lag and can predict traffic
```hcl
provisioned_config = {
  fixed                = 10
  reserved_concurrency = 50
}
```
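Under the hood, a fixed provisioned count and a reserved-concurrency cap map onto standard Terraform AWS provider resources. A sketch of the likely shape — resource names and the alias wiring here are assumptions:

```hcl
# Sketch: how fixed = 10 / reserved_concurrency = 50 could map to provider resources
resource "aws_lambda_function" "this" {
  function_name                  = "example"
  role                           = aws_iam_role.lambda.arn # assumed role resource
  handler                        = "index.handler"
  runtime                        = "nodejs20.x"
  filename                       = "lambda.zip"
  reserved_concurrent_executions = 50 # reserved_concurrency
}

resource "aws_lambda_provisioned_concurrency_config" "this" {
  function_name                     = aws_lambda_function.this.function_name
  qualifier                         = aws_lambda_alias.live.name # assumed alias
  provisioned_concurrent_executions = 10 # fixed
}
```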

#### 📈 Scale provisioning when usage exceeds a % tolerance
- use case: reacting to traffic, e.g. an api backend
- limit cost with `auto_scale.max`
- ensure minimal provisioned concurrency (no cold starts) with `auto_scale.min`
- set `trigger_percent` to the utilisation threshold: the config below scales out when 70% of provisioned concurrency is in use, adding more to meet demand
- set `cool_down_seconds` to a reasonable wait before the system reacts again
```hcl
provisioned_config = {
  auto_scale = {
    max               = 3
    min               = 1
    trigger_percent   = 70
    cool_down_seconds = 60
  }
}
```
- before scaling, the lambda alias matches the minimum value
![lambda config before scaling](docs/lambda-config-before.png)
- when the trigger percent is exceeded, the lambda moves into an `In progress (1/2)` state as an additional provisioned lambda is added
![lambda scaling up](docs/lamba-scaling-up.png)
- after scaling, the lambda alias shows an additional provisioned lambda
![lambda config after scaling](docs/lambda-config-after.png)
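Autoscaled provisioned concurrency like this is typically wired up with Application Auto Scaling tracking the `LambdaProvisionedConcurrencyUtilization` metric. A hedged sketch of what that wiring might look like — resource names and the alias are assumptions, and note the target value for this metric is a 0–1 ratio, so `trigger_percent = 70` becomes `0.7`:

```hcl
resource "aws_appautoscaling_target" "lambda" {
  min_capacity       = 1 # auto_scale.min
  max_capacity       = 3 # auto_scale.max
  resource_id        = "function:example:live" # function:<name>:<alias>, assumed
  scalable_dimension = "lambda:function:ProvisionedConcurrency"
  service_namespace  = "lambda"
}

resource "aws_appautoscaling_policy" "lambda" {
  name               = "provisioned-concurrency-utilization"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.lambda.resource_id
  scalable_dimension = aws_appautoscaling_target.lambda.scalable_dimension
  service_namespace  = aws_appautoscaling_target.lambda.service_namespace

  target_tracking_scaling_policy_configuration {
    target_value       = 0.7 # trigger_percent / 100
    scale_in_cooldown  = 60  # cool_down_seconds
    scale_out_cooldown = 60

    predefined_metric_specification {
      predefined_metric_type = "LambdaProvisionedConcurrencyUtilization"
    }
  }
}
```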

## đŸšĻ types of lambda deploy

```hcl
module "lambda_example" {
  source = "../_shared/lambda"
  # ...
  deployment_config = var.your_deployment_config
}
```

#### ⚡ [default] All at once (fastest):

- use case: background processes
```hcl
deployment_config = {
  strategy = "all_at_once"
}
```

#### 🐤 canary deployment:

- use case: an api or service serving traffic
- shifts 10% of traffic to the new version and rolls back if errors are detected; otherwise shifts the remaining traffic
- waits 1 minute before evaluating health
```hcl
deployment_config = {
  strategy         = "canary"
  percentage       = 10
  interval_minutes = 1
}
```

#### đŸ“ļ linear deployment:

- use case: an api or service serving traffic
- shifts traffic in 10% increments, checking lambda health and rolling back if errors are detected
- each increment is 1 minute apart
```hcl
deployment_config = {
  strategy         = "linear"
  percentage       = 10
  interval_minutes = 1
}
```
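For Lambda, CodeDeploy expresses these strategies as deployment configs: all-at-once maps to the predefined `CodeDeployDefault.LambdaAllAtOnce`, while arbitrary percentage/interval combinations generally require a custom `aws_codedeploy_deployment_config`. A sketch of the canary case — the resource and config names here are assumptions:

```hcl
# Custom canary config: shift 10% of traffic, wait 1 minute, then shift the rest
resource "aws_codedeploy_deployment_config" "canary" {
  deployment_config_name = "canary-10-percent-1-minute" # hypothetical name
  compute_platform       = "Lambda"

  traffic_routing_config {
    type = "TimeBasedCanary"

    time_based_canary {
      percentage = 10 # deployment_config.percentage
      interval   = 1  # deployment_config.interval_minutes
    }
  }
}
```

A linear strategy would use `type = "TimeBasedLinear"` with a `time_based_linear` block instead.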

## đŸ”Ĩâ†Šī¸ deployment roll-back

- use cloudwatch metrics and alarms to automatically roll-back a deployment
- create a [cloudwatch_metric_alarm](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_metric_alarm) resource and pass in as per below

```hcl
module "lambda_example" {
  source = "../_shared/lambda"
  # ...
  codedeploy_alarm_names = [
    local.api_5xx_alarm_name
  ]
}
```
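As an illustration, a 5xx alarm on an API Gateway stage might look like the following. The metric namespace and name shown are for a REST API; the alarm name, API name, and stage are assumptions:

```hcl
resource "aws_cloudwatch_metric_alarm" "api_5xx" {
  alarm_name          = "example-api-5xx" # hypothetical name
  namespace           = "AWS/ApiGateway"
  metric_name         = "5XXError"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"

  dimensions = {
    ApiName = "example-api" # assumed API name
    Stage   = "prod"        # assumed stage
  }
}
```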
- if the alarm fires during a deployment, the CI output will look like the following:

```
đŸ“Ļ Running: lambda-deploy
🚀 Started deployment: d-40UUQH3DF
Attempt 1: Deployment status is InProgress
Attempt 2: Deployment status is InProgress
Attempt 3: Deployment status is InProgress
Attempt 4: Deployment status is InProgress
Attempt 5: Deployment status is Stopped
❌ Deployment d-40UUQH3DF failed or was stopped.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| GetDeployment |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ErrorCode | ALARM_ACTIVE |
| ErrorMessage| One or more alarms have been activated according to the Amazon CloudWatch metrics you selected, and the affected deployments have been stopped. Activated alarms: |
| Status | Stopped |
+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
error: Recipe `lambda-deploy` failed with exit code 1
Error: Process completed with exit code 1.

```