# ECS deploy
ECS Deploy is a REST API server written in Go that can be used to deploy services on ECS from anywhere. It is typically executed as part of your deployment pipeline. Continuous Integration software (like Jenkins, CircleCI, Bitbucket, or others) often lacks proper integration with ECS. This API server can be deployed on ECS and then used to provide continuous deployment to ECS.

* Registers services in DynamoDB
* Creates ECR repository
* Creates necessary IAM roles
* Creates ALB target and listener rules
* Creates and updates ECS Services based on json/yaml input
* SAML-supported Web UI to redeploy/rollback versions, add/update/delete parameters, examine event/container logs, scale, and run manual tasks
* Supports scaling ECS Container Instances out and in

## The UI

*(Screenshots of the web UI.)*

## Installation

### Download

You can download ecs-deploy and ecs-client from the [releases page](https://github.com/in4it/ecs-deploy/releases) or you can use the [image from dockerhub](https://hub.docker.com/r/in4it/ecs-deploy/).
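
For example, pulling the image from Docker Hub looks like this (the tag is an assumption; pin a specific release in practice):

```
docker pull in4it/ecs-deploy:latest
```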

### Bootstrap ECS cluster

You can bootstrap a new ECS cluster using the ecs-deploy binary. Make sure you have downloaded the ecs-deploy and ecs-client binaries for your operating system from the [releases page](https://github.com/in4it/ecs-deploy/releases).

The bootstrap command will create an autoscaling group, an Application Load Balancer, an IAM role for the ECS EC2 instances, and the ECS cluster itself.

Create an SSH key for the EC2 instances, for example with the following command:
```
ssh-keygen -f ~/.ssh/mykey
```

Then run ecs-deploy with the bootstrap option. To see all available flags, run `./ecs-deploy -h`:
```
./ecs-deploy --bootstrap \
--ecs-subnets subnet-123456 \
--ecs-vpc-id vpc-123456 \
--cloudwatch-logs-enabled \
--cloudwatch-logs-prefix mycluster \
--cluster-name mycluster \
--ecs-desired-size 1 \
--ecs-max-size 1 \
--ecs-min-size 1 \
--environment staging \
--instance-type t2.micro \
--key-name mykey \
--loadbalancer-domain ecs-deploy.in4it.io \
--paramstore-enabled \
--paramstore-prefix mycluster \
--profile your-aws-profile \
--region your-aws-region
```

If you want to delete the cluster, you can run the same command, specifying `--delete-cluster` instead:
```
./ecs-deploy --delete-cluster mycluster \
--profile your-aws-profile \
--region your-aws-region
```

### Bootstrap with terraform
Alternatively, you can use Terraform to deploy the ECS cluster. See [terraform/README.md](https://github.com/in4it/ecs-deploy/blob/master/terraform/README.md) for a Terraform module that spins up an ECS cluster.
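
Once you have referenced the module from your own Terraform configuration (its inputs are documented in terraform/README.md), the workflow is the standard one; the steps below are a sketch rather than the module's documented usage:

```
terraform init    # download providers and the ecs-deploy module
terraform plan    # review the resources that will be created
terraform apply   # create the ECS cluster and supporting resources
```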

### Deploy a service to the ECS cluster

To deploy the examples (an nginx server and an echoserver), use ecs-client.

Login interactively:
```
./ecs-client login --url http://yourdomain/ecs-deploy
```

Login with environment variables:
```
ECS_DEPLOY_LOGIN=deploy ECS_DEPLOY_PASSWORD=password ./ecs-client login --url http://yourdomain/ecs-deploy
```

Deploy:
```
./ecs-client deploy -f examples/services/multiple-services/multiple-services.yaml
```

## Configuration (Environment variables)

The environment variables are read from the parameter store. This is enabled with the `--paramstore-enabled` flag during bootstrap.
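
As a sketch, you can set such a variable with the AWS CLI; the parameter path below assumes the `/prefix/servicename/variable` layout described in the configuration table, a prefix of `mycluster`, and `ecs-deploy` as the service name:

```
aws ssm put-parameter \
  --name "/mycluster/ecs-deploy/JWT_SECRET" \
  --type SecureString \
  --value "your-secret" \
  --region your-aws-region
```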

### AWS-specific variables

* AWS\_REGION=region # mandatory

### Authentication variables
* JWT\_SECRET=secret # mandatory
* DEPLOY\_PASSWORD=deploy # mandatory
* DEVELOPER\_PASSWORD=developer # mandatory

### Service-specific variables
These will be used when deploying services:

* AWS\_ACCOUNT\_ENV=dev|staging|testing|qa|prod
* PARAMSTORE\_ENABLED=yes
* PARAMSTORE\_PREFIX=mycompany
* PARAMSTORE\_KMS\_ARN=
* CLOUDWATCH\_LOGS\_ENABLED=yes
* CLOUDWATCH\_LOGS\_PREFIX=mycompany
* LOADBALANCER\_DOMAIN=mycompany.com

### DynamoDB specific variables
* DYNAMODB\_TABLE=Services

### ECR

* ECR\_SCAN\_ON\_PUSH=true

### SAML

SAML can be enabled using the following environment variables:
* SAML\_ENABLED=yes
* SAML\_ACS\_URL=https://mycompany.com/url-prefix
* SAML\_CERTIFICATE=contents of your certificate
* SAML\_PRIVATE\_KEY=contents of your private key
* SAML\_METADATA\_URL=https://identity-provider/metadata.xml

To create a new key and certificate, the following openssl command can be used:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 3650 -nodes -subj "/CN=myservice.mycompany.com"
```
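
These SAML variables expect the file contents, not a path. One way to populate them in a shell session is shown below; this is a sketch, and how you actually inject the values depends on your task definition or parameter store setup:

```
export SAML_CERTIFICATE="$(cat myservice.cert)"
export SAML_PRIVATE_KEY="$(cat myservice.key)"
```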

# Web UI

* PARAMSTORE\_ASSUME\_ROLE=arn # role ARN to assume when querying the parameter store

# Autoscaling (down and up)

## Setup

* Create an SNS topic and add an HTTPS subscriber with URL https://your-domain.com/ecs-deploy/webhook (see the CLI sketch after this list)
* Create a [CloudWatch Event for ECS tasks/services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch_event_stream.html)
* Create an [EC2 Auto Scaling Lifecycle hook](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html), and a CloudWatch event to capture the Lifecycle hook
* Set the SNS topic as the target of the CloudWatch Events rules
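
A minimal sketch of the SNS part with the AWS CLI (the topic name, account ID, region, and domain are placeholders):

```
aws sns create-topic --name ecs-deploy-events
aws sns subscribe \
  --topic-arn arn:aws:sns:your-aws-region:123456789012:ecs-deploy-events \
  --protocol https \
  --notification-endpoint https://your-domain.com/ecs-deploy/webhook
```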

## Usage

* Autoscaling (up) will be triggered when the largest container (with respect to memory/CPU) cannot be scheduled on the cluster
* Autoscaling (down) will be triggered when there is enough capacity available on the cluster to remove an instance (instance size + largest container + buffer)

## Configuration

The defaults are set for the most common use cases, but can be changed by setting environment variables:

| Environment variable | Default value | Description |
| --------------------- | ------------- | ----------- |
| PARAMSTORE\_ENABLED | no | Use "yes" to enable the parameter store. |
| PARAMSTORE\_PREFIX | "" | Prefix to use for the parameter store. mycompany will result in /mycompany/servicename/variable |
| PARAMSTORE\_KMS\_ARN | "" | Specify a KMS ARN to encrypt/decrypt variables |
| PARAMSTORE\_INJECT | no | Use "yes" to enable injection of secrets into the task definition |
| AUTOSCALING\_STRATEGIES | LargestContainerUp,LargestContainerDown | List of autoscaling strategies to apply. See below for different types |
| AUTOSCALING\_DOWN\_STRATEGY | gracefully | Only "gracefully" is supported for now (uses interval and period checks before executing the scale-down operation) |
| AUTOSCALING\_UP\_STRATEGY | immediately | Scale-up strategy (immediately, gracefully) |
| AUTOSCALING\_DOWN\_COOLDOWN | 5 | Cooldown period after scaling down |
| AUTOSCALING\_DOWN\_INTERVAL | 60 | Seconds between intervals to check resource usage before scaling, after a scaling down operation is detected |
| AUTOSCALING\_DOWN\_PERIOD | 5 | Periods to check before scaling |
| AUTOSCALING\_UP\_COOLDOWN | 5 | Cooldown period after scaling up |
| AUTOSCALING\_UP\_INTERVAL | 60 | Seconds between intervals to check resource usage before scaling, after a scaling up operation is detected |
| AUTOSCALING\_UP\_PERIOD | 5 | Periods to check before scaling |
| SERVICE\_DISCOVERY\_TTL | 60 | TTL for service discovery records |
| SERVICE\_DISCOVERY\_FAILURETHRESHOLD | 3 | Failure threshold for service discovery records |
| AWS\_RESOURCE\_CREATION\_ENABLED | yes | Let ecs-deploy create AWS IAM resources for you |
| SLACK\_WEBHOOKS | "" | Comma-separated Slack webhooks, optionally with a channel (format: url1:#channel,url2:#channel) |
| SLACK\_USERNAME | ecs-deploy | Slack username |
| ECS\_TASK\_ROLE\_PERMISSION\_BOUNDARY\_ARN | "" | Permission boundary ARN for ECS task roles |
| ECR\_SCAN\_ON\_PUSH | false | Enable ECR image scanning |
| DEPLOY\_MAX\_WAIT\_SECONDS | 900 | Maximum time in seconds to wait for a deployment to complete (default: 15 minutes) |
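
Since ecs-deploy reads its configuration from the parameter store, overriding one of these defaults could look like the sketch below; the parameter path again assumes the `/prefix/servicename/variable` layout with a `mycluster` prefix:

```
aws ssm put-parameter \
  --name "/mycluster/ecs-deploy/AUTOSCALING_DOWN_INTERVAL" \
  --type String \
  --value "120" \
  --overwrite
```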

### Autoscaling Strategies

| Strategy | Description |
| ---------------| ----------- |
| LargestContainerUp | Scale up when the largest container (+buffer) in the cluster can no longer be scheduled on a node |
| LargestContainerDown | Scale down when there is enough capacity to schedule the largest container (+buffer) after a node is removed |
| Polling | Poll all services every minute to check whether a task can't be scheduled due to resource constraints (10 services per API call, at most 1 call per second) |