Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/roeeelnekave/high-availability-terraform
Deploy a high-availability Flask application and enable monitoring with CloudWatch using Terraform, AWS, Ansible, and GitHub Actions
ansible aws aws-alb aws-asg aws-ec2 aws-iam aws-rds-postgres aws-secrets-manager github-actions terraform
Last synced: 22 days ago
- Host: GitHub
- URL: https://github.com/roeeelnekave/high-availability-terraform
- Owner: roeeelnekave
- Created: 2024-08-29T15:23:56.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-09-10T09:54:24.000Z (4 months ago)
- Last Synced: 2024-11-07T19:45:54.175Z (2 months ago)
- Topics: ansible, aws, aws-alb, aws-asg, aws-ec2, aws-iam, aws-rds-postgres, aws-secrets-manager, github-actions, terraform
- Language: HCL
- Homepage:
- Size: 35.2 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Setup a highly available, fault-tolerant application using EC2 ASG, ALB, and RDS with Terraform, Ansible, and GitHub Actions
# Prerequisites
- Ansible
- Terraform
- GitHub
- GitHub Actions

### Create a GitHub repository, clone it, and set up the folder structure
```bash
mkdir -p ./.github/workflows
mkdir -p ./terraform
mkdir -p ./ansible
mkdir -p ./templates
mkdir -p ./tempec2
```

### Setup infrastructure
- Create a provider file with `touch ./terraform/providers.tf` and paste the following into it:
```
provider "aws" {
region = "us-west-2"
}
```
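The snippets below reference several input variables (`vpc-cidr`, `public_subnet_cidrs`, `private_subnet_cidrs`, `rds_subnet_cidrs`, and `ami-id`) that are not shown in this README. A minimal sketch of a possible `./terraform/variables.tf`; the variable names come from the snippets, but the default values here are assumptions you should adapt:
```
variable "vpc-cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16" # assumed default
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for the public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"] # assumed defaults
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for the private (application) subnets"
  type        = list(string)
  default     = ["10.0.11.0/24", "10.0.12.0/24"] # assumed defaults
}

variable "rds_subnet_cidrs" {
  description = "CIDR blocks for the RDS subnets"
  type        = list(string)
  default     = ["10.0.21.0/24", "10.0.22.0/24"] # assumed defaults
}

variable "ami-id" {
  description = "AMI ID built by the CI pipeline and passed in via -var"
  type        = string
}
```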
1. To set up the infrastructure we first create a VPC. Create the file with `touch ./terraform/vpc.tf` and paste the following into it:
```
resource "aws_vpc" "high-av" {
cidr_block = var.vpc-cidr
instance_tenancy = "default"tags = {
Name = "high-av"
}
}
```
2. We then create the subnets. Create the file with `touch ./terraform/subnets.tf` and paste the following into it:
```
resource "aws_subnet" "public_subnets" {
count = length(var.public_subnet_cidrs)
vpc_id = aws_vpc.high-av.id
cidr_block = element(var.public_subnet_cidrs, count.index)
availability_zone = element(data.aws_availability_zones.available.names, count.index)tags = {
Name = "Public Subnet ${count.index + 1}"
}
}resource "aws_subnet" "private_subnets" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.high-av.id
cidr_block = element(var.private_subnet_cidrs, count.index)
availability_zone = element(data.aws_availability_zones.available.names, count.index)tags = {
Name = "Private Subnet ${count.index + 1}"
}
}resource "aws_subnet" "rds_subnets" {
count = length(var.rds_subnet_cidrs)
vpc_id = aws_vpc.high-av.id
cidr_block = element(var.rds_subnet_cidrs, count.index)
availability_zone = element(data.aws_availability_zones.available.names, count.index)tags = {
Name = "RDS Subnet ${count.index + 1}"
}
}resource "aws_db_subnet_group" "rds_subnet_group" {
name = "rds-subnet-group"
subnet_ids = aws_subnet.rds_subnets[*].idtags = {
Name = "RDS Subnet Group"
}
}data "aws_availability_zones" "available" {
state = "available"
}```
3. Then we create the NAT gateways. Create the file with `touch ./terraform/natgateway.tf` and paste the following into it:
```
resource "aws_eip" "nat" {
count = length(var.public_subnet_cidrs)
domain = "vpc"
}resource "aws_nat_gateway" "natgw" {
count = length(var.public_subnet_cidrs)
allocation_id = aws_eip.nat[count.index].id
subnet_id = aws_subnet.public_subnets[count.index].id # Attach each NAT Gateway to the corresponding public subnettags = {
Name = "NAT Gateway ${count.index + 1}"
}
}resource "aws_route_table" "private_route_table" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.high-av.idroute {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.natgw[count.index].id # Associate each route table with the corresponding NAT Gateway
}tags = {
Name = "Private Route Table ${count.index + 1}"
}
}resource "aws_route_table_association" "private_subnet_association" {
count = length(var.private_subnet_cidrs)
subnet_id = aws_subnet.private_subnets[count.index].id # Associate each private subnet with its corresponding route table
route_table_id = aws_route_table.private_route_table[count.index].id
}```
4. Then we create the public route table. Create the file with `touch ./terraform/routetable.tf` and paste the following into it:
```
resource "aws_route_table" "second_rt" {
vpc_id = aws_vpc.high-av.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "2nd Route Table"
}
}resource "aws_route_table_association" "public_subnet_asso" {
count = length(var.public_subnet_cidrs)
subnet_id = element(aws_subnet.public_subnets[*].id, count.index)
route_table_id = aws_route_table.second_rt.id
}```
5. We also create an internet gateway. Create the file with `touch ./terraform/igw.tf` and paste the following into it:
```
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.high-av.id
tags = {
Name = "Project VPC IG"
}
}
```6. We create a security groups `touch ./terraform/securitygroup.tf`
```
resource "aws_security_group" "app_sg" {
vpc_id = aws_vpc.high-av.idingress {
from_port = 5000
to_port = 5000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Opens port 5000 to anywhere
}egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}tags = {
Name = "App Security Group"
}
}resource "aws_security_group" "elb_sg" {
name = "high-av-elb-sg"
description = "Security group for high availability ELB"
vpc_id = aws_vpc.high-av.idingress {
from_port = 5000
to_port = 5000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Open to all for port 5000
}egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"] # Allow all outbound traffic
}tags = {
Name = "high-av-elb-sg"
}
}resource "aws_security_group" "db_sg" {
name = "my-db-sg"
vpc_id = aws_vpc.high-av.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Adjust for your security needs
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"] # Allow all outbound traffic
}
}resource "aws_security_group" "efs_sg" {
name = "efs-sg"
description = "Security group for EFS"
vpc_id = aws_vpc.high-av.idingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Adjust this as necessary
}egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}tags = {
Name = "EFS SG"
}
}```
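The database and EFS rules above are open to `0.0.0.0/0`, which the inline comments already flag. One way to tighten the Postgres rule is to reference the application security group as the source instead of a CIDR block; a sketch of an adjusted `db_sg` (a suggested variant, not the repository's code):
```
resource "aws_security_group" "db_sg" {
  name   = "my-db-sg"
  vpc_id = aws_vpc.high-av.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app_sg.id] # only instances in the app SG can reach Postgres
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```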
7. We create an IAM role so that EC2 instances can fetch the database secrets from Secrets Manager. Create the file with `touch ./terraform/iam.tf` and paste the following into it:
```
resource "aws_iam_role" "high-av-ec2-role" {
  name = "high-av-role"

  # Standard trust policy that lets EC2 instances assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}
```
### Setup the application
1. Create the Flask application with `touch ./app.py`. Its core is a `/simulate/<num_users>` route that spawns background load-generating threads and a `/usage` route that reports CPU and memory usage (the `create_table` and `simulate_users` helpers referenced below live in the same file):
```python
import threading

import psutil
from flask import Flask

app = Flask(__name__)

# create_table() and simulate_users() are defined elsewhere in app.py

# Route to simulate a number of concurrent users in a background thread
@app.route('/simulate/<int:num_users>')
def simulate(num_users):
    thread = threading.Thread(target=simulate_users, args=(num_users,))
    thread.start()
    return f'Simulating {num_users} users...'

# Route to get CPU and memory usage
@app.route('/usage')
def get_usage():
    cpu_percent = psutil.cpu_percent(interval=1)
    memory_percent = psutil.virtual_memory().percent
    return {'cpu': cpu_percent, 'memory': memory_percent}

if __name__ == '__main__':
    create_table()
    app.run(debug=True, host="0.0.0.0")
```
2. Create `touch ./gunicorn.py` and paste the following:
```python
bind = "0.0.0.0:5000"
workers = 2
```

3. Create `touch ./requirements.txt` and paste the following:
```text
blinker==1.8.2
click==8.1.7
Flask==3.0.3
itsdangerous==2.2.0
Jinja2==3.1.4
MarkupSafe==2.1.5
psutil==6.0.0
psycopg2-binary==2.9.9
Werkzeug==3.0.3
boto3
gunicorn
```
4. Create the templates:
- Create `touch ./templates/index.html` and paste the following:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Stress Test</title>
  <script>
    function fetchUsage() {
      fetch('/usage')
        .then(response => response.json())
        .then(data => {
          document.getElementById('cpu-usage').innerText = `CPU Usage: ${data.cpu}%`;
          document.getElementById('memory-usage').innerText = `Memory Usage: ${data.memory}%`;
        })
        .catch(error => console.error('Error fetching usage data:', error));
    }

    // Fetch usage data every 2 seconds
    setInterval(fetchUsage, 2000);
  </script>
</head>
<body>
  <h1>Stress Test Form</h1>
  <form method="post">
    <button type="submit">Submit</button>
  </form>

  <h2>Submissions</h2>
  <a href="/submissions">View All Submissions</a>

  <h2>Simulate Users</h2>
  <a href="/simulate/10">Simulate 10 Users</a>

  <h2>System Usage</h2>
  <p id="cpu-usage">CPU Usage: 0%</p>
  <p id="memory-usage">Memory Usage: 0%</p>
</body>
</html>
```
- Create `touch ./templates/submissions.html` and paste the following:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Submissions</title>
</head>
<body>
  <h1>All Submissions</h1>
  <ul>
    {% for submission in submissions %}
    <li>{{ submission[1] }}</li>
    {% endfor %}
  </ul>
  <a href="/">Back to Form</a>
</body>
</html>
```
# Setup CI-CD Structure
1. Sign in to the AWS Console and navigate to the IAM console.
2. On IAM console click on **Identity providers**
3. Click **Add provider**.
4. For **Provider type**, select **OpenID Connect**.
5. On **Provider URL** add `token.actions.githubusercontent.com`
6. On **Audience** add `sts.amazonaws.com`
7. Click on **Add Provider**
8. Click on the provider that we just created, copy and note down its **ARN**, then click **Assign role**.
9. Select **Create a New Role** then click **Next**.
10. In **Trusted entity type** select **Web Identity**
11. Under **Web identity**, for **Audience** select `sts.amazonaws.com`.
12. For **GitHub organization**, enter your GitHub username.
13. Leave everything else at the defaults and click **Next**.
14. Select appropriate permissions if required; for now let's use **AdministratorAccess**. Select it, scroll down, and click **Next**.
15. For **Role name**, enter `github-action-role`.
16. For **Description**, enter something like:
```text
GitHub Actions role that allows Terraform to set up the infrastructure for the application
```
17. Now scroll down and click **Create role**.
18. Now navigate to the role we just created and copy and note down its **ARN**.
19. Now go to GitHub, navigate to your project repository, and click **Settings**.
20. Scroll down to the *Security* section, click **Secrets and variables**, then click **Actions**.
21. Then click **New repository secret**.
22. Name it **IAMROLE_GITHUB**, paste the role ARN you copied in step 18, and click **Add secret**. If you prefer to manage the OIDC provider and role with Terraform instead of the console, see the sketch below.
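A rough Terraform equivalent of steps 1–18 (not part of this repository; the thumbprint and the repository filter in the trust policy are assumptions you should verify and adapt):
```
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # GitHub's published thumbprint; verify before use
}

resource "aws_iam_role" "github_actions" {
  name = "github-action-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Condition = {
        StringEquals = { "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com" }
        StringLike   = { "token.actions.githubusercontent.com:sub" = "repo:<your-github-username>/*" }
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "github_admin" {
  role       = aws_iam_role.github_actions.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
```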
23. We also create an SSH key. Run the following to generate one:
```bash
ssh-keygen -t rsa -b 4096 -f id_rsa -N ""
```
24. Again click **New repository secret**, name it `MYKEY_PUB`, paste the contents of `id_rsa.pub` (`cat ./id_rsa.pub`), and click **Add secret**.
25. Create the GitHub Actions workflow file with `touch ./.github/workflows/build.yaml` and paste the following. It checks out the repository and configures AWS credentials by assuming the role stored in the `IAMROLE_GITHUB` secret (step 22).
```yaml
on:
  push:
    branches: [ main ]

# The id-token permission is required so the workflow can authenticate to AWS via OIDC
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-west-2
          role-to-assume: ${{ secrets.IAMROLE_GITHUB }}
          role-session-name: BuildSession
```
26. To install the necessary packages, append the following step under `steps:` in `./.github/workflows/build.yaml`:
```yaml
      - name: Install unzip, AWS CLI, Ansible, and Terraform
        run: |
          sudo apt-get update
          sudo apt-get install -y unzip awscli gnupg software-properties-common
          wget -O- https://apt.releases.hashicorp.com/gpg | \
            gpg --dearmor | \
            sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
          echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
            https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
            sudo tee /etc/apt/sources.list.d/hashicorp.list
          sudo apt-get update
          sudo apt-get install terraform -y
          pip install ansible
```
27. To build a server AMI, we launch a temporary EC2 instance and configure it with Ansible. Append the following steps to `./.github/workflows/build.yaml` (a sketch of the `tempec2` configuration these steps assume follows the code block):
```yaml
      - name: Generate SSH Key Pair and Launch EC2 instances
        run: |
          echo "[cpu-api]" > inventory
          ssh-keygen -t rsa -b 4096 -f id_rsa -N ""
          terraform init
          terraform apply --auto-approve
        working-directory: tempec2
        continue-on-error: true

      - name: Ansible Playbook
        run: |
          sleep 30
          chmod 400 id_rsa
          ansible-playbook -i inventory --private-key id_rsa install-app.yml
        working-directory: tempec2
        continue-on-error: true
```
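These workflow steps assume a small Terraform configuration in `./tempec2` that launches a temporary instance tagged `amimaker`, uses the generated `id_rsa.pub`, and appends the instance's address to the `inventory` file for Ansible; that configuration (and `install-app.yml`) is not shown in this README. A rough sketch of what `./tempec2/main.tf` could look like; the AMI lookup, instance type, and SSH user are assumptions:
```
provider "aws" {
  region = "us-west-2"
}

# Latest Ubuntu 22.04 image as an assumed base OS
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_key_pair" "amimaker" {
  key_name   = "amimaker-key" # illustrative name
  public_key = file("${path.module}/id_rsa.pub")
}

resource "aws_instance" "amimaker" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro" # assumed size
  key_name      = aws_key_pair.amimaker.key_name

  tags = {
    Name = "amimaker" # the workflow filters on this tag to find the instance ID
  }

  # Append the public IP to the inventory file created by the workflow
  provisioner "local-exec" {
    command = "echo '${self.public_ip} ansible_user=ubuntu ansible_ssh_common_args=\"-o StrictHostKeyChecking=no\"' >> inventory"
  }
}
```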
28. To create an AMI and wait until the AMI creation is finished, append the following steps to `./.github/workflows/build.yaml`:
```yaml
      - name: Retrieve Instance ID
        id: get_instance_id
        run: |
          INSTANCE_ID=$(aws ec2 describe-instances \
            --filters "Name=tag:Name,Values=amimaker" \
            --query "Reservations[*].Instances[*].InstanceId" \
            --output text)
          echo "INSTANCE_ID=$INSTANCE_ID" >> $GITHUB_ENV
        continue-on-error: true

      - name: Create AMI
        id: create_ami
        run: |
          OUTPUT=$(aws ec2 create-image \
            --instance-id ${{ env.INSTANCE_ID }} \
            --name "MyNewAMI-${{ github.run_number }}" \
            --no-reboot)
          AMI_ID=$(echo $OUTPUT | jq -r '.ImageId')
          echo "AMI_ID=$AMI_ID" >> $GITHUB_ENV
        continue-on-error: true

      - name: Wait for AMI to be available
        id: wait_for_ami
        run: |
          AMI_ID=${{ env.AMI_ID }}
          echo "Waiting for AMI $AMI_ID to be available..."
          STATUS="pending"
          while [ "$STATUS" != "available" ]; do
            STATUS=$(aws ec2 describe-images \
              --image-ids $AMI_ID \
              --query "Images[0].State" \
              --output text)
            echo "Current status: $STATUS"
            if [ "$STATUS" == "available" ]; then
              echo "AMI $AMI_ID is available."
              break
            fi
            sleep 30 # Wait for 30 seconds before checking again
          done
        continue-on-error: true
```
29. To print the AMI ID we just created and destroy the temporary AMI-maker EC2 instance, append the following steps to `./.github/workflows/build.yaml`:
```yaml
      - name: Output AMI ID
        run: echo "AMI ID is ${{ env.AMI_ID }}"
        continue-on-error: true

      - name: Terraform destroy
        run: |
          terraform destroy --auto-approve
        working-directory: tempec2
        continue-on-error: true
```
30. To set up the main infrastructure, and destroy it if the apply fails, append the following steps to `./.github/workflows/build.yaml`:
```yaml
      - name: Terraform Update the infrastructure
        run: |
          echo "$MY_PUB_KEY" > mykey.pub
          terraform init
          terraform apply -var="ami-id=${{ env.AMI_ID }}" --auto-approve
        working-directory: terraform
        continue-on-error: true
        env:
          MY_PUB_KEY: ${{ secrets.MYKEY_PUB }}

      - name: Terraform Destroy on Failure
        if: failure()
        run: terraform destroy --auto-approve
        working-directory: terraform
```
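Note that each workflow run starts on a fresh runner, so the Terraform state written during `terraform apply` does not survive between pushes unless a remote backend is configured. One way to do that, assuming an existing S3 bucket and DynamoDB lock table (the names below are placeholders), is to add something like this to `./terraform/providers.tf`:
```
terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket" # placeholder bucket name
    key            = "high-availability/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks" # placeholder lock table name
    encrypt        = true
  }
}
```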
Now run the following commands to push everything to your GitHub repository:
```bash
git add .
git commit -m "Updated the repos"
git push
```
Check the GitHub Actions run and your infrastructure.