{"id":19297395,"url":"https://github.com/dmtkac/automatic-aws-deploy","last_synced_at":"2026-04-14T05:31:59.045Z","repository":{"id":261624183,"uuid":"879808024","full_name":"dmtkac/automatic-aws-deploy","owner":"dmtkac","description":"Automated deployment of a multi-container web app on AWS using Terraform, Packer, and Docker for infrastructure provisioning, continuous deployment and comprehensive monitoring stack.","archived":false,"fork":false,"pushed_at":"2025-04-23T11:14:12.000Z","size":5525,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-23T12:25:15.683Z","etag":null,"topics":["aws","bash","cicd-pipeline","devops","docker","grafana","javascript","loki","packer","powershell","prometheus","scripting","security-automation","terraform","testing","ubuntu-server","webapp"],"latest_commit_sha":null,"homepage":"https://matematikum.xyz/en/","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dmtkac.png","metadata":{"files":{"readme":"Readme.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-10-28T15:26:30.000Z","updated_at":"2025-04-23T11:14:15.000Z","dependencies_parsed_at":"2024-11-07T15:34:00.604Z","dependency_job_id":"fb71ad1b-cf33-4e46-9076-78cbeff80902","html_url":"https://github.com/dmtkac/automatic-aws-deploy","commit_stats":null,"previous_names":["dmtkac/automatic-aws-deploy"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/dmtkac/automatic-aws-deploy","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repo
sitories/dmtkac%2Fautomatic-aws-deploy","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmtkac%2Fautomatic-aws-deploy/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmtkac%2Fautomatic-aws-deploy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmtkac%2Fautomatic-aws-deploy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dmtkac","download_url":"https://codeload.github.com/dmtkac/automatic-aws-deploy/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmtkac%2Fautomatic-aws-deploy/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31784251,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-14T02:24:21.117Z","status":"ssl_error","status_checked_at":"2026-04-14T02:24:20.627Z","response_time":153,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aws","bash","cicd-pipeline","devops","docker","grafana","javascript","loki","packer","powershell","prometheus","scripting","security-automation","terraform","testing","ubuntu-server","webapp"],"created_at":"2024-11-09T23:04:36.392Z","updated_at":"2026-04-14T05:31:59.039Z","avatar_url":"https://github.com/dmtkac.png","language":"Shell","funding_links":["https://www.paypal.com/donate?hosted_button_id=WWVH67M22965A"],"categories":[],"sub_categories":[],"readme":"# Automated web application deployment with AWS\n\n## Table of contents\n- [Introduction](#Introduction)\n    - [Interface](#Interface)\n    - [Prerequisites](#Prerequisites)\n    - [Architecture](#Architecture)\n    - [Technical stack](#technical-stack)\n- [General deploying algorithm (local + EC2)](#General-deploying-algorithm)\n- [Detailed deploying algorithm on EC2](#detailed-autorun-of-installsh-EC2-part)\n- [Project's directories and files](#Projects-directories-and-files)\n    - [.github/workflows](#githubworkflows)\n    - [configs](#configs)\n    - [cypress](#cypress)\n    - [docker](#docker)\n    - [prometheus/loki/grafana](#Monitoring-stack)\n    - [illustrations](#illustrations)\n    - [key](#key)\n    - [packer](#packer)\n    - [scripts](#scripts)\n    - [terraform](#terraform)\n    - [utilities](#utilities)\n    - [root level files](#root-level-files)\n- [Detailed Terraform setup](#Terraform-setup)\n    - [AWS EC2 instances](#AWS-EC2-instances)\n    - [Application load balancer (ALB)](#Application-load-balancer-ALB)\n    
- [Target groups](#Target-groups)\n    - [IAM roles and policies](#IAM-roles-and-policies)\n    - [Security groups](#Security-groups)\n    - [S3 bucket](#S3-bucket)\n    - [Route 53](#Route-53)\n    - [IP whitelisting](#IP-whitelisting)\n    - [Outputs](#Outputs)\n- [CI/CD pipeline](#CICD-pipeline)\n  - [1st part (GitHub Actions)](#GitHub-Actions-workflow)\n  - [2nd part (EC2)](#ec2-part-of-workflow)\n- [Monitoring stack](#Monitoring-stack)\n  - [AWS Cloudwatch metrics in Grafana](#AWS-Cloudwatch-metrics-in-Grafana)\n- [Testing](#Testing)\n    - ['frontend' container testing](#frontend-container-testing)\n        - [Raw code syntax check (ESLint)](#Raw-code-syntax-check-ESLint)\n        - [Integration testing (Jest)](#Integration-testing-Jest)\n        - [End-to-End testing (Cypress)](#End-to-End-testing-Cypress)\n            - [UI tests](#UI-tests)\n            - [Interaction tests](#Interaction-tests)\n    - ['postgres' container testing](#postgres-container-testing)\n    - ['gateway' container testing](#gateway-container-testing)\n- [Future work](#Future-work)\n    - [Scalability](#Scalability)\n    - [UX](#UX)\n    - [Security](#Security)\n    - [CI/CD](#CICD)\n- [Licensing](#Licensing)\n- [Support this project](#support-this-project)\n\n## Introduction\nThis project is part of my DevOps portfolio, featuring technologies such as [AWS](#Architecture), [Docker](#docker), [Terraform](#Terraform-Setup), [Packer](#packer) [GitHub Actions CI/CD pipelining](#GitHub-Actions-CICD-workflow) and various [testing frameworks](#Testing). It is designed for quick, automated deployment of a full-stack, multi-container web application. The web app demonstrated here is a demo version of a full-scale math quiz application developed within the scope of my other larger [project](https://matematikum.xyz/en/index_en.html). 
While I’ve implemented some of the security measures used in my original setup, others remain undisclosed for obvious reasons.\n\nThe security layers presented in this project include:\n\n- A proactive Web Application Firewall (WAF) using [ModSecurity](./docker/libmodsecurity/);\n- A reactive banning system with [Fail2ban](./configs/jail.local) integrated directly with iptables;\n- Country-level [GeoIP blocking](./docker/nginx.conf) powered by the MaxMind free GeoLite database;\n- A restrictive UFW rule set.\n\nFeel free to experiment with and modify this setup as needed. See licensing conditions [here](#Licensing).\n\nThe web app itself is written in [JavaScript](./docker/frontend/server.js) and [HTML](./docker/frontend//public/index.html), running on Node.js within the 'frontend' container, serving as a demonstration of the DevOps processes behind it. However, you're free to replace the 'frontend' container with any other service, app, or static website, and the deployment will proceed in just a few minutes (depending on the size of your application). In the current setup, deployment takes about 30 minutes, with roughly 17 minutes spent compiling a custom Nginx server with third-party libraries (ModSecurity and GeoIP).\n\nThe 'frontend' container relies on the 'postgres' container being fully operational, as its logic is tightly integrated with the database. The PostgreSQL [database](./docker/postgres-init/Sample.sql) in this demo contains 40 single- and multiple-choice math questions, taken from a larger set of over 1,000 questions in the full version of my web app. These questions are shuffled and presented randomly in batches of 20 every time the user submits answers and refreshes the page. 
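The shuffle-and-batch behaviour described above can be sketched in a few lines of shell (illustrative only; the real selection happens in the backend code, but the pool size of 40 and batch size of 20 match this demo):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the quiz's shuffle-and-batch idea: draw a random
# batch of 20 question ids from a pool of 40, the way each page refresh does.
# (Not code from this repo; the real logic lives in server.js and SQL.)
POOL_SIZE=40
BATCH_SIZE=20

# Shuffle the whole pool, then take the first BATCH_SIZE ids.
batch=$(seq 1 "$POOL_SIZE" | shuf | head -n "$BATCH_SIZE")

echo "$batch"
```

Each run prints a different ordering, mirroring how a new batch is presented on every refresh.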
Additionally, IP blocking is in place at the [backend code](./docker/frontend/server.js) level to prevent abuse, with short-term (15-minute) and long-term (24-hour) blocks applied to users who excessively refresh the page without interacting with the app's interface (see Image 1).\n\nTo start the deployment process, simply run one of the [deploy scripts](#root-level-files) (available in both PowerShell and Bash) located in the root directory. The general deploying algorithm is shown [here](#General-deploying-algorithm).\n\n### Interface\nThe web app possesses an [adaptive interface](./docker/frontend/public/styles.css) and runs smoothly on any device in a web browser.\n\n![Interface of web app](https://matematikum.xyz/images/thumbs/device_collage_en.png)\n\n\u003cp align=\"center\"\u003e\u003ci\u003eImage 1. Interface of web app\u003c/i\u003e\u003c/p\u003e\n\n### Monitoring\nThe [monitoring stack](#Monitoring-stack) of this project comprises a wide variety of services, such as Prometheus, Loki, Process Exporter, Postgres Exporter, and Prom-client, combined with Grafana's dashboard capabilities.\n\n![Monitoring stack](./docker/monitoring_stack/screenshots/2-quick_server_grafana_panels.png)\n![Monitoring stack](./docker/monitoring_stack/screenshots/3-server_logs_grafana_panels.png)\n\n\u003cp align=\"center\"\u003e\u003ci\u003eImage 2. 
Monitoring stack\u003c/i\u003e\u003c/p\u003e\n\n[More screenshots](#monitoring-stack)\n\n### Prerequisites\n- AWS account with programmatic access and required policies (AmazonEC2FullAccess, AmazonS3FullAccess, etc.);\n- GitHub Action Secrets configured for automated deployment (AWS credentials, ECR repository, etc.);\n- Software installed on local machine where deploy script runs: AWS CLI, Packer, Terraform, Git, SSH.\n\nRun either of [deploy scripts](#root-level-files) for details.\n\n### Architecture\n- 2 EC2 instances (Ubuntu) with Docker installed\n- S3 bucket for storing application assets\n- Elastic Load Balancer for traffic distribution\n- (Optional) Elastic Container Registry (ECR), Route 53, and Amazon Certificate Manager for domain name and SSL management\n\nWeb app:\n  - Nginx reverse proxy\n  - Node.js web application\n  - PostgreSQL database\n\nMonitoring:\n  - Telegraf/Prometheus/Grafana pipeline for monitoring server's metrics\n  - Process_exporter/Prometheus/Grafana pipeline for monitoring server's processes\n  - Postgres_exporter/Prometheus/Grafana pipeline for monitoring PostgreSQL database\n  - Prom-client/Prometheus/Grafana pipeline for monitoring Node.js metrics\n  - Custom 'Log_pusher'/Loki/Grafana pipeline for monitoring server's logs\n\n### Technical stack\n- Infrastructure:\n  - [AWS/Terraform](#terraform)\n  - [Docker](#docker)\n  - [Packer](#packer)\n  - [GitHub Actions](#GitHub-Actions-CICD-workflow)\n  - [Bash](./deploy.sh)\n  - [PowerShell](./deploy.ps1)\n  - Python: [1: log parser](./scripts/every_minute_modsec_dump_parsed.py); [2: log pusher service for Loki](./docker/monitoring_stack/log_pusher/push_logs.py)\n\n- Security:\n  - WAF ([ModSecurity](./docker/nginx.conf/), [Fail2ban](./configs/jail.local))\n  - [Nginx proxy](./docker/nginx.conf)\n  - [GeoIP blocking](./docker/nginx.conf)\n  - [Used libraries](./docker/frontend/package.json):\n    - helmet \n    - rate-limiter\n\n- Frontend:\n  - 
[HTML](./docker/frontend/public/index.html)\n  - [CSS](./docker/frontend//public/styles.css)\n\n- Backend:\n  - [Node.js](./docker/frontend/server.js)\n  - [Express.js](./docker/frontend/server.js)\n  - [PostgreSQL](./docker/postgres-init/Sample.sql)\n\n- Monitoring\n  - [Grafana](./docker/monitoring_stack/grafana/)\n  - [Prometheus](./docker/monitoring_stack/prometheus.yml)\n  - [Telegraf](./docker/monitoring_stack/telegraf.conf)\n  - [Process exporter](./docker/monitoring_stack/process_exporter_config.yml)\n  - [Loki](./docker/monitoring_stack/loki/loki-config.yml)\n  - [Custom log pusher for Loki](./docker/monitoring_stack/log_pusher/push_logs.py)\n\n- Testing:\n  - [Jest](#integration-testing-jest)\n  - [Cypress](#end-to-end-testing-cypress)\n  - [ESlint](#raw-code-syntax-check-eslint)\n  - [PostgreSQL](#gateway-container-testing)\n\n## General deploying algorithm\n\n```mermaid\ngraph TD\n    A[Start Deploy Script] --\u003e B[Display AWS Prerequisites]\n    B --\u003e BA{Check Required Soft}\n    BA -- All installed --\u003e C{AWS CLI Configured?}\n    BA -- Some missing --\u003e BB[Exit Script]\n    C -- Yes --\u003e D{Change Working Branch}\n    C -- No --\u003e E[Exit Script]\n    D -- source code --\u003eDA['main' Stays Generic]\n    D -- working branch --\u003e F[Create 'main-configured']\n    F --\u003e G[Generate SSH Key Pair]\n    G --\u003e H[Import Key Pair to AWS]\n    H --\u003e I{Enable CI/CD Pipeline?}\n    I -- Yes --\u003e J[Create AWS ECR]\n    I -- No --\u003e K[Skip CI/CD Pipeline]\n    K --\u003e L{Has Custom Domain?}\n    J --\u003e L{Has Custom Domain?}\n    L -- Yes --\u003e M[Create ACM]\n    L -- No --\u003e N[Use ELB DNS for Access]\n    M --\u003e O{Fetch User IP Address?}\n    N --\u003e O\n    O -- Yes --\u003e P[Update SSH Whitelist]\n    O -- No --\u003e PA[Exit Script]\n    P --\u003e Q[Fetch Latest Ubuntu AMI ID]\n    Q --\u003e R[Update Packer Template]\n    R --\u003e S[Build New AMI with Packer]\n    S --\u003e T[Initialize 
Terraform]\n    T --\u003e U[Apply Terraform Plan]\n    U --\u003e V1[Run 'install.sh' on EC2]\n    U --\u003e VA1[Run 'install.sh' on EC2]     \n    subgraph V [Autorun 'install.sh' on 1 EC2]\n        V1[Install Necessary Packages]\n        V1 --\u003e V2[Set Up AWS CLI]\n        V2 --\u003e V3[Configure UFW and IPTables]\n        V3 --\u003e V4[Install Docker]\n        V4 --\u003e V5[Fetch S3 Bucket Name]\n        V5 --\u003e V6[Log into AWS ECR]\n        V6 --\u003e V7[Create Docker Override]\n        V7 --\u003e V8[Add public IP to 'nginx.conf']\n        V8 --\u003e V9[Generate self-signed cert.]\n        V9 --\u003e V10[Run Docker Compose]\n        V10 --\u003e V11[Set WAF 'ModSecurity']\n        V11 --\u003e V12[Set Fail2Ban and Cron Serv.]\n        V12 --\u003e V13[Final Checks and Cleanups]\n    end\n    \n    subgraph VA [Autorun 'install.sh' on 2 EC2]\n        VA1[Install Necessary Packages]\n        VA1 --\u003e VA2[Set Up AWS CLI]\n        VA2 --\u003e VA3[Configure UFW and IPTables]\n        VA3 --\u003e VA4[Install Docker]\n        VA4 --\u003e VA5[Fetch S3 Bucket Name]\n        VA5 --\u003e VA6[Log into AWS ECR]\n        VA6 --\u003e VA7[Create Docker Override]\n        VA7 --\u003e VA8[Add public IP to 'nginx.conf']\n        VA8 --\u003e VA9[Generate self-signed cert.]\n        VA9 --\u003e VA10[Run Docker Compose]\n        VA10 --\u003e VA11[Set WAF 'ModSecurity']\n        VA11 --\u003e VA12[Set Fail2Ban and Cron Serv.]\n        VA12 --\u003e VA13[Final Checks and Cleanups]\n    end\n    \n    subgraph VC1 [Docker Service]\n        VF1[Database: Postgres]\n        VF1 \u003c--\u003e VF2[Frontend: Node.js]\n        VF2 \u003c--\u003e VF3[Gateway: Nginx]\n        VF1 \u003c--\u003e VF4[Monitoring  stack]\n        VF2 \u003c--\u003e VF4[Monitoring  stack]\n        VF3 \u003c--\u003e VF4[Monitoring  stack]\n    end\n    V10 -.-\u003e VC1\n    \n    subgraph VC2 [Docker Service]\n        VF1A[Database: Postgres]\n        VF1A \u003c--\u003e 
VF2A[Frontend: Node.js]\n        VF2A \u003c--\u003e VF3A[Gateway: Nginx]\n        VF1A \u003c--\u003e VF4A[Monitoring  stack]\n        VF2A \u003c--\u003e VF4A[Monitoring  stack]\n        VF3A \u003c--\u003e VF4A[Monitoring  stack]\n    end\n    VA10 -.-\u003e VC2\n\n    U --\u003e VB[Fetch SSH conn. commands]\n    VB --\u003e VC[Upload content to S3]\n    VC --\u003e W[Push 'main-configured']\n    W --\u003e X{Connect to EC2 Instances?}\n    X -- Yes --\u003e Y{Connect via SSH?}\n    Y -- Yes --\u003e AA{Which EC2 instance?}\n    AA --\u003e 1 --\u003e V\n    AA --\u003e 2 --\u003e VA\n    Y -- No --\u003e AB[Connect Later]\n    AB --\u003e ABA[Exit Script]\n    X -- No/SSH failed --\u003e Z{Destroy Infrastructure?}\n    Z -- Yes --\u003e AC{Confirm Destroying?}\n    AC -- Yes --\u003e ACA[Destroying...]\n    AC -- No --\u003e ACB[Exit Script]\n    Z -- No --\u003e AD[Use Dedicated Script Later]\n    AD --\u003e ADA[Exit Script]\n```\n\u003cp align=\"center\"\u003e\u003ci\u003eImage 3. 
General deploying algorithm\u003c/i\u003e\u003c/p\u003e\n\n## Detailed autorun of '[install.sh](./scripts/install.sh)' (EC2 part)\n\n```mermaid\ngraph TD;\n    A[install.sh executes] --\u003e B[Log setup: define logfile and error handling]\n    B --\u003e C[Preconfigure iptables-persistent]\n    C --\u003e D[Read allowed IPs from file: allowed_aws_regional_ips.txt]\n    D --\u003e E[Configure passwordless sudo for 'ubuntu' user]\n    \n    E --\u003e F[Update system packages]\n    F --\u003e G[Install necessary packages: Fail2ban, Docker, AWS CLI, UFW, OpenSSH, etc.]\n    G --\u003e H[Verify AWS CLI installation]\n    \n    H --\u003e I[Configure UFW firewall]\n    I --\u003e J[Set default deny for incoming traffic]\n    J --\u003e K[Allow ports: 22, 80, 443, 3000]\n    K --\u003e L[Whitelist SSH access for allowed IPs]\n    L --\u003e M[Save iptables rules for reboot persistence]\n\n    M --\u003e N[Install Docker]\n    N --\u003e O[Start Docker service]\n    O --\u003e P[Verify Docker installation with hello-world container]\n    P --\u003e Q[Add 'ubuntu' user to Docker group]\n\n    Q --\u003e R{Check for AWS ECR}\n    R -- No AWS ECR found --\u003e U1\n    R -- AWS ECR found --\u003e S2[Login to AWS ECR]\n\n    S2 --\u003e U[Set up cron jobs for Docker containers' updates]\n    U --\u003e U1[Preparing directories for Loki and Log pusher]\n    U1 --\u003e W[Run Nginx config update script]\n    W --\u003e X[Build and start Docker containers with Docker Compose]\n    X --\u003e Y[Set Docker containers to restart on reboot]\n\n    Y --\u003e Z[Configure ModSecurity and Fail2Ban]\n    Z --\u003e AA[Create Fail2Ban jail and filter files]\n    AA --\u003e AB[Create ModSecurity log files]\n\n    AB --\u003e AC[Configure SSH service]\n    AC --\u003e AD[Enable SSH, generate host keys if needed]\n    AD --\u003e AE[Disable root login for SSH]\n    AE --\u003e AF[Restart Docker, Fail2ban, SSH, and cron services]\n\n    AF --\u003e AG[Check and log UFW status]\n    AG 
--\u003e AH[Check and log Docker service status]\n    AH --\u003e AI[Check and log Fail2ban status]\n    AI --\u003e AJ[Check cron job status]\n\n    AJ --\u003e AK[Cleanup sensitive information and lock down permissions]\n    AK --\u003e AL[Display disk space usage]\n\n    AL --\u003e AM{Were there errors?}\n    AM -- Yes --\u003e AN[Log errors in install_script.log]\n    AM -- No --\u003e AO[Log success message in install_script.log]\n\n    AO --\u003e AP[End of install.sh script execution]\n    AN --\u003e AP\n```\n\n\u003cp align=\"center\"\u003e\u003ci\u003eImage 4. Detailed autorun of 'install.sh' (EC2 part)\u003c/i\u003e\u003c/p\u003e\n\n## Project's directories and files\n### .github/workflows\n[docker-deploy.yml](./.github/workflows/docker-deploy.yml): this directory contains GitHub Actions [workflow](#GitHub-Actions-CICD-workflow) file, defining the CI/CD pipeline to build, test, and deploy Docker containers to AWS ECR. It automates tasks such as setting up AWS credentials, building Docker images, and running tests.\n\n### configs\n[jail.local](./configs/jail.local): this is the configuration file for Fail2ban, defining jail rules to monitor and block suspicious IPs based on logs. 
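For illustration, a jail definition of the kind jail.local holds has this general shape. The section name, filter, log path, and thresholds below are hypothetical placeholders, not values copied from this repo:

```shell
#!/usr/bin/env bash
# Write a hypothetical Fail2ban jail to a temp file to show its structure.
# All names and numbers here are placeholders, not taken from configs/jail.local.
JAIL_FILE="$(mktemp)"

cat > "$JAIL_FILE" <<'EOF'
[nginx-modsec]
enabled  = true
port     = http,https
filter   = nginx-modsec
logpath  = /var/log/modsec_parsed.log
maxretry = 3
findtime = 600
bantime  = 86400
EOF

cat "$JAIL_FILE"
```

A jail like this bans, for `bantime` seconds, any IP that trips the named filter `maxretry` times within `findtime` seconds.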
The IP addresses listed at the beginning are the whitelisted user's IP and the IP ranges of the EU and US AWS regions, included to enable the 'EC2 Instance Connect' feature as a backup way of connecting to the EC2 instances.\n\n### cypress\n- [integration](./cypress/integration): contains test files for Cypress that run e2e (UI and interaction) tests:\n  - [ui.spec.js](./cypress/integration/ui/ui.spec.js): focuses on testing static elements on the frontend;\n  - [interaction.spec.js](./cypress/integration/interaction/interaction.spec.js): handles dynamic element testing such as user interactions and quiz logic;\n  - [plugins](./cypress/plugins): Cypress plugins to extend Cypress functionality;\n  - [support](./cypress/support): additional utilities and configuration files to support Cypress tests;\n  - [cypress.config.js](./cypress/cypress.config.js): Cypress configuration file that manages test setup, plugins, and environment configurations.\n\n### docker\n- [certs](./docker/certs/): placeholder directory that gets populated with a self-signed certificate and key on each provisioned EC2 instance if no custom domain is provided during the local deployment phase. 
If a custom domain is specified, the custom certificate and key are uploaded to AWS Certificate Manager (ACM) during the local phase of deployment from the user's machine;\n- [frontend](./docker/frontend/): contains the frontend source code and tests:\n  - [tests](./docker/frontend/__tests__): integration tests run by Jest:\n    - [server.test.js](./docker/frontend/__tests__/server.test.js): integration testing of the backend API; fetches and validates questions and their options, and verifies the existence of illustrations;\n    - [illustrations.json](./docker/frontend/__tests__/illustrations.json): a list of illustration filenames used for verifying the existence of specific images within the [illustrations](./illustrations/) folder during tests;\n  - [index.html](./docker/frontend/public/index.html): main HTML file of the frontend web application;\n  - [api.test.js](./docker/frontend/api.test.js): Jest testing file for validating frontend API calls;\n  - [Dockerfile.dev](./docker/frontend/Dockerfile.dev): Dockerfile for building the 'frontend' container;\n  - [package.json](./docker/frontend/package.json): contains the dependencies and scripts required to build and run the frontend;\n  - [server.js](./docker/frontend/server.js): backend server logic for serving the frontend and handling API requests;\n- [geoip_db](./docker/geoip_db/): this directory should contain MaxMind's proprietary GeoLite2 country-level IP database ('.mmdb'); you can register with MaxMind and get your copy of the database, with follow-up updates, for free;\n- [libmodsecurity](./docker/libmodsecurity/): holds ModSecurity-related files for the web application firewall (WAF) setup;\n- [monitoring_stack](./docker/monitoring_stack/): contains configuration files for Telegraf, Prometheus, the custom Log_pusher written in Python, Loki, the EC2 Process exporter, and Grafana's dashboards:\n  - [grafana/provisioning](./docker/monitoring_stack/grafana/provisioning/): 'dashboard' contains config and layout files for the main 
dashboard itself, and 'datasources' contains input sources (Prometheus, Loki);\n  - [grafana/grafana.ini](./docker/monitoring_stack/grafana/grafana.ini): INI file for auxiliary functions, such as thresholds and sending alerts based on them;\n  - [log_pusher](./docker/monitoring_stack/log_pusher/): contains the Dockerfile for building the custom log-handling service and the actual app, written in Python, which uses Loki's ability to ingest logs via its API;\n  - [loki](./docker/monitoring_stack/loki/): configuration files for the Loki database: a Dockerfile for building the container, a configuration YAML file, and an entrypoint shell script that assigns correct permissions to important structural directories within the service;\n  - [screenshots](./docker/monitoring_stack/screenshots/): screenshots of Grafana's dashboard in action;\n  - [process_exporter_config.yml](./docker/monitoring_stack/process_exporter_config.yml): config file for scraping AWS EC2 processes' metrics (e.g., CPU and RAM usage);\n  - [prometheus.yml](./docker/monitoring_stack/prometheus.yml): config file for the Prometheus database;\n  - [telegraf.conf](./docker/monitoring_stack/telegraf.conf): config file for the Telegraf agent that scrapes system metrics on AWS EC2 for Prometheus;\n- [postgres](./docker/postgres/): contains the Dockerfile for building the 'postgres' container;\n- [postgres-init](./docker/postgres-init/): contains the 40-question demo 'Sample' database for setting up the 'postgres' container;\n- [docker-compose.yml](./docker/docker-compose.yml): Docker Compose file for running the multi-container development environment;\n- [Dockerfile.nginx](./docker/Dockerfile.nginx): Dockerfile for setting up the 'gateway' container that serves the 'frontend' and acts as a reverse proxy;\n- [nginx.conf](./docker/nginx.conf): Nginx configuration file with server and proxy settings;\n- [test_db.sql](./docker/test_db.sql): set of SQL commands for testing the database schema and validating the 'Sample' database's content.\n\n### 
[illustrations](./illustrations/)\nStores illustration assets for the web app, which will be uploaded to an S3 bucket during the deployment process.\n\n### key\nStores the public and private SSH keys for secure access to the provisioned EC2 instances (not included; generated by the deploy script):\n- my-local-key-pair.pem: private key for SSH connections;\n- my-local-key-pair.pub: public key for SSH connections.\n\nThis directory is excluded via .gitignore from being committed to the GitHub repository, so the public key moves only between your machine and the provisioned EC2 instances. The private key, of course, never leaves your machine.\n\n### packer\n- [packer-template.json](./packer/packer-template.json): the Packer template that defines which files are baked into the AMI (and, thus, the OS). It is used to continue the deployment process on the provisioned EC2 instances, in particular by including the auto-run '[install.sh](./scripts/install.sh)' script in the AMI.\n\n### scripts\n- [install.sh](#detailed-autorun-of-installsh-EC2-part): the main script that runs on each EC2 instance, handling all configurations and installations (e.g., Docker, UFW, Fail2ban);\n- [every_min_dump.sh](./scripts/every_min_dump.sh): shell script that gathers Docker logs from the 'gateway' container and dumps them into a dedicated log file once per minute;\n- [every_minute_modsec_dump.sh](./scripts/every_minute_modsec_dump.sh): shell script related to logging and ModSecurity raw dumps;\n- [every_minute_modsec_dump_parsed.py](./scripts/every_minute_modsec_dump_parsed.py): Python script that parses ModSecurity raw dumps into a format readable by the Fail2ban banning system;\n- [update_nginx_conf.sh](./scripts/update_nginx_conf.sh): script that, on the first run on a new EC2 instance, updates '[nginx.conf](./docker/nginx.conf)' with the instance's public IP, generates a self-signed SSL certificate with its key, and places them inside the 'gateway' container;\n- [update_nginx_conf_after_ecr_pull.sh](/scripts/update_nginx_conf_after_ecr_pull.sh): 
script that updates the Nginx configuration with the instance's public IP, tests the syntax of '[nginx.conf](./docker/nginx.conf)', and restarts the updated container built from the 'gateway' image [pulled from AWS ECR](#ec2-part-of-workflow).\n\n### terraform\nContains all Terraform configurations for provisioning AWS resources (see the detailed description [here](#Terraform-Setup)):\n- [main.tf](./terraform/main.tf): main Terraform configuration file that defines the EC2 instances;\n- [elb.tf](./terraform/elb.tf): defines the AWS Application Load Balancer and its listeners;\n- [iam.tf](./terraform/iam.tf): defines the IAM roles and permissions for the EC2 instances;\n- [ip_ranges.tf](./terraform/ip_ranges.tf): manages the IP ranges allowed for EC2 Instance Connect;\n- [output.tf](./terraform/output.tf): defines output values such as DNS names and instance IDs after provisioning;\n- [provider.tf](./terraform/provider.tf): specifies the AWS provider for Terraform to interact with AWS resources;\n- [route53.tf](./terraform/route53.tf): configures Route 53 for DNS management;\n- [s3.tf](./terraform/s3.tf): provisions the S3 bucket for storing web app assets;\n- [security_groups.tf](./terraform/security_groups.tf): defines security group rules for the web application.\n\n### utilities\nContains additional utility scripts for managing the deployed infrastructure:\n- [destroy.ps1](./utilities/destroy.ps1): PowerShell script for automatically deleting the entire provisioned AWS infrastructure;\n- [destroy.sh](./utilities/destroy.sh): Bash script for automatically deleting the entire provisioned AWS infrastructure;\n- [test_aws_balancer.sh](./utilities/test_aws_balancer.sh): Bash script for testing the provisioned Application Load Balancer (ALB).\n\n### root level files\n- [deploy.ps1](./deploy.ps1): PowerShell script for automating the deployment process;\n- [deploy.sh](./deploy.sh): Bash script for automating the deployment process;\n- [jest.config.js](./jest.config.js): configuration for 
Jest used for integration testing in the project;\n- [.eslintrc.json](./.eslintrc.json): configuration for ESLint used for source code syntax check in the project;\n- Readme.md: documentation file (you are reading it right now).\n\n## Terraform setup\n### AWS EC2 instances\n([file](./terraform/main.tf))\n- aws_instance.web_app_instance: provisions two EC2 instances using a custom AMI. The instances are configured to automatically run the '[install.sh](./scripts/install.sh)' script in the background using user data. The instances are associated with an IAM role to allow access to AWS services, and they are secured by a VPC security group. Public subnets are dynamically chosen for each instance.\n\n### Application load balancer (ALB)\n([file](./terraform/elb.tf))\n- aws_lb.web_app_alb: creates an external application load balancer to distribute incoming traffic to the EC2 instances. The ALB is associated with a security group and spans across public subnets in the default VPC. Two listeners are configured:\n  - aws_lb_listener.http: handles HTTP requests, either forwarding traffic to the target group or redirecting to HTTPS (based on SSL certificate availability).\n  - aws_lb_listener.https: handles HTTPS requests if an SSL certificate is provided. It forwards traffic to the secure target group.\n\n### Target groups\n([file](./terraform/elb.tf))\n- aws_lb_target_group.http/https: these manage the HTTP and HTTPS traffic. 
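A target group can also be exercised by hand, e.g. while debugging a deployment, by hitting the same /healthcheck endpoint it probes. A minimal retry-loop sketch (the URL, attempt count, and delay below are illustrative, not values from the Terraform config):

```shell
#!/usr/bin/env bash
# Hypothetical helper: poll a health endpoint the way an ALB target group
# does, retrying until it answers or the attempt budget runs out.
wait_healthy() {
  local url="$1" attempts="${2:-5}" delay="${3:-1}"
  local i
  for ((i = 1; i <= attempts; i++)); do
    # -f: fail on HTTP errors; -s: silent; -k: accept self-signed certs.
    if curl -fsk --max-time 5 "$url" >/dev/null 2>&1; then
      echo "healthy after ${i} attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "unhealthy after ${attempts} attempts" >&2
  return 1
}

# Example (would run against a provisioned instance's public IP):
# wait_healthy "https://$INSTANCE_IP/healthcheck" 10 3
```

The `-k` flag matters here because the instances fall back to a self-signed certificate when no custom domain is supplied.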
Each target group is connected to the EC2 instances and performs health checks on the [/healthcheck](./docker/nginx.conf) endpoint to ensure instances are available and healthy.\n\n### IAM roles and policies\n([file](./terraform/iam.tf))\n- aws_iam_role.ec2_role: defines an IAM role with permissions for EC2, S3, ECR, and EC2 Instance Connect services.\n- aws_iam_instance_profile.ec2_instance_profile: links the IAM role to the EC2 instances, granting them the necessary permissions to interact with AWS services.\n\n### Security groups\n([file](./terraform/security_groups.tf))\n- aws_security_group.web_app_sg: configures security rules for the EC2 instances and ALB, allowing traffic on ports 22 (SSH), 80 (HTTP), 443 (HTTPS), 3000, and 9999. It includes additional rules to allow EC2 Instance Connect for specific IP ranges.\n\n### S3 bucket\n([file](./terraform/s3.tf))\n- aws_s3_bucket.web_app_bucket: creates an S3 bucket to store web application assets, such as graphical files. The bucket is tagged and will be destroyed along with the infrastructure.\n\n### Route 53\n([file](./terraform/route53.tf))\n- aws_route53_zone.web_app_zone: configures a Route 53 DNS zone for the application domain, if a domain name is provided.\n- aws_route53_record.web_app_record: creates a DNS A-record to route traffic to the ALB, associating the domain name with the ALB DNS.\n\n### IP whitelisting\n([file](./terraform/ip_ranges.tf))\n- ip_ranges.tf: defines a set of CIDR blocks representing AWS region-specific IP ranges and the user's public IP. These are used to allow SSH access via EC2 Instance Connect.\n\n### Outputs\n([file](./terraform/output.tf))\n- captures and outputs important details such as the ALB DNS name, instance IDs, instance public IPs, and the dynamically generated S3 bucket name. 
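A deploy script can consume these values via `terraform output`. A sketch of the idea, using a canned JSON payload in place of a real Terraform run (the output names here are illustrative; the actual names are defined in output.tf):

```shell
#!/usr/bin/env bash
# Sketch of how a script might consume Terraform outputs. The names and
# values below are illustrative stand-ins; a real run would instead use:
#   outputs_json=$(terraform -chdir=terraform output -json)
outputs_json='{
  "alb_dns_name":   {"value": "web-app-alb-123456.eu-central-1.elb.amazonaws.com"},
  "s3_bucket_name": {"value": "web-app-assets-abc123"}
}'

# Minimal extraction without jq: pull the "value" that follows a given key.
get_output() {
  printf '%s' "$outputs_json" \
    | tr -d ' \n' \
    | sed -n "s/.*\"$1\":{\"value\":\"\([^\"]*\)\".*/\1/p"
}

ALB_DNS=$(get_output alb_dns_name)
echo "Load balancer DNS: $ALB_DNS"
```

With `jq` installed, the extraction collapses to `jq -r '.alb_dns_name.value'`; the sed variant just avoids the extra dependency.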
This information is used for configuration and deployment steps.

## Monitoring stack
([directory](./docker/monitoring_stack/))

The monitoring stack of this project comprises a wide variety of services — Prometheus, Loki, AWS CloudWatch-related metrics (EC2, VPC, S3, SES), Process Exporter, Postgres Exporter, and Prom-client — combined with Grafana's dashboard capabilities. Overall there are 7 logical sub-dashboards:

![All sub-dashboards collapsed](./docker/monitoring_stack/screenshots/1-all_collapsed_grafana_panels.png)

<p align="center"><i>Image 5. All sub-dashboards collapsed</i></p>

![Quick server vitals](./docker/monitoring_stack/screenshots/2-quick_server_grafana_panels.png)

<p align="center"><i>Image 6. Quick server vitals</i></p>

![Gateway logs with geographical mapping, syslog and Fail2ban logs](./docker/monitoring_stack/screenshots/3-server_logs_grafana_panels.png)

<p align="center"><i>Image 7. Gateway logs with geographical mapping, syslog and Fail2ban logs</i></p>

![Detailed server metrics over time](./docker/monitoring_stack/screenshots/4-detailed_server_grafana_panels.png)

<p align="center"><i>Image 8. Detailed server metrics over time</i></p>

![Detailed server logs](./docker/monitoring_stack/screenshots/5-server_logs_grafana_panels_2.png)

<p align="center"><i>Image 9. Detailed server logs</i></p>

![Detailed Docker metrics](./docker/monitoring_stack/screenshots/6-docker_grafana_panels.png)

<p align="center"><i>Image 10. Detailed Docker metrics</i></p>

![Detailed Docker metrics over time](./docker/monitoring_stack/screenshots/7-docker_grafana_panels_2.png)

<p align="center"><i>Image 11.
Detailed Docker metrics over time</i></p>

![Detailed EC2 Disk IOPS](./docker/monitoring_stack/screenshots/8-disk_iops_grafana_panels.png)

<p align="center"><i>Image 12. Detailed EC2 Disk IOPS</i></p>

![Detailed PostgreSQL DB connections](./docker/monitoring_stack/screenshots/9-postgresql_grafana_panels.png)

<p align="center"><i>Image 13. Detailed PostgreSQL DB connections</i></p>

![Detailed Node.js metrics in web app](./docker/monitoring_stack/screenshots/10-webapp_grafana_panels.png)

<p align="center"><i>Image 14. Detailed Node.js metrics in web app</i></p>

### AWS CloudWatch metrics in Grafana
This project's custom Grafana dashboard includes AWS metrics for Billing (current charges in quick and detailed views, plus the last 30 days of history), numerous S3 bucket metrics for thorough monitoring of object assets, and metrics for the SES mailing service. They require configuring a separate IAM user or policy in the AWS account, defining the respective credentials in Grafana, and adding CloudWatch as a data source.

![AWS Billing, S3 and SES metrics](./docker/monitoring_stack/screenshots/11-AWS_grafana_panels.png)

<p align="center"><i>Image 15. AWS Billing, S3 and SES metrics</i></p>

## CI/CD pipeline
It consists of two parts:

  1. GitHub Actions -> AWS ECR
  2.
     AWS ECR -> AWS EC2

### GitHub Actions workflow
This [workflow](./.github/workflows/docker-deploy.yml) automates building and testing Docker containers in the GitHub Actions environment whenever changes are pushed to the 'main-configured' branch, followed by deploying the corresponding images to AWS ECR:

```mermaid
graph TD;
    A[Triggered by push to 'main-configured' branch]

    A --> C1[Checkout code]
    C1 --> C2[Configure AWS credentials]
    C2 --> C3[Log in to AWS ECR]

    C3 --> D{Check ECR repository existence}
    D -- Repo exists --> E[Detect changed files]
    D -- Repo not found --> F[Exit gracefully]

    E --> G[Create .env file for Docker Compose]

    G --> H1[Install Docker Compose]
    H1 --> H2[Set up Docker Buildx]
    H2 --> H3{Which container changed?}

    H3 -- 'frontend' --> I1[Set up Node.js]
    I1 --> I2[Clean Node cache]
    I2 --> I3[Install dependencies]
    I3 --> I4[Run ESLint on source code]
    I4 --> I5[Run Jest integration tests]
    I5 --> I6[Install Cypress]
    I6 --> I7[Start Cypress test server]
    I7 --> I8[Run Cypress e2e tests]
    I8 --> I9[Stop Cypress test server]

    I9 --> I10[Build 'frontend' container]
    I10 --> I11[Push 'frontend' image to AWS ECR]
    I11 --> I12[Stop and remove 'frontend' container]

    H3 -- 'postgres' --> J3[Build 'postgres' container]
    J3 --> J4[Start 'postgres' container for verification]
    J4 --> J5[Run PostgreSQL integration tests]
    J5 --> J6[Push 'postgres' image to AWS ECR]
    J6 --> J7[Stop and remove 'postgres' container]

    H3 -- 'gateway' --> K2[Build 'gateway' container]
    K2 --> K3[Start 'gateway' container for verification]
    K3 --> K4[Push 'gateway' image to AWS ECR]
    K4 --> K5[Stop and remove 'gateway' container]

    F --> N[End workflow]
    J7 --> O[Clean up Docker resources]
    I12 --> O
    K5 --> O
    O --> N[End workflow]
```
<p align="center"><i>Image 16. GitHub Actions workflow</i></p>

### EC2 part of workflow
The second part of the CI/CD pipeline, outlined in '[install.sh](./scripts/install.sh)', automates pulling the respective images from AWS ECR, along with building and testing (for 'gateway') the containers on the AWS EC2 instances:

```mermaid
graph TD;
    O[... 'install.sh' runs first time] --> A[Fetch ECR repository name]
    A --> B{Is ECR repository found?}
    B -- No --> C[Continue running 'install.sh'...]
    B -- Yes --> D[Extract repository name]
    D --> F1[Proceed with ECR-related steps]

    F1 --> H[Docker login to AWS ECR]
    H -- Login failed --> I[Exit with error]
    H -- Login succeeded --> K[Setup Docker run cron jobs]

    K --> L{Cron jobs setup successful?}
    L -- No --> M[Exit with error]

    L -- Yes --> N[Cron job setup succeeded] --> C[Continue running 'install.sh'...]

    N --> P1[Setup cron job for 'postgres' container]
    P1 --> P2[Pull image from AWS ECR]
    P2 --> P3[Replace with Docker run]
    P3 --> P4[Start container]

    N --> Q1[Setup cron job for 'frontend' container]
    Q1 --> Q2[Pull image from AWS ECR]
    Q2 --> Q3[Replace with Docker run]
    Q3 --> Q4[Start container]

    N --> R1[Setup cron job for 'gateway' container]
    R1 --> R2[Pull image from AWS ECR]
    R2 --> R3[Replace with Docker run]
    R3 --> R4[Start container]
    R4 --> R5[Run 'update_nginx_conf_after_ecr_pull.sh']
```
<p align="center"><i>Image 17.
EC2 part of workflow</i></p>

## Testing
### 'frontend' container testing
Within the scope of this project, the 'frontend' container undergoes the most thorough testing, with source code checks ([ESLint](./.eslintrc.json), [Jest](./jest.config.js)) and end-to-end tests under near-real conditions ([Cypress](./cypress/cypress.config.js)):
#### Raw code syntax check (ESLint)
- [.eslintrc.json](./.eslintrc.json): runs across the codebase to detect potential syntax errors and code style violations before the containers are built.

#### Integration testing (Jest)
- [api.test.js](./docker/frontend/api.test.js): tests the behavior of the API endpoints ([GET /api/questions, POST /api/check-answers](./docker/frontend/server.js)) with 42 tests in total, focusing on endpoint functionality and response structure;
- [server.test.js](./docker/frontend/__tests__/server.test.js): tests the server's responses to requests and ensures static files and questions are handled correctly, with 41 tests in total.

#### End-to-end testing (Cypress)
##### UI tests
- [ui.spec.js](./cypress/integration/ui/ui.spec.js): tests the static elements of the page and ensures correct visibility of buttons (Check/Refresh) and loading indicators (basic page structure, static content validation).
##### Interaction tests
- [interaction.spec.js](./cypress/integration/interaction/interaction.spec.js): tests dynamic elements such as user interactions with quiz questions and options; validates that the correct input types (radio/checkbox) are rendered and that buttons are enabled based on user input (ensures interactive elements behave as expected, including option selection and question-answer validation).

### 'postgres' container testing
The 'postgres' container's '[Sample](./docker/postgres-init/Sample.sql)' database is validated for integrity and accuracy using a custom [SQL script](./docker/test_db.sql):
- [test_db.sql](./docker/test_db.sql): checks the
'[Sample](./docker/postgres-init/Sample.sql)' database against a set of defined rules; it verifies schema, table, and column existence, and ensures the correct number of records (40 questions, 160 options) is present in the database, providing detailed error messages in case of discrepancies.

### 'gateway' container testing
The 'gateway' container is verified within the GitHub Actions workflow and tested for '[nginx.conf](./docker/nginx.conf)' syntax correctness on each EC2 instance after [pulling the 'gateway' image from AWS ECR](#ec2-part-of-workflow):
- Validation: brings the 'gateway' container up after it is built during the GitHub Actions workflow run;
- Nginx syntax check: carried out individually on each provisioned EC2 instance after the 'gateway' container is pulled from AWS ECR.

## Future work
### Scalability
- Dynamically adjust the number of provisioned EC2 instances based on user input (as opposed to the 2 hardcoded EC2 instances now);

### UX
- In the deploy scripts, automatically assign a single core IAM policy and fetch additional policies without prior manual work in the AWS account;
- Replace self-signed certificates with "Let's Encrypt" ones when no custom domain is provided, to eliminate browser warnings;

### Security
- Automate and schedule periodic updates (twice a week) of the MaxMind GeoIP database used by the ModSecurity module on the Nginx server;

### CI/CD
- Integrate a notification system (e.g., a Postfix server on EC2 + AWS SES) to alert when updates are available in the pipeline;
- In the deploy script, automate lifecycle policy attachment for AWS ECR images so that they do not incur storage charges;
- In the GitHub Actions workflow, implement container rebuild caching to improve efficiency;
- In the GitHub Actions workflow, update any deprecated modules to their latest versions.

## Licensing
This work is licensed under the GNU General Public License v3.0 ([GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html)).

- You are free to use,
modify, and distribute this software, as long as any modified versions are also distributed under the same GPL license.
- You must give appropriate credit to the original author.
- This software is distributed without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.

For the full terms, see the [LICENSE](./LICENSE.txt) file in this repository or visit [https://www.gnu.org/licenses/gpl-3.0.html](https://www.gnu.org/licenses/gpl-3.0.html).

![GPLv3 License](https://img.shields.io/badge/License-GPL%20v3-blue.svg)

## Support this project
If you enjoyed this project or found it useful, consider supporting me! Your donations help me to maintain it and develop future improvements.

[![Donate with PayPal button](https://www.paypalobjects.com/en_US/i/btn/btn_donate_LG.gif)](https://www.paypal.com/donate?hosted_button_id=WWVH67M22965A)

[![Donate with Crypto](https://img.shields.io/badge/Donate%20with-Crypto-green?style=flat-square&logo=bitcoin)](./qr_crypto.png)