{"id":20185146,"url":"https://github.com/aeternity/infrastructure","last_synced_at":"2026-03-02T15:03:55.218Z","repository":{"id":27009547,"uuid":"110813628","full_name":"aeternity/infrastructure","owner":"aeternity","description":null,"archived":false,"fork":false,"pushed_at":"2025-10-07T13:09:45.000Z","size":891,"stargazers_count":23,"open_issues_count":10,"forks_count":12,"subscribers_count":21,"default_branch":"master","last_synced_at":"2025-10-07T15:13:34.419Z","etag":null,"topics":["devops"],"latest_commit_sha":null,"homepage":null,"language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"isc","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/aeternity.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2017-11-15T09:35:36.000Z","updated_at":"2025-10-07T13:09:49.000Z","dependencies_parsed_at":"2024-02-27T07:56:34.413Z","dependency_job_id":"6e84753c-bcfb-49ce-b15e-8ab8fe92e3b7","html_url":"https://github.com/aeternity/infrastructure","commit_stats":null,"previous_names":[],"tags_count":60,"template":false,"template_full_name":null,"purl":"pkg:github/aeternity/infrastructure","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aeternity%2Finfrastructure","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aeternity%2Finfrastructure/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aeternity%2Finfrastructure/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aeternity%2Fi
nfrastructure/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/aeternity","download_url":"https://codeload.github.com/aeternity/infrastructure/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aeternity%2Finfrastructure/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29913655,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-27T19:37:42.220Z","status":"ssl_error","status_checked_at":"2026-02-27T19:37:41.463Z","response_time":57,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["devops"],"created_at":"2024-11-14T03:11:42.474Z","updated_at":"2026-02-27T21:01:47.831Z","avatar_url":"https://github.com/aeternity.png","language":"Shell","readme":"# Infrastructure management automation for æternity nodes\n\nInfrastructure is orchestrated with [Terraform](https://www.terraform.io) in the following repositories:\n- [Mainnet seed nodes](https://github.com/aeternity/terraform-aws-mainnet)\n- [Mainnet API gateway](https://github.com/aeternity/terraform-aws-mainnet-api)\n- [Testnet seed and miner nodes](https://github.com/aeternity/terraform-aws-testnet)\n- [Devnet environments (integration, next, dev1, dev2, etc...)](https://github.com/aeternity/terraform-aws-devnet)\n- [Miscellaneous services (release repository, backups, 
etc...)](https://github.com/aeternity/terraform-aws-misc)\n\nThis repository contains Ansible playbooks and scripts to bootstrap, manage, maintain and deploy nodes.\nAnsible playbooks are run against [dynamic host inventories](http://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html).\n\nThe documentation below is meant for manual testing and additional details; it is already integrated into the CircleCI workflow.\n\n## Requirements\n\nThe only requirement is Docker. All the libraries and packages are built in the docker image.\nIf for some reason one needs to set up the requirements on the host system, see the Dockerfile.\n\n## Getting started\n\nThis is intended to be used as a fast setup recipe; for additional details read the documentation below.\n\nSetup Vault authentication:\n\n```bash\nexport AE_VAULT_ADDR=https://the.vault.address/\nexport AE_VAULT_GITHUB_TOKEN=your_personal_github_token\n```\n\nRun the container:\n\n```bash\ndocker pull aeternity/infrastructure\ndocker run -it -e AE_VAULT_ADDR -e AE_VAULT_GITHUB_TOKEN aeternity/infrastructure\n```\n\nMake sure there are no authentication errors after running the container.\n\nSSH to any host:\n\n```bash\nmake cert\nssh aeternity@192.168.1.1\n```\n\n## Credentials\n\nAll secrets are managed with [Hashicorp Vault](https://www.vaultproject.io),\nso only authentication to Vault must be configured explicitly. It needs an address and authentication secret(s):\n\n- Vault address (can be found in the private communication channels)\n    * The Vault server address can be set via the `AE_VAULT_ADDR` environment variable.\n- Vault secret can be provided by any of the following methods:\n    - [GitHub Auth](https://www.vaultproject.io/docs/auth/github.html) by using a [GitHub personal token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/#creating-a-token)\n    set as the `AE_VAULT_GITHUB_TOKEN` environment variable. 
Any valid GitHub access token with the read:org scope can be used for authentication.\n    - [AppRole Auth](https://www.vaultproject.io/docs/auth/approle.html) set as the `VAULT_ROLE_ID` and `VAULT_SECRET_ID` environment variables.\n    - [Token Auth](https://www.vaultproject.io/docs/auth/token.html) by setting the `VAULT_AUTH_TOKEN` environment variable (translated to `VAULT_TOKEN` by the docker entry point). `VAULT_AUTH_TOKEN` takes the highest priority among the credentials.\n\nAccess to secrets is automatically set based on the Vault policies of the authenticated account.\n\n### Token refresh\n\nVault tokens expire after a certain amount of time. To continue working one MUST refresh the token:\n\n```bash\nmake -B secrets\n```\n\n## Docker image\n\nA Docker image `aeternity/infrastructure` is built and published to DockerHub. To use the image one should configure all the required credentials as documented above and run the container (always make sure you have the latest docker image):\n\n```bash\ndocker pull aeternity/infrastructure\ndocker run -it -e AE_VAULT_ADDR -e AE_VAULT_GITHUB_TOKEN aeternity/infrastructure\n```\n\nFor convenience, all the environment variables are listed in the `env.list` file, which can be used instead of an explicit CLI variables list;\nnote that the command below is meant to be run from a clone of this repository:\n\n```bash\ndocker run -it --env-file env.list aeternity/infrastructure\n```\n\n### SSH\n\nDepending on the Vault authentication token permissions, one can ssh to any node they have access to by running:\n\n```bash\nmake ssh HOST=192.168.1.1\n```\n\n#### Certificates\n\nSSH certificates (and keys) can be explicitly generated by running:\n\n```bash\nmake cert\n```\n\nThen the regular ssh/scp commands can be run:\n```bash\nssh aeternity@192.168.1.1\n```\n\n#### Users\n\nThe `ssh` and `cert` targets are shorthands that actually run `ssh-aeternity` and `cert-aeternity`.\nNote the `ssh-%` and `cert-%` target suffix; it can be any supported node username, 
e.g. `ssh-master`.\nFor example, to ssh with the `master` user (given the Vault token has sufficient permissions):\n```bash\nmake ssh-master HOST=192.168.1.1\n```\n\n## Ansible playbooks\n\nYou can run any playbook from `ansible/` using `make ansible/\u003cplaybook\u003e.yml`.\n\nMost of the playbooks can be run with aliases (used in the following examples).\n\nThe playbooks can be controlled by certain environment variables.\n\nMost playbooks require `DEPLOY_ENV`, which is the deployment environment of the node instance.\n\nHere is a list of other optional vars that can be passed to all playbooks:\n\n- `CONFIG_KEY` - [Vault configuration env](#vault-node-ansible-configuration), in cases when the config env includes a region or does not match `DEPLOY_ENV` (default: `$DEPLOY_ENV`)\n- `DEPLOY_CONFIG` - Specify a local file to use instead of an autogenerated config from vault. *NOTE: The file should not be located in the vault output path (`/tmp/config/`), else it will be regenerated.*\n- `LIMIT` - Ansible's `--limit` option (default: `tag_env_$DEPLOY_ENV:\u0026tag_role_aenode`)\n- `HOST` - Pass an IP (or a comma-separated list) to use specific hosts\n   - This will ignore `LIMIT` (uses ansible's `-i` instead of `--limit`).\n   - Make sure you run `make list-inventory` first.\n- `PYTHON` - Full path of the python interpreter (default: `/usr/bin/python3`)\n- `ANSIBLE_EXTRA_PARAMS` - Additional params to append to the `ansible-playbook` command\n    (e.g. 
`ANSIBLE_EXTRA_PARAMS=--tags=task_tag -e var=val`)\n\nCertain playbooks require additional vars, see below.\n\n### SSH setup\n\nTo run any of the Ansible playbooks, an SSH certificate (and keys) must be set up in advance.\nDepending on the playbook, it requires either `aeternity` or `master` SSH remote user access.\n\nBoth can be set up by running:\n```bash\nmake cert-aeternity\n```\n\nand/or\n\n```bash\nmake cert-master\n```\n\nPlease note that only devops are authorized to request `master` user certificates.\n\n### Ansible dynamic inventory\n\nCheck that your AWS credentials are set up and the dynamic inventory is working as expected:\n```bash\ncd ansible \u0026\u0026 ansible-inventory --list\n```\n\n### List inventory\n\nGet a list of the ansible inventory grouped by seed nodes and peers:\n\n```bash\nmake list-inventory\n```\n\nInventory data is stored in the local file `ansible/inventory-list.json`. To refresh it you can run `make -B list-inventory`.\n\n### Setup\n\nTo set up the environment of nodes, `make setup` can be used,\nfor example to set up the `integration` environment nodes run:\n```bash\nmake setup DEPLOY_ENV=integration\n```\n\nNodes are usually already set up during the bootstrap process of environment creation and maintenance.\n\n### Manage nodes\n\nStart, stop, restart or ping nodes by running:\n```bash\nmake manage-node DEPLOY_ENV=integration CMD=start\nmake manage-node DEPLOY_ENV=integration CMD=stop\nmake manage-node DEPLOY_ENV=integration CMD=restart\nmake manage-node DEPLOY_ENV=integration CMD=ping\n```\n\n### Deploy\n\nTo deploy an aeternity package run:\n```bash\nexport PACKAGE=https://github.com/aeternity/aeternity/releases/download/v1.4.0/aeternity-1.4.0-ubuntu-x86_64.tar.gz\nmake deploy DEPLOY_ENV=integration\n```\n\nAdditional parameters:\n- DEPLOY_DOWNTIME - schedule a downtime period (in seconds) to mute monitoring alerts (0 by default, i.e. 
monitors are not muted)\n- DEPLOY_COLOR - some environments might be colored to enable blue/green deployments (no limit by default)\n- DEPLOY_KIND - deploy to a different kind of nodes, currently seed / peer / api (no limit by default)\n- DEPLOY_REGION - deploy to a different AWS region, e.g. eu_west_2 (notice `_` instead of `-`)\n- DEPLOY_DB_VERSION - chain db directory suffix that can be bumped to purge the old db (1 by default)\n- ROLLING_UPDATE - Define the batch size for rolling updates: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#rolling-update-batch-size (default: 100%)\n\n#### Custom node config\n\nExample for deploying by specifying a config with region:\n```bash\nmake deploy DEPLOY_ENV=uat_mon CONFIG_KEY=uat_mon@ap-southeast-1\n```\n\nExample for deploying by specifying a custom node config file:\n```bash\nmake deploy DEPLOY_ENV=dev1 DEPLOY_CONFIG=/tmp/dev1.yml\n```\n\n#### Deploy to mainnet\n\nFull example for deploying the 1.4.0 release to all mainnet nodes:\n\n```bash\nDEPLOY_VERSION=1.4.0\nexport DEPLOY_ENV=main\nexport DEPLOY_DOWNTIME=1800 # 30 minutes\nexport DEPLOY_DB_VERSION=1 # Get the version with 'curl https://raw.githubusercontent.com/aeternity/aeternity/v${DEPLOY_VERSION}/deployment/DB_VERSION'\nexport PACKAGE=https://releases.aeternity.io/aeternity-${DEPLOY_VERSION}-ubuntu-x86_64.tar.gz\nexport ROLLING_UPDATE=100%\n\n# ROLLING_UPDATE is optional (default: 100%)\n# Define the batch size for rolling updates: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#rolling-update-batch-size\n# Examples:\n# - \"50%\" run on 50% of the nodes at a time\n# - \"1\" run on one node at a time\n# - '[1, 2]' run on 1 node, then on 2 nodes, etc.\n# - \"['10%', '50%']\" run on 10% of the nodes, then on 50%, etc.\n\nmake cert \u0026\u0026 make deploy\n```\n\n### Reset network of nodes\n\nTo reset a network of nodes run:\n```bash\nmake reset-net DEPLOY_ENV=integration\n```\n\nThe playbook does:\n\n- delete blockchain data\n- delete logs\n- delete chain 
keys\n\n### Vault node ansible configuration\n\nPlaybook configurations are stored in YAML format in the Vault KV store named 'secret'\nunder the path `secret2/aenode/config/\u003cENV_TAG\u003e` as the field `ansible_vars`.\n\n`\u003cENV_TAG\u003e` should be considered to be a node's \"configuration\" environment.\nFor instance, 'terraform' sets up certain nodes to look for `\u003cenv@region\u003e`, e.g. `main_mon@us-west-1`.\n\nFor each AWS instance, `\u003cENV_TAG\u003e` is generated from the EC2 `env` tag or is fully specified by the `bootstrap_config` tag.\nThe latter should point to the location of the vault's `ansible_vars` field (path only).\nIf `bootstrap_config` is missing, empty or set to the string `none`, the instance's `env` tag is used as a fallback.\n\nWhen there is no env config stored in the KV database (and the instance has no `bootstrap_config` tag), the bootstrapper will try to use a file in `/ansible/vars/\u003cenv\u003e.yml`.\n\nFor quick debugging of the KV config repository there are a few tools provided by make.\n\n#### List of all stored configurations\n\nTo get a list of all Vault-stored configuration \u003cENV_TAG\u003e's (environments) use:\n\n```bash\nmake vault-configs-list\n```\n\n#### Dumping configurations\n\nConfigurations will be downloaded as YAML files with the filename format `\u003cCONFIG_OUTPUT_DIR\u003e/\u003cENV_TAG\u003e.yml`.\n\nBy default `CONFIG_OUTPUT_DIR` is `/tmp/config`. You can provide it as a make variable.\n\nYou can save all configurations as separate `.yml` files in `/tmp/config`:\n\n```bash\nmake vault-configs-dump\n```\n\nTo dump a single configuration use `make vault-config-\u003cENV_TAG\u003e`. 
Example for `dev1`:\n\n```bash\nmake vault-config-dev1\n```\n\nTip: To dump and print the contents to the console you can use:\n\n```bash\ncat `make -s vault-config-test`\n```\n\n#### Additional options\n\nENV vars can control the defaults:\n- `CONFIG_OUTPUT_DIR` - Override the output path where configs are dumped (default: `/tmp/config`)\n- `VAULT_CONFIG_ROOT` - Vault root path where config envs are stored (default: `secret2/aenode/config`)\n- `VAULT_CONFIG_FIELD` - Name of the field where the configuration YAML is stored (default: `ansible_vars`)\n\nExample:\n\n```bash\nmake vault-configs-dump \\\n    CONFIG_OUTPUT_DIR=/some/dir \\\n    VAULT_CONFIG_ROOT=secret/some/config \\\n    VAULT_CONFIG_FIELD=special_config\n```\n\n### Mnesia snapshots\n\nTo snapshot a Mnesia database run:\n```bash\nmake mnesia_snapshot DEPLOY_ENV=integration\n```\n\nTo snapshot a specific node instance with IP 1.2.3.4:\n\n```bash\nmake mnesia_snapshot DEPLOY_ENV=integration HOST=1.2.3.4 SNAPSHOT_SUFFIX=1234\n```\n\nAdditional parameters:\n- SNAPSHOT_SUFFIX - snapshot filename suffix; by default it is the date and time of the run. The suffix can be used to set a unique filename.\n\n## Data share\n\nThe easiest way to share data between a container and the host is using [bind mounts](https://docs.docker.com/storage/bind-mounts/).\nFor example, during development it is much easier to edit the source on the host and run/test in the container;\nthat way you don't have to rebuild the container with each change to test it.\nBind mounting the source files to the container makes this possible:\n\n```bash\ndocker run -it --env-file env.list -v ${PWD}:/src -w /src aeternity/infrastructure\n```\n\nThe same method can be used to share data from the container to the host; the sharing is two-way.\n\nAn alternative method for one-shot transfers is the [docker copy command](https://docs.docker.com/engine/reference/commandline/cp/).\n\n## Testing\n\n### Dockerfile\n\nTo test any Dockerfile or (entrypoint) changes, a local 
container can be built and run:\n\n```bash\ndocker build -t aeternity/infrastructure:local .\ndocker run -it --env-file env.list aeternity/infrastructure:local\n```\n\n### Testing Ansible playbooks\n\n#### Dev environments\n\nThe easiest way to test Ansible playbooks is to run them against dev environments.\nFirst claim a dev environment in the chat and then run the playbook against it.\n\n#### Local docker\n\nLocal docker containers can be used for faster feedback loops at the price of some extra docker setup.\n\nTo enable network communication between the containers, all the containers that need to communicate have to be in the same docker network:\n\n```bash\ndocker network create aeternity\n```\n\nThe infrastructure docker image cannot be used because it is based on Alpine, while the aeternity node should run on Ubuntu.\nThus an Ubuntu-based container should be run; a convenient image with sshd is `rastasheep/ubuntu-sshd`.\nNote the `net` and `name` parameters:\n\n```bash\ndocker run -d --net aeternity --name aenode1804 aeternity/ubuntu-sshd:18.04\ndocker run -d --net aeternity --name aenode2204 aeternity/ubuntu-sshd:22.04\n```\n\nThe above commands will run Ubuntu 18.04 and Ubuntu 22.04 containers with the sshd daemon running,\nreachable by other hosts in the same docker network at the addresses `aenode1804.aeternity` and `aenode2204.aeternity`.\n\nOnce the test node is running, start an infrastructure container in the same docker network:\n\n```bash\ndocker run -it --env-file env.list -v ${PWD}:/src -w /src --net aeternity aeternity/infrastructure\n```\n\nRunning an Ansible playbook against the `aenode1804` and `aenode2204` containers requires setting [additional Ansible parameters](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#list-of-behavioral-inventory-parameters):\n\n- inventory host - e.g. 
`aenode2204.aeternity`\n- ssh user - `root`\n- ssh password - `root`\n- python interpreter - `/usr/bin/python3`\n\nFor example, to run the `setup.yml` playbook:\n\n```bash\ncd ansible \u0026\u0026 ansible-playbook -i aenode2204.aeternity, \\\n  -e ansible_user=root \\\n  -e ansible_ssh_pass=root \\\n  -e ansible_python_interpreter=/usr/bin/python3 \\\n  setup.yml\n```\n\nPlaybooks can also be run/tested on localhost with the docker-compose helpers.\nThis will run the infrastructure container and link it to a debian container.\n\n```bash\ndocker-compose up -d\n# attach to the local infrastructure container\ndocker attach infrastructure-local\n./local_playbook_run.sh deploy.yml # + add required parameters\n```\n\nCertain playbooks require additional variables to be provided. The most convenient way is to import a `.yml` file in the ansible env:\n\n```bash\n./local_playbook_run.sh deploy.yml \\\n    -e \"@/tmp/config/test.yml\" # + add required parameters\n```\n\n*Note: To create a .yml for the 'test' deployment env, you can use `make vault-config-test`.\nSee the [Dumping configurations](#dumping-configurations) section for more.*\n\nUse the \u003ckbd\u003eCTRL+p\u003c/kbd\u003e, \u003ckbd\u003eq\u003c/kbd\u003e sequence to detach from the container.\n\n### Integration tests\n\nAs this repository's Ansible playbooks and scripts are used to bootstrap the infrastructure, integration tests are mandatory.\nBy default they test the integration of the `master` branch of this repository with the latest stable version of the deploy Terraform module.\nIn the continuous integration service (CircleCI), the integration tests will be run against the branch under test.\n\nThey can be run by:\n\n```bash\ncd test/terraform\nterraform init \u0026\u0026 terraform apply\n```\n\nAfter the fleet is created, the expected functionality should be validated using the AWS console or CLI.\nFor a fast health check the Ansible playbook can be run; note that the above Terraform configuration creates an environment with the name `test`:\n\n```bash\ncd 
ansible \u0026\u0026 ansible-playbook health-check.yml --limit=tag_env_test\n```\n\nDon't forget to clean up the test environment after the tests are completed:\n\n```bash\ncd test/terraform \u0026\u0026 terraform destroy\n```\n\nAll of the above can be run with a single `make` wrapper:\n\n```bash\nmake integration-tests\n```\n\n*Note: these tests are run automatically each day by the CI server, and can be run by other users as well. To prevent collisions you can specify a unique environment ID (do not use special symbols other than \"_\", otherwise the tests will not pass):*\n\n```bash\nmake integration-tests TF_VAR_envid=tf_test_my_test_env\n```\n\nTo run the tests against your branch locally, first push your branch to the remote and then:\n\n```bash\nmake integration-tests TF_VAR_envid=tf_test_my_test_env TF_VAR_bootstrap_version=my_branch\n```\n\n### CircleCI configuration\n\nCircleCI provides a [CLI tool](https://circleci.com/docs/2.0/local-cli/) that can be used to validate the configuration and run jobs locally.\nHowever, as the local jobs runner has its limitations, to fully test a workflow it is acceptable to temporarily change the configuration (as little as possible) to trigger the test. 
However, such changes are not accepted on the `master` branch.\n\nTo debug failing jobs, CircleCI supports [SSH debug sessions](https://circleci.com/docs/2.0/ssh-access-jobs/); one can ssh to the build container/VM and inspect the environment.\n\n## Python requirements\n\nMain requirements are kept in the requirements.txt file, while the frozen full list is kept in requirements-lock.txt.\nIt can be updated by changing requirements.txt and regenerating the lock file:\n\n```bash\npip3 install -r requirements.txt\npip3 freeze \u003e requirements-lock.txt\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faeternity%2Finfrastructure","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faeternity%2Finfrastructure","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faeternity%2Finfrastructure/lists"}