{"id":18596140,"url":"https://github.com/resource-watch/api-infrastructure","last_synced_at":"2025-11-02T06:30:21.022Z","repository":{"id":21855845,"uuid":"94240179","full_name":"resource-watch/api-infrastructure","owner":"resource-watch","description":null,"archived":false,"fork":false,"pushed_at":"2024-12-04T20:33:34.000Z","size":27648,"stargazers_count":2,"open_issues_count":3,"forks_count":1,"subscribers_count":5,"default_branch":"production","last_synced_at":"2024-12-26T21:08:49.484Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/resource-watch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-06-13T17:37:08.000Z","updated_at":"2022-05-30T07:05:37.000Z","dependencies_parsed_at":"2023-02-12T21:15:38.599Z","dependency_job_id":"9f4ba36c-a9c6-4404-abbe-2a0ff7ee4feb","html_url":"https://github.com/resource-watch/api-infrastructure","commit_stats":{"total_commits":662,"total_committers":9,"mean_commits":73.55555555555556,"dds":0.3474320241691843,"last_synced_commit":"06d726072b869be5836d50a931bace5fe663b31e"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/resource-watch%2Fapi-infrastructure","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/resource-watch%2Fapi-infrastructure/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/resource-watch%2Fapi-infrastructure/releases","manifests_url":"https://
repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/resource-watch%2Fapi-infrastructure/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/resource-watch","download_url":"https://codeload.github.com/resource-watch/api-infrastructure/tar.gz/refs/heads/production","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":239379325,"owners_count":19628684,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T01:23:13.649Z","updated_at":"2025-11-02T06:30:20.942Z","avatar_url":"https://github.com/resource-watch.png","language":"HTML","readme":"# Resource Watch API - Cluster setup\n\n**Important**: this repo uses [git lfs](https://git-lfs.github.com/).\n\nFor a description of the setup, see the infrastructure [section](https://resource-watch.github.io/doc-api/developer.html#infrastructure-configuration) of the developer documentation.\n\n## Setting up the AWS resources\n\nTo setup the cluster cloud resources, use the following command:\n\n```shell script\ncd ./terraform\nexport CLOUDFLARE_API_KEY=\u003ccloudflare api key\u003e CLOUDFLARE_EMAIL=\u003ccloudflare account email address\u003e\nterraform init -backend-config=vars/backend-\u003cenv\u003e.tfvars\nterraform plan -var-file=vars/terraform-\u003cenv\u003e.tfvars\n```\n\nSet `\u003cenv\u003e` to dev, staging or production - the environment that you're deploying to. 
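\n\nFor example, when targeting the `dev` environment, the same steps (with `dev` substituted for `\u003cenv\u003e`; the Cloudflare values are still placeholders you must supply) would look like:\n\n```shell script\ncd ./terraform\nexport CLOUDFLARE_API_KEY=\u003ccloudflare api key\u003e CLOUDFLARE_EMAIL=\u003ccloudflare account email address\u003e\nterraform init -backend-config=vars/backend-dev.tfvars\nterraform plan -var-file=vars/terraform-dev.tfvars\n```\n\n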
Each environment has a separate AWS account, isolating resources such as the bastion server, the Jenkins server, and the EKS Kubernetes cluster.\n\nIf you are configuring `dev` environment resources and will be modifying the Kubernetes infrastructure, you may want to bring the cluster out of hibernation by setting `hibernate = false` in `./vars/terraform-dev.tfvars`. Finally, apply your changes:\n\n```shell script\nterraform apply -var-file=vars/terraform-\u003cenv\u003e.tfvars\n```\n\nOn this last step, you'll be asked to confirm your action, as this is the step that actually modifies your cloud resources.\nDeploying the whole infrastructure may take about 15 minutes, so grab a drink.\n\nOnce it's done, you'll see output like this:\n\n```shell script\nOutputs:\n\naccount_id = \u003cyour aws account id\u003e\nbastion_hostname = ec2-18-234-188-9.compute-1.amazonaws.com\nenvironment = dev\njenkins_hostname = ec2-34-203-238-24.compute-1.amazonaws.com\nkube_configmap = apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: aws-auth\n  namespace: kube-system\ndata:\n  mapRoles: |\n    - rolearn: arn:aws:iam::843801476059:role/eks_manager\n      username: system:node:{{EC2PrivateDNSName}}\n      groups:\n        - system:masters\n    - rolearn: arn:aws:iam::843801476059:role/eks-node-group-admin\n      username: system:node:{{EC2PrivateDNSName}}\n      groups:\n        - system:bootstrappers\n        - system:nodes\n\n\nkubectl_config = # see also: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html\n\napiVersion: v1\nclusters:\n- cluster:\n    server: https://\u003crandom string\u003e.gr7.us-east-1.eks.amazonaws.com\n    certificate-authority-data: \u003crandom base64 string\u003e\n  name: core-k8s-cluster-dev\ncontexts:\n- context:\n    cluster: core-k8s-cluster-dev\n    user: aws-rw-dev\n  name: aws-rw-dev\nkind: Config\npreferences: {}\ncurrent-context: aws-rw-dev\nusers:\n- name: aws-rw-dev\n  user:\n    exec:\n      apiVersion: client.authentication.k8s.io/v1beta1\n      command: aws\n      
args:\n        - \"eks\"\n        - \"get-token\"\n        - \"--cluster-name\"\n        - \"core-k8s-cluster-dev\"\n        # - \"-r\"\n        # - \"\u003crole-arn\u003e\"\n      # env:\n        # - name: AWS_PROFILE\n        #   value: \"\u003caws-profile\u003e\"\n\nnat_gateway_ips = [\n  [\n    \"3.211.237.248\",\n    \"3.212.157.210\",\n    \"34.235.74.8\",\n    \"3.219.120.245\",\n    \"34.195.181.97\",\n    \"3.233.11.188\",\n  ],\n]\n```\n\nAt this point, most of your resources should already be provisioned, and some things will be wrapping up (for example, EC2 `userdata` scripts).\n\n## Accessing the Kubernetes cluster\n\nThe main resource you'll want to access at this stage is the bastion host. To do so, use ssh:\n\n```shell script\nssh ubuntu@\u003cbastion_hostname value from above\u003e\n```\n\nAssuming your public key was sent to the bastion host during the setup process, you should have access. Next, you'll want to configure access to the cluster. As the cluster is only available on the private VPC, you'll need to do so through the bastion host - hence the need to verify you have access to the bastion host.\n\nFrom here, there are multiple ways to proceed.\n\n### SSH tunnel\n\nPerhaps the most practical way to connect to the cluster is by creating an SSH tunnel that connects a local port to the cluster's API port, through the bastion. 
For this to work, a few things are needed:\n\n- Copy the `kubectl_config` settings from above into your local `~/.kube/config`\n- Modify the `server: https://\u003crandom string\u003e.gr7.us-east-1.eks.amazonaws.com` line by adding `:4433` at the end, so it looks like this: `server: https://\u003crandom string\u003e.gr7.us-east-1.eks.amazonaws.com:4433` (you can pick a different port if you want)\n- Modify your local `/etc/hosts` to include the following line: `127.0.0.1 \u003ceks api endpoint url\u003e`\n\nWith those in place, create the tunnel:\n\n```shell script\nssh -N -L 4433:\u003ceks api endpoint url\u003e:443 \u003cbastion user\u003e@\u003cbastion hostname\u003e\n```\n\n### Access from bastion\n\nAnother way to connect to the cluster is from a bash shell running on the bastion itself. However, this requires the bastion host to have access to the cluster. That is done using `kubectl` config - which is automatically taken care of during the cluster setup phase - and through IAM roles, which you need to configure using the following steps.\n\n**Disclaimer**: the next steps will have you add AWS credentials to the AWS CLI on the bastion host. This is a VERY BAD IDEA, and it's done here only as a temporary workaround. Be sure to remove the `~/.aws/credentials` file once you're done.\n\nRun `aws configure` and set the `AWS Access Key ID` and `AWS Secret Access Key` of the AWS user who created the cluster. If this was done correctly, you should now see the following output:\n\n```shell script\nubuntu@dev-bastion:~$ kubectl get pods\nNo resources found in default namespace.\n```\n\nNow that you have access to the cluster, you need to configure it to allow access based on an AWS IAM role, and not just for the user who created the cluster. To do so, edit the corresponding Kubernetes configmap:\n\n```shell script\nKUBE_EDITOR=\"nano\" kubectl edit configmap aws-auth -n kube-system\n```\n\nYou'll need to replace the `data` section in this document with the one from the `kube_configmap` output of the `terraform apply` command. 
Saving your changes and exiting the editor will push the new configuration to the cluster.\n\nNext, delete the `~/.aws/credentials` file on the bastion - this ensures that no static authentication information remains on the bastion host, and that all access management is done using IAM Roles, which is the recommended way.\n\nYou should now have access to the cluster from the bastion host.\n\n## Kubernetes base configuration\n\nWith access configured as above, and with the SSH tunnel active, you can now proceed to configure the Kubernetes cluster itself.\n\n### Namespaces\n\nAt this point you should have the necessary cloud resources provisioned and access to them configured. The next steps will provision the logical resources (databases and other dependencies) on top of which the API will operate.\n\nThe first step is to create the necessary [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) using the following command from inside the `terraform-k8s-infrastructure` folder:\n\n```shell script\nterraform apply -var-file=vars/terraform-\u003cyour environment\u003e.tfvars -target=module.k8s_namespaces\n```\n\n### Secrets\n\nOnce the namespaces are created, you should apply all the necessary [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/), as some of them will be needed by the components we'll provision in the next step. Refer to the secrets repository for more details.\n\n### Kubernetes configuration\n\nAfter the necessary secrets are created, you can deploy the rest of the infrastructure using the following command:\n\n```shell script\nterraform apply -var-file=vars/terraform-\u003cyour environment\u003e.tfvars\n```\n\nThe command above will provision most of the resources needed by the API. 
However, some resources will still require manual deployment after this - check the `k8s-aws` folder and its sub-folders for more details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fresource-watch%2Fapi-infrastructure","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fresource-watch%2Fapi-infrastructure","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fresource-watch%2Fapi-infrastructure/lists"}