{"id":15065244,"url":"https://github.com/aws-samples/kubernetes-for-java-developers","last_synced_at":"2025-04-04T06:09:36.780Z","repository":{"id":41065817,"uuid":"134336143","full_name":"aws-samples/kubernetes-for-java-developers","owner":"aws-samples","description":"A Day in Java Developer’s Life, with a taste of Kubernetes","archived":false,"fork":false,"pushed_at":"2022-10-27T09:21:24.000Z","size":60937,"stargazers_count":588,"open_issues_count":10,"forks_count":277,"subscribers_count":52,"default_branch":"master","last_synced_at":"2025-03-28T05:13:01.689Z","etag":null,"topics":["containers","docker","java","kubernetes","maven"],"latest_commit_sha":null,"homepage":null,"language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/aws-samples.png","metadata":{"files":{"readme":"readme.adoc","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-05-21T23:37:03.000Z","updated_at":"2025-02-27T08:35:28.000Z","dependencies_parsed_at":"2023-01-19T21:32:41.166Z","dependency_job_id":null,"html_url":"https://github.com/aws-samples/kubernetes-for-java-developers","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fkubernetes-for-java-developers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fkubernetes-for-java-developers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fkubernetes-for-java-developers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aws-samples%2Fkubernetes-for-java-developers/manifests","owner_ur
l":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/aws-samples","download_url":"https://codeload.github.com/aws-samples/kubernetes-for-java-developers/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247128753,"owners_count":20888235,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["containers","docker","java","kubernetes","maven"],"created_at":"2024-09-25T00:35:44.890Z","updated_at":"2025-04-04T06:09:36.767Z","avatar_url":"https://github.com/aws-samples.png","language":"Java","readme":"= A Day in Java Developer's Life, with a taste of Kubernetes\n:toc:\n\nDeploying your Java application in a Kubernetes cluster can feel like Alice in Wonderland: you keep going down the rabbit hole and don't know how to make that ride comfortable. This repository explains how a Java application can be deployed, tested, debugged and monitored in Kubernetes. It also covers canary deployments and deployment pipelines.\n\nA comprehensive hands-on course explaining these concepts is available at https://www.linkedin.com/learning/kubernetes-for-java-developers.\n\n== Application\n\nWe will use a simple Java application built using Spring Boot. The application publishes a REST endpoint that can be invoked at `http://{host}:{port}/hello`.\n\nThe source code is in the `app` directory.\n\n== Build and Test using Maven\n\n. Run the application:\n\n\tcd app\n\tmvn spring-boot:run\n\n. 
Test application\n\n\tcurl http://localhost:8080/hello\n\n== Build and Test using Docker\n\n=== Build Docker Image using multi-stage Dockerfile\n\n. Create `m2.tar.gz`:\n\n\tmvn -Dmaven.repo.local=./m2 clean package\n\ttar cvf m2.tar.gz ./m2\n\n. Create Docker image:\n\n\tdocker image build -t arungupta/greeting .\n+\nExplain multi-stage Dockerfile.\n\n=== Build Docker Image using https://github.com/GoogleContainerTools/jib[Jib]\n\n. Create Docker image:\n\n    mvn compile jib:build -Pjib\n\nThe benefits of using Jib over a multi-stage Dockerfile build include:\n\n* Don't need to install Docker or run a Docker daemon\n* Don't need to write a Dockerfile or build the archive of m2 dependencies\n* Much faster\n* Builds reproducibly\n\nThe above builds directly to your Docker registry. Alternatively, Jib can also build to a Docker daemon:\n\n    mvn compile jib:dockerBuild -Pjib -Ddocker.name=arungupta/greeting\n\n=== Test built container using Docker\n\n. Run container:\n\n\tdocker container run --name greeting -p 8080:8080 -d arungupta/greeting\n\n. Access application:\n\n\tcurl http://localhost:8080/hello\n\n. Remove container:\n\n\tdocker container rm -f greeting\n\n== Minimal Docker Image using Custom JRE\n\n. Download http://download.oracle.com/otn-pub/java/jdk/11.0.1+13/90cf5d8f270a4347a95050320eef3fb7/jdk-11.0.1_linux-x64_bin.rpm[JDK 11] and `scp` to an https://aws.amazon.com/marketplace/pp/B00635Y2IW/ref=mkt_ste_ec2_lw_os_win[Amazon Linux] instance\n. Install JDK 11:\n\n\tsudo yum install jdk-11.0.1_linux-x64_bin.rpm\n\n. Create a custom JRE for the Spring Boot application:\n\n\tcp target/app.war target/app.jar\n\tjlink \\\n\t\t--output myjre \\\n\t\t--add-modules $(jdeps --print-module-deps target/app.jar),\\\n\t\tjava.xml,jdk.unsupported,java.sql,java.naming,java.desktop,\\\n\t\tjava.management,java.security.jgss,java.instrument\n\n. Build Docker image using this custom JRE:\n\n\tdocker image build --file Dockerfile.jre -t arungupta/greeting:jre-slim .\n\n. 
List the Docker images and show the difference in sizes:\n\n\t[ec2-user@ip-172-31-21-7 app]$ docker image ls | grep greeting\n\tarungupta/greeting   jre-slim            9eed25582f36        6 seconds ago       162MB\n\tarungupta/greeting   latest              1b7c061dad60        10 hours ago        490MB\n\n. Run the container:\n\n\tdocker container run -d -p 8080:8080 arungupta/greeting:jre-slim\n\n. Access the application:\n\n\tcurl http://localhost:8080/hello\n\n== Build and Test using Kubernetes\n\nA single-node Kubernetes cluster can be easily created on a development machine using https://github.com/kubernetes/minikube[Minikube], https://microk8s.io/[MicroK8s], https://github.com/kubernetes-sigs/kind[KIND], and https://docs.docker.com/docker-for-mac/kubernetes/[Docker for Mac]. https://blog.tilt.dev//2019/08/21/why-does-developing-on-kubernetes-suck.html[Read] why using these local development environments does not truly represent your prod cluster.\n\nThis tutorial will use Docker for Mac.\n\n. Ensure that Kubernetes is enabled in Docker for Mac\n. Show the list of contexts:\n\n    kubectl config get-contexts\n\n. Configure the kubectl CLI for the Kubernetes cluster:\n\n\tkubectl config use-context docker-for-desktop\n\n. Install the Helm CLI:\n+\n\tbrew install kubernetes-helm\n+\nIf the Helm CLI is already installed, then use `brew upgrade kubernetes-helm`.\n+\n. Check the Helm version:\n\n\thelm version\n\n. Install Helm in the Kubernetes cluster:\n+\n\thelm init\n+\nIf Helm has already been initialized on the cluster, then you may have to upgrade Tiller:\n+\n\thelm init --upgrade\n+\n. Install the Helm chart:\n\n\tcd ..\n\thelm install --name myapp manifests/myapp\n\n. Check that the pod is running:\n\n\tkubectl get pods\n\n. Check that the service is up:\n\n\tkubectl get svc\n\n. 
Access the application:\n\n  \tcurl http://$(kubectl get svc/myapp-greeting \\\n  \t\t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello\n\n== Debug Docker and Kubernetes using IntelliJ\n\nYou can debug a Docker container and a Kubernetes Pod if they're running locally on your machine.\n\n=== Debug using Kubernetes\n\nThis was tested using Docker for Mac/Kubernetes. Use the previously deployed Helm chart.\n\n. Show the service:\n+\n\tkubectl get svc\n\tNAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE\n\tgreeting-service   LoadBalancer   10.101.39.100    \u003cpending\u003e     80:30854/TCP                    8m\n\tkubernetes         ClusterIP      10.96.0.1        \u003cnone\u003e        443/TCP                         90d\n\tmyapp-greeting     LoadBalancer   10.108.104.178   localhost     8080:32189/TCP,5005:31117/TCP   4s\n+\nHighlight that the debug port is also forwarded.\n+\n. In IntelliJ, `Run`, `Debug`, `Remote`:\n+\nimage::images/docker-debug1.png[]\n+\n. Click on `Debug`, and set up a breakpoint in the class:\n+\nimage::images/docker-debug2.png[]\n+\n. Access the application:\n\n\tcurl http://$(kubectl get svc/myapp-greeting \\\n\t\t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello\n\n. Show the breakpoint hit in IntelliJ:\n+\nimage::images/docker-debug3.png[]\n+\n. Delete the Helm chart:\n\n\thelm delete --purge myapp\n\n=== Debug using Docker\n\nThis was tested using Docker for Mac.\n\n. Run the container:\n\n\tdocker container run --name greeting -p 8080:8080 -p 5005:5005 -d arungupta/greeting\n\n. 
Check the container:\n\n\t$ docker container ls -a\n\tCONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                            NAMES\n\t724313157e3c        arungupta/greeting   \"java -jar app-swarm…\"   3 seconds ago       Up 2 seconds        0.0.0.0:5005-\u003e5005/tcp, 0.0.0.0:8080-\u003e8080/tcp   greeting\n\n. Set up a breakpoint as explained above.\n. Access the application using `curl http://localhost:8080/resources/greeting`.\n\n== Kubernetes Cluster on AWS\n\nThis application will be deployed to an https://aws.amazon.com/eks/[Amazon EKS] cluster. If you're looking for a self-paced workshop that provides detailed instructions to get you started with EKS, then https://eksworkshop.com[eksworkshop.com] is your place.\n\nLet's create the cluster first.\n\n. Install the http://eksctl.io/[eksctl] CLI:\n\n\tbrew install weaveworks/tap/eksctl\n\n. Create the EKS cluster:\n\n\teksctl create cluster --name myeks --nodes 4 --region us-west-2\n\t2018-10-25T13:45:38+02:00 [ℹ]  setting availability zones to [us-west-2a us-west-2c us-west-2b]\n\t2018-10-25T13:45:39+02:00 [ℹ]  using \"ami-0a54c984b9f908c81\" for nodes\n\t2018-10-25T13:45:39+02:00 [ℹ]  creating EKS cluster \"myeks\" in \"us-west-2\" region\n\t2018-10-25T13:45:39+02:00 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup\n\t2018-10-25T13:45:39+02:00 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=myeks'\n\t2018-10-25T13:45:39+02:00 [ℹ]  creating cluster stack \"eksctl-myeks-cluster\"\n\t2018-10-25T13:57:33+02:00 [ℹ]  creating nodegroup stack \"eksctl-myeks-nodegroup-0\"\n\t2018-10-25T14:01:18+02:00 [✔]  all EKS cluster resource for \"myeks\" had been created\n\t2018-10-25T14:01:18+02:00 [✔]  saved kubeconfig as \"/Users/argu/.kube/config\"\n\t2018-10-25T14:01:19+02:00 [ℹ]  the cluster has 0 nodes\n\t2018-10-25T14:01:19+02:00 
[ℹ]  waiting for at least 4 nodes to become ready\n\t2018-10-25T14:01:50+02:00 [ℹ]  the cluster has 4 nodes\n\t2018-10-25T14:01:50+02:00 [ℹ]  node \"ip-192-168-161-180.us-west-2.compute.internal\" is ready\n\t2018-10-25T14:01:50+02:00 [ℹ]  node \"ip-192-168-214-48.us-west-2.compute.internal\" is ready\n\t2018-10-25T14:01:50+02:00 [ℹ]  node \"ip-192-168-75-44.us-west-2.compute.internal\" is ready\n\t2018-10-25T14:01:50+02:00 [ℹ]  node \"ip-192-168-82-236.us-west-2.compute.internal\" is ready\n\t2018-10-25T14:01:52+02:00 [ℹ]  kubectl command should work with \"/Users/argu/.kube/config\", try 'kubectl get nodes'\n\t2018-10-25T14:01:52+02:00 [✔]  EKS cluster \"myeks\" in \"us-west-2\" region is ready\n\n. Check the nodes:\n\n\tkubectl get nodes\n\tNAME                                            STATUS   ROLES    AGE   VERSION\n\tip-192-168-161-180.us-west-2.compute.internal   Ready    \u003cnone\u003e   52s   v1.10.3\n\tip-192-168-214-48.us-west-2.compute.internal    Ready    \u003cnone\u003e   57s   v1.10.3\n\tip-192-168-75-44.us-west-2.compute.internal     Ready    \u003cnone\u003e   57s   v1.10.3\n\tip-192-168-82-236.us-west-2.compute.internal    Ready    \u003cnone\u003e   54s   v1.10.3\n\n. Get the list of contexts:\n+\n\tkubectl config get-contexts\n\tCURRENT   NAME                             CLUSTER                      AUTHINFO                         NAMESPACE\n\t*         arun@myeks.us-west-2.eksctl.io   myeks.us-west-2.eksctl.io    arun@myeks.us-west-2.eksctl.io   \n\t          docker-for-desktop               docker-for-desktop-cluster   docker-for-desktop               \n+\nAs indicated by `*`, the kubectl CLI configuration is updated to use the recently created cluster.\n\n== Migrate from Dev to Prod\n\n. Explicitly set the context:\n\n    kubectl config use-context arun@myeks.us-west-2.eksctl.io\n\n. 
Install Helm:\n\n\tkubectl -n kube-system create sa tiller\n\tkubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller\n\thelm init --service-account tiller\n\n. Check the list of pods:\n\n\tkubectl get pods -n kube-system\n\tNAME                            READY   STATUS    RESTARTS   AGE\n\taws-node-774jf                  1/1     Running   1          2m\n\taws-node-jrf5r                  1/1     Running   0          2m\n\taws-node-n46tw                  1/1     Running   0          2m\n\taws-node-slgns                  1/1     Running   0          2m\n\tkube-dns-7cc87d595-5tskv        3/3     Running   0          8m\n\tkube-proxy-2ghg6                1/1     Running   0          2m\n\tkube-proxy-hqxwg                1/1     Running   0          2m\n\tkube-proxy-lrwrr                1/1     Running   0          2m\n\tkube-proxy-x77tq                1/1     Running   0          2m\n\ttiller-deploy-895d57dd9-txqk4   1/1     Running   0          15s\n\n. Redeploy the application:\n\n\thelm install --name myapp manifests/myapp\n\n. Get the service:\n+\n\tkubectl get svc\n\tNAME             TYPE           CLUSTER-IP       EXTERNAL-IP                                                             PORT(S)                         AGE\n\tkubernetes       ClusterIP      10.100.0.1       \u003cnone\u003e                                                                  443/TCP                         17m\n\tmyapp-greeting   LoadBalancer   10.100.241.250   a8713338abef211e8970816cb629d414-71232674.us-east-1.elb.amazonaws.com   8080:32626/TCP,5005:30739/TCP   2m\n+\nIt shows that ports `8080` and `5005` are published and an Elastic Load Balancer is provisioned. It takes about three minutes for the load balancer to be ready.\n+\n. Access the application:\n\n\tcurl http://$(kubectl get svc/myapp-greeting \\\n\t\t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello\n\n. 
Delete the application:\n\n\thelm delete --purge myapp\n\n== Service Mesh using AWS App Mesh\n\nhttps://aws.amazon.com/app-mesh/[AWS App Mesh] is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh can be used with Amazon EKS or Kubernetes running on AWS. It also works with other container services offered by AWS, such as AWS Fargate and Amazon ECS, as well as with microservices deployed on Amazon EC2.\n\nA thorough, detailed example that shows how to use App Mesh with EKS is available at https://eksworkshop.com/servicemesh_with_appmesh/[Service Mesh with App Mesh]. This section provides a simplified setup using the configuration files from there.\n\nAll scripts used in this section are in the `manifests/appmesh` directory.\n\n=== Setup IAM Permissions\n\n. Set a variable `ROLE_NAME` to the IAM role for the EKS worker nodes:\n\n\tROLE_NAME=$(aws iam list-roles \\\n\t\t--query \\\n\t\t'Roles[?contains(RoleName,`eksctl-myeks-nodegroup`)].RoleName' --output text)\n\n. Set up permissions for the worker nodes:\n\n\taws iam attach-role-policy \\\n\t\t--role-name $ROLE_NAME \\\n\t\t--policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess\n\n=== Configure App Mesh\n\n. Enable sidecar injection by running the `create.sh` script from https://github.com/aws/aws-app-mesh-examples/tree/master/examples/apps/djapp/2_create_injector. You need to edit `ca-bundle.sh` and change `MESH_NAME` to `greeting-app`.\n. Create the `prod` namespace:\n\n\tkubectl create namespace prod\n\n. Label the `prod` namespace:\n\n\tkubectl label namespace prod appmesh.k8s.aws/sidecarInjectorWebhook=enabled\n\n. 
Create CRDs:\n\n\tkubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/mesh-definition.yaml\n\tkubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-node-definition.yaml\n\tkubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-service-definition.yaml\n\tkubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/controller-deployment.yaml\n\n=== Create App Mesh Components\n\n. Create a Mesh:\n\n\tkubectl create -f mesh.yaml\n\n. Create Virtual Nodes:\n\n\tkubectl create -f virtualnodes.yaml\n\n. Create Virtual Services:\n\n\tkubectl create -f virtualservice.yaml\n\n. Create deployments:\n\n\tkubectl create -f app-hello-howdy.yaml\n\n. Create services:\n\n\tkubectl create -f services.yaml\n\n=== Traffic Shifting\n\n. Find the name of the talker pod:\n\n\tTALKER_POD=$(kubectl get pods \\\n\t\t-nprod -lgreeting=talker \\\n\t\t-o jsonpath='{.items[0].metadata.name}')\n\n. Exec into the talker pod:\n\n\tkubectl exec -nprod $TALKER_POD -it bash\n\n. Invoke the mostly-hello service to get back mostly `Hello` responses:\n\n\twhile [ 1 ]; do curl http://mostly-hello.prod.svc.cluster.local:8080/hello; echo; done\n\n. Press `CTRL`+`C` to break the loop.\n\n. Invoke the mostly-howdy service to get back mostly `Howdy` responses:\n\n\twhile [ 1 ]; do curl http://mostly-howdy.prod.svc.cluster.local:8080/hello; echo; done\n\n. Press `CTRL`+`C` to break the loop.\n\n== Service Mesh using Istio\n\nhttps://istio.io/[Istio] is a layer 4/7 proxy that routes and load balances traffic over HTTP, WebSocket, HTTP/2, gRPC and supports application protocols such as MongoDB and Redis. 
Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh.\n\nIstio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.\n\nMore details at https://aws.amazon.com/blogs/opensource/getting-started-istio-eks/[Getting Started with Istio on Amazon EKS].\n\n=== Install and Configure\n\n. Download Istio:\n\n\tcurl -L https://git.io/getLatestIstio | sh -\n\tcd istio-1.*\n\n. Include `istio-1.*/bin` directory in `PATH`\n. Install Istio on Amazon EKS:\n\n\thelm install \\\n\t\t--wait \\\n\t\t--name istio \\\n\t\t--namespace istio-system \\\n\t\tinstall/kubernetes/helm/istio \\\n\t\t--set tracing.enabled=true \\\n\t\t--set grafana.enabled=true\n\n. 
Verify:\n+\n\tkubectl get pods -n istio-system\n\tNAME                                        READY   STATUS    RESTARTS   AGE\n\tgrafana-75485f89b9-4lwg5                    1/1     Running   0          1m\n\tistio-citadel-84fb7985bf-4dkcx              1/1     Running   0          1m\n\tistio-egressgateway-bd9fb967d-bsrhz         1/1     Running   0          1m\n\tistio-galley-655c4f9ccd-qwk42               1/1     Running   0          1m\n\tistio-ingressgateway-688865c5f7-zj9db       1/1     Running   0          1m\n\tistio-pilot-6cd69dc444-9qstf                2/2     Running   0          1m\n\tistio-policy-6b9f4697d-g8hc6                2/2     Running   0          1m\n\tistio-sidecar-injector-8975849b4-cnd6l      1/1     Running   0          1m\n\tistio-statsd-prom-bridge-7f44bb5ddb-8r2zx   1/1     Running   0          1m\n\tistio-telemetry-6b5579595f-nlst8            2/2     Running   0          1m\n\tistio-tracing-ff94688bb-2w4wg               1/1     Running   0          1m\n\tprometheus-84bd4b9796-t9kk5                 1/1     Running   0          1m\n+\nCheck that both Tracing and Grafana add-ons are enabled.\n+\n. Enable sidecar injection for all pods in the `default` namespace:\n\n\tkubectl label namespace default istio-injection=enabled\n\n. From the repo's main directory, deploy the application:\n\n\tkubectl apply -f manifests/app.yaml\n\n. Check the pods and note that the pod has two containers (one for the application and one for the sidecar):\n\n\tkubectl get pods -l app=greeting\n\tNAME                       READY     STATUS    RESTARTS   AGE\n\tgreeting-d4f55c7ff-6gz8b   2/2       Running   0          5s\n\n. Get the list of containers in the pod:\n\n\tkubectl get pods -l app=greeting -o jsonpath={.items[*].spec.containers[*].name}\n\tgreeting istio-proxy\n\n. Get a response:\n\n  curl http://$(kubectl get svc/greeting \\\n  \t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello\n\n=== Traffic Shifting\n\n. 
Deploy the application with two versions of `greeting`, one that returns `Hello` and another that returns `Howdy`:\n\n  kubectl delete -f manifests/app.yaml\n  kubectl apply -f manifests/app-hello-howdy.yaml\n\n. Check the list of pods:\n\n\tkubectl get pods -l app=greeting\n\tNAME                              READY     STATUS    RESTARTS   AGE\n\tgreeting-hello-69cc7684d-7g4bx    2/2       Running   0          1m\n\tgreeting-howdy-788b5d4b44-g7pml   2/2       Running   0          1m\n\n. Access the application multiple times to see the different responses:\n\n  for i in {1..10}\n  do\n  \tcurl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello\n  \techo\n  done\n\n. Set up an Istio rule to split traffic 75% to the `Hello` version and 25% to the `Howdy` version of the `greeting` service:\n\n  kubectl apply -f manifests/istio/app-rule-75-25.yaml\n\n. Invoke the service again to see the traffic split between the two services.\n\n=== Canary Deployment\n\n. Set up an Istio rule to divert 10% of the traffic to the canary:\n\n  kubectl delete -f manifests/istio/app-rule-75-25.yaml\n  kubectl apply -f manifests/istio/app-canary.yaml\n\n. Access the application multiple times to see ~10% of the greeting messages with `Howdy`:\n\n  for i in {1..50}\n  do\n  \tcurl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello\n  \techo\n  done\n\n=== Distributed Tracing\n\nIstio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the application you deployed in the previous step to demonstrate this.\n\nBy default, tracing is disabled. 
`--set tracing.enabled=true` was used during Istio installation to ensure tracing was enabled.\n\nSet up access to the tracing dashboard using port forwarding:\n\n\tkubectl port-forward \\\n\t\t-n istio-system \\\n\t\tpod/$(kubectl get pod \\\n\t\t\t-n istio-system \\\n\t\t\t-l app=jaeger \\\n\t\t\t-o jsonpath='{.items[0].metadata.name}') 16686:16686 \u0026\n\nAccess the dashboard at http://localhost:16686, click on `Dependencies`, `DAG`.\n\nimage::images/istio-dag.png[]\n\n=== Metrics using Grafana\n\n. By default, Grafana is disabled. `--set grafana.enabled=true` was used during Istio installation to ensure Grafana was enabled. Alternatively, the Grafana add-on can be installed as:\n\n\tkubectl apply -f install/kubernetes/addons/grafana.yaml\n\n. Verify:\n\n\tkubectl get pods -l app=grafana -n istio-system\n\tNAME                       READY     STATUS    RESTARTS   AGE\n\tgrafana-75485f89b9-n4skw   1/1       Running   0          10m\n\n. Forward the Grafana UI to view the Istio dashboard:\n\n\tkubectl -n istio-system \\\n\t\tport-forward $(kubectl -n istio-system \\\n\t\t\tget pod -l app=grafana \\\n\t\t\t-o jsonpath='{.items[0].metadata.name}') 3000:3000 \u0026\n\n. View the Istio dashboard at http://localhost:3000. Click on `Home`, `Istio Workload Dashboard`.\n\n. Invoke the endpoint:\n\n\tcurl http://$(kubectl get svc/greeting \\\n\t\t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello\n\nimage::images/istio-dashboard.png[]\n\n=== Timeouts\n\nDelays and timeouts can be injected into services.\n\n. Deploy the application:\n\n   kubectl delete -f manifests/app.yaml\n   kubectl apply -f manifests/app-ingress.yaml\n\n. Add a 5-second delay to calls to the service:\n\n    kubectl apply -f manifests/istio/greeting-delay.yaml\n\n. 
Invoke the service using a 2-second timeout:\n\n\texport INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')\n\texport INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http\")].port}')\n\texport GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT\n\tcurl --max-time 2 http://$GATEWAY_URL/resources/greeting\n\nSince the injected delay is longer than the timeout, the call will time out after 2 seconds.\n\n== Chaos using kube-monkey\n\nhttps://github.com/asobti/kube-monkey[kube-monkey] is an implementation of Netflix's Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes pods in the cluster, encouraging and validating the development of failure-resilient services.\n\n. Create the kube-monkey configuration:\n\n\tkubectl apply -f manifests/kubemonkey/kube-monkey-configmap.yaml\n\n. Run kube-monkey:\n\n\tkubectl apply -f manifests/kubemonkey/kube-monkey-deployment.yaml\n\n. Deploy an app that opts in to pod deletion:\n\n\tkubectl apply -f manifests/kubemonkey/app-kube-monkey.yaml\n\nThis application allows up to 40% of its pods to be killed. The deletion schedule is defined in the kube-monkey configuration and is set to between 10am and 4pm on weekdays.\n\n== Deployment Pipeline using Skaffold\n\nhttps://github.com/GoogleContainerTools/skaffold[Skaffold] is a command line utility that facilitates continuous development for Kubernetes applications. With Skaffold, you can iterate on your application source code locally, then deploy it to a remote Kubernetes cluster.\n\n. 
Check the context:\n\n\tkubectl config get-contexts\n\tCURRENT   NAME                               CLUSTER                       AUTHINFO                           NAMESPACE\n\t          arun@eks-gpu.us-west-2.eksctl.io   eks-gpu.us-west-2.eksctl.io   arun@eks-gpu.us-west-2.eksctl.io   \n\t*         arun@myeks.us-east-1.eksctl.io     myeks.us-east-1.eksctl.io     arun@myeks.us-east-1.eksctl.io     \n\t          docker-for-desktop                 docker-for-desktop-cluster    docker-for-desktop\n\n. Change to use the local Kubernetes cluster:\n\n\tkubectl config use-context docker-for-desktop\n\n. Download Skaffold:\n\n\tcurl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 \\\n\t\t\u0026\u0026 chmod +x skaffold\n\n. Open http://localhost:8080/resources/greeting in a browser. This will show that the page is not available.\n. Run Skaffold in the application directory:\n\n    cd app\n    skaffold dev\n\n. Refresh the page in the browser to see the output.\n\n== Deployment Pipeline using CodePipeline\n\nComplete, detailed instructions are available at https://eksworkshop.com/codepipeline/.\n\n=== Create IAM role\n\n. 
Create an IAM role and add an in-line policy that will allow the CodeBuild stage to interact with the EKS cluster:\n\n\tACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`\n\tTRUST=\"{ \\\"Version\\\": \\\"2012-10-17\\\", \\\"Statement\\\": [ { \\\"Effect\\\": \\\"Allow\\\", \\\"Principal\\\": { \\\"AWS\\\": \\\"arn:aws:iam::${ACCOUNT_ID}:root\\\" }, \\\"Action\\\": \\\"sts:AssumeRole\\\" } ] }\"\n\techo '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"eks:Describe*\", \"Resource\": \"*\" } ] }' \u003e /tmp/iam-role-policy\n\taws iam create-role --role-name EksWorkshopCodeBuildKubectlRole --assume-role-policy-document \"$TRUST\" --output text --query 'Role.Arn'\n\taws iam put-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy\n\n. Add this IAM role to aws-auth ConfigMap for the EKS cluster:\n\n\tROLE=\"    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksWorkshopCodeBuildKubectlRole\\n      username: build\\n      groups:\\n        - system:masters\"\n\tkubectl get -n kube-system configmap/aws-auth -o yaml | awk \"/mapRoles: \\|/{print;print \\\"$ROLE\\\";next}1\" \u003e /tmp/aws-auth-patch.yml\n\tkubectl patch configmap/aws-auth -n kube-system --patch \"$(cat /tmp/aws-auth-patch.yml)\"\n\n=== Create CloudFormation template\n\n. Fork the repo https://github.com/aws-samples/kubernetes-for-java-developers\n. Create a new GitHub token https://github.com/settings/tokens/new, select `repo` as the scope, click on `Generate Token` to generate the token. Copy the generated token.\n. Launch https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=eksws-codepipeline\u0026templateURL=https://s3.amazonaws.com/eksworkshop.com/templates/master/ci-cd-codepipeline.cfn.yml[CodePipeline CloudFormation template].\n. Specify the correct values for `GitHubUser`, `GitHubToken`, `GitSourceRepo` and `EKS cluster name`. 
Change the branch if you need to:\n+\nimage::images/codepipeline-template.png[]\n+\nClick on `Create stack` to create the resources.\n\n=== View CodePipeline\n\n. Once the stack creation is complete, open https://us-west-2.console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-west-2#[CodePipeline in the AWS Console].\n. Select the pipeline and wait for the pipeline status to complete:\n+\nimage::images/codepipeline-status.png[]\n+\n. Access the service:\n\n\tcurl http://$(kubectl get svc/greeting -n default \\\n\t\t-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello\n\n== Deployment Pipeline using Jenkins X\n\n. Install the `jx` CLI:\n\n\tbrew tap jenkins-x/jx\n\tbrew install jx\n\n. Create a new https://github.com/settings/tokens[GitHub token] with the following scope:\n+\nimage::images/jenkinsx-github-token.png[]\n+\n. Install Jenkins X on Amazon EKS:\n+\n\tjx install --provider=eks --git-username arun-gupta --git-api-token GITHUB_TOKEN --batch-mode\n+\nlink:images/jenkinsx-log.txt[Log] shows a complete run of the command.\n+\n. Use `jx import` to import a project. This requires a `Dockerfile` and a Maven application in the root directory.\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faws-samples%2Fkubernetes-for-java-developers","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faws-samples%2Fkubernetes-for-java-developers","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faws-samples%2Fkubernetes-for-java-developers/lists"}