{"id":24576577,"url":"https://github.com/dmrhimali/kubernetesspringbatch","last_synced_at":"2025-03-17T13:14:44.353Z","repository":{"id":270633895,"uuid":"910983122","full_name":"dmrhimali/kubernetesSpringBatch","owner":"dmrhimali","description":"Kubernetes spring batch deployment","archived":false,"fork":false,"pushed_at":"2025-01-02T01:16:33.000Z","size":6751,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-01-23T22:41:21.563Z","etag":null,"topics":["kubernetes","spring-batch"],"latest_commit_sha":null,"homepage":"","language":"Kotlin","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dmrhimali.png","metadata":{"files":{"readme":"README.MD","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2025-01-02T01:13:52.000Z","updated_at":"2025-01-02T03:45:54.000Z","dependencies_parsed_at":"2025-01-02T02:32:41.579Z","dependency_job_id":null,"html_url":"https://github.com/dmrhimali/kubernetesSpringBatch","commit_stats":null,"previous_names":["dmrhimali/kubernetesspringbatch"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmrhimali%2FkubernetesSpringBatch","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmrhimali%2FkubernetesSpringBatch/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmrhimali%2FkubernetesSpringBatch/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dmrhimali%2FkubernetesSpringBatch/manifests","owner_url":"https://repos.ecosys
te.ms/api/v1/hosts/GitHub/owners/dmrhimali","download_url":"https://codeload.github.com/dmrhimali/kubernetesSpringBatch/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244039241,"owners_count":20387835,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["kubernetes","spring-batch"],"created_at":"2025-01-23T22:40:21.575Z","updated_at":"2025-03-17T13:14:44.316Z","avatar_url":"https://github.com/dmrhimali.png","language":"Kotlin","readme":"## Introduction\n\nThis tutorial will:\n- install minikube locally\n- show how to run: [Spring Batch on Kubernetes](spring-batch)\n\n**References**:\n\n- [Free - Learn Devops Kubernetes deployment by kops and terraform](https://www.udemy.com/course/learn-devops-kubernetes-deployment-by-kops-and-terraform/learn/lecture/18287718#overview)\n- [Really good Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)\n- [Spring Cloud on Kubernetes](https://www.youtube.com/watch?v=pYpruogcb6w)\n\n## Local Setup\n\n### install hyperkit virtual env and minikube\nMinikube runs the master and worker node processes in a single node and comes with Docker and kubectl installed.\n\n**minikube**:\n- for starting up/deleting the cluster\n\n**kubectl**:\n- for configuring the minikube cluster\n\n```sh\nbrew update\n\nbrew install hyperkit\n\nbrew install minikube\n```\n\n### start minikube in virtual env\n\n```sh\nminikube start --vm-driver=hyperkit\n```\n\n### delete all resources created and start over\n\n`minikube delete`\n\n\n### Basic kubectl 
commands\n\n**get nodes**: `kubectl get nodes`\n\n```sh\nkubectl get nodes\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  kubectl get nodes\nNAME       STATUS   ROLES    AGE     VERSION\nminikube   Ready    master   6m47s   v1.19.4\n```\n\n**get minikube status:** `minikube status`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  minikube status\n\nminikube\n\ntype: Control Plane\n\nhost: Running\n\nkubelet: Running\n\napiserver: Running\n\nkubeconfig: Configured\n\n```\n\n**get kubectl version**: `kubectl version`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  kubectl version\n\nClient Version: version.Info{Major:\"1\", Minor:\"16+\", GitVersion:\"v1.16.6-beta.0\", \n\nGitCommit:\"e7f962ba86f4ce7033828210ca3556393c377bcc\", GitTreeState:\"clean\", BuildDate:\"2020-01-15T08:26:26Z\", \n\nGoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"darwin/amd64\"}\n\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.4\", \n\nGitCommit:\"d360454c9bcd1634cf4cc52d1867af5491dc9c5f\", GitTreeState:\"clean\", BuildDate:\"2020-11-11T13:09:17Z\", \n\nGoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n```\n\n**get pod** : `kubectl get pod`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  kubectl get pod\n\nNo resources found in default namespace.\n```\n\n**get services** `kubectl get services`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  kubectl get services\n\nNAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE\n\nkubernetes   ClusterIP   10.96.0.1    \u003cnone\u003e        443/TCP   16m\n```\n\n**get help on a command** `kubectl -h`\n\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master ●✚  kubectl create -h\n\nCreate a resource from a file or from stdin.\n\nJSON and YAML formats are 
accepted.\n\nExamples:\n# Create a pod using the data in pod.json.\nkubectl create -f ./pod.json\n\n# Create a pod based on the JSON passed into stdin.\ncat pod.json | kubectl create -f -\n\n# Edit the data in docker-registry.yaml in JSON then create the resource using the edited data.\nkubectl create -f docker-registry.yaml --edit -o json\n\nAvailable Commands:\nclusterrole         Create a ClusterRole.\n\nclusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole\n\nconfigmap           Create a configmap from a local file, directory or literal value\n\ncronjob             Create a cronjob with the specified name.\n\ndeployment          Create a deployment with the specified name.\n\njob                 Create a job with the specified name.\n\nnamespace           Create a namespace with the specified name\n\npoddisruptionbudget Create a pod disruption budget with the specified name.\n\npriorityclass       Create a priorityclass with the specified name.\n\nquota               Create a quota with the specified name.\n\nrole                Create a role with single rule.\n\nrolebinding         Create a RoleBinding for a particular Role or ClusterRole\n\nsecret              Create a secret using specified subcommand\n\nservice             Create a service using specified subcommand.\n\nserviceaccount      Create a service account with the specified name\n\nOptions:\n\n    --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in\n\nUsage:\n\nkubectl create -f FILENAME [options]\n\nUse \"kubectl \u003ccommand\u003e --help\" for more information about a given command.\n\nUse \"kubectl options\" for a list of global command-line options (applies to all commands).\n\n```\n\n**create pod/deployment**: `kubectl create deployment my-dep --image=nginx`\n\n```sh\n(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl create deployment nginx-depl 
--image=nginx\n\ndeployment.apps/nginx-depl created\n```\n\n**get deployment**: `kubectl get deployment`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get deployment\n\nNAME         READY   UP-TO-DATE   AVAILABLE   AGE\n\nnginx-depl   1/1     1            1           70s\n```\n\nREADY (0/1): the pod is not ready yet\n\nREADY (1/1): the pod is ready\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\n\nNAME                          READY   STATUS    RESTARTS   AGE\n\nnginx-depl-5c8bf76b5b-dvwh9   1/1     Running   0          2m50s\n```\n\npod name format: \u003cdeployment-name\u003e-\u003creplicaset-hash\u003e-\u003chash\u003e\n\nSTATUS (ContainerCreating): container not ready yet\n\nSTATUS (Running): container ready\n\n\n**get replicaset**: `kubectl get replicaset`\n\nA replicaset is automatically created with the deployment; you don't need to manage it yourself.\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\n\nNAME                    DESIRED   CURRENT   READY   AGE\n\nnginx-depl-5c8bf76b5b   1         1         1       8m12s\n```\n\n**edit configuration of deployment**: `kubectl edit deployment nginx-depl`\n\nkubectl automatically generates a configuration file with default values for the deployment we created. Let's edit it:\n\n`kubectl edit deployment nginx-depl`\n\n```yml\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file will be\n# reopened with the relevant failures.\n#\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\n    deployment.kubernetes.io/revision: \"1\"\ncreationTimestamp: \"2020-12-30T17:55:08Z\"\ngeneration: 1\nlabels:\n    app: nginx-depl\nmanagedFields:\n- apiVersion: apps/v1\n    fieldsType: FieldsV1\n    fieldsV1:\n    f:metadata:\n        f:labels:\n        .: {}\n        f:app: {}\n    f:spec:\n        f:progressDeadlineSeconds: {}\n        f:replicas: {}\n        f:revisionHistoryLimit: {}\n        f:selector:\n        f:matchLabels:\n            .: {}\n            f:app: {}\n        f:strategy:\n        f:rollingUpdate:\n            .: {}\n            f:maxSurge: {}\n            f:maxUnavailable: {}\n        f:type: {}\n        f:template:\n        f:metadata:\n            f:labels:\n            .: {}\n            f:app: {}\n        f:spec:\n            f:containers:\n            k:{\"name\":\"nginx\"}:\n                .: {}\n                f:image: {}\n                f:imagePullPolicy: {}\n                f:name: {}\n                f:resources: {}\n                f:terminationMessagePath: {}\n                f:terminationMessagePolicy: {}\n            f:dnsPolicy: {}\n            f:restartPolicy: {}\n            f:dnsPolicy: {}\n            f:restartPolicy: {}\n            f:schedulerName: {}\n            f:securityContext: {}\n            f:terminationGracePeriodSeconds: {}\n    manager: kubectl\n    operation: Update\n    time: \"2020-12-30T17:55:08Z\"\n- apiVersion: apps/v1\n    fieldsType: FieldsV1\n    fieldsV1:\n    f:metadata:\n        f:annotations:\n        .: {}\n        f:deployment.kubernetes.io/revision: {}\n    f:status:\n        f:availableReplicas: {}\n        f:conditions:\n        .: {}\n        k:{\"type\":\"Available\"}:\n            .: {}\n            f:lastTransitionTime: {}\n            f:lastUpdateTime: {}\n            f:message: {}\n            f:reason: {}\n            
f:status: {}\n            f:type: {}\n        k:{\"type\":\"Progressing\"}:\n            .: {}\n            f:lastTransitionTime: {}\n            f:lastUpdateTime: {}\n            f:message: {}\n            f:reason: {}\n            f:status: {}\n            f:type: {}\n        f:observedGeneration: {}\n        f:readyReplicas: {}\n        f:replicas: {}\n        f:updatedReplicas: {}\n    manager: kube-controller-manager\n    operation: Update\n    time: \"2020-12-30T17:55:18Z\"\nname: nginx-depl\nnamespace: default\nresourceVersion: \"4630\"\nselfLink: /apis/apps/v1/namespaces/default/deployments/nginx-depl\nuid: 0beb87fb-5d73-4124-863b-8d2d7cc3ccb0\nspec:   \nprogressDeadlineSeconds: 600\nreplicas: 1\nrevisionHistoryLimit: 10\nselector:\n    matchLabels:\n    app: nginx-depl\nstrategy:\n    rollingUpdate:\n    maxSurge: 25%\n    maxUnavailable: 25%\n    type: RollingUpdate\ntemplate:\n    metadata:\n    creationTimestamp: null\n    labels:\n        app: nginx-depl\n    spec:\n    containers:\n    - image: nginx\n        imagePullPolicy: Always\n        name: nginx\n        resources: {}\n        terminationMessagePath: /dev/termination-log\n        terminationMessagePolicy: File\n    dnsPolicy: ClusterFirst\n    restartPolicy: Always\n    schedulerName: default-scheduler\n    securityContext: {}\n    terminationGracePeriodSeconds: 30\nstatus:\navailableReplicas: 1\nconditions:\n- lastTransitionTime: \"2020-12-30T17:55:18Z\"\n    lastUpdateTime: \"2020-12-30T17:55:18Z\"\n    message: Deployment has minimum availability.\n    reason: MinimumReplicasAvailable\n    status: \"True\"\n    type: Available\n- lastTransitionTime: \"2020-12-30T17:55:08Z\"\n    lastUpdateTime: \"2020-12-30T17:55:18Z\"\n    message: ReplicaSet \"nginx-depl-5c8bf76b5b\" has successfully progressed.\n    reason: NewReplicaSetAvailable\n    status: \"True\"\n    type: Progressing\nobservedGeneration: 1\nreadyReplicas: 1\nreplicas: 1\nupdatedReplicas: 1    \n```\n\nLet's update image to 1.16 
and save.\n\n```yml\n...\nspec:\n    containers:\n    - image: nginx:1.16\n\n...\n```\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl edit deployment nginx-depl\n\ndeployment.apps/nginx-depl edited\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\n#old container terminated and       new container brought up\nNAME                          READY   STATUS    RESTARTS   AGE\n\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running   0          58s\n\n#new replicaset is automatically created, old one has no pods in it anymore  \n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\nNAME                    DESIRED   CURRENT   READY   AGE\nnginx-depl-5c8bf76b5b   0         0         0       21m\nnginx-depl-7fc44fc5d4   1         1         1       2m12s\n\n```\n\n### Debugging pods\n\n**debugging pods** : `kubectl logs CONTAINER_NAME`\n\n\n```sh\n(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl logs nginx-depl-7fc44fc5d4-2k2ph\n\n#nothing is logged\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master \n```\n\nLet's create mongo deployment:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master     kubectl create deployment\nmongo-depl --image=mongo\n\ndeployment.apps/mongo-depl created\n\n(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\n\nNAME                          READY   STATUS              RESTARTS   AGE    \nmongo-depl-5fd6b7d4b4-mbvnn   0/1     ContainerCreating   0          16s\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running             0          175m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                          READY   STATUS    RESTARTS   AGE\nmongo-depl-5fd6b7d4b4-mbvnn   1/1     Running   0          
27s\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running   0          175m\n\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl logs mongo-depl-5fd6b7d4b4-mbvnn\n\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.014+00:00\"},\"s\":\"I\",  \"c\":\"CONTROL\",  \"id\":23285,   \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.015+00:00\"},\"s\":\"W\",  \"c\":\"ASIO\",     \"id\":22601,   \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.016+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.016+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"mongo-depl-5fd6b7d4b4-mbvnn\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.016+00:00\"},\"s\":\"I\",  \"c\":\"CONTROL\",  \"id\":23403,   \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.2\",\"gitVersion\":\"15e73dc5738d2278b688f8929aee605fe4279b0e\",\"openSSLVersion\":\"OpenSSL 1.1.1  11 Sep 2018\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu1804\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.016+00:00\"},\"s\":\"I\",  \"c\":\"CONTROL\",  \"id\":51765,   \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"18.04\"}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.016+00:00\"},\"s\":\"I\",  \"c\":\"CONTROL\",  \"id\":21951,   \"ctx\":\"initandlisten\",\"msg\":\"Options set 
by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"*\"}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.017+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":22297,   \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.017+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":22315,   \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=1409M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.536+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":22430,   \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1609362603:536789][1:0x7fd0d5606ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.536+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":22430,   \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1609362603:536859][1:0x7fd0d5606ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.542+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":525}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.542+00:00\"},\"s\":\"I\",  \"c\":\"RECOVERY\", \"id\":23987,   \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger 
recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.553+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":true}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.554+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":22262,   \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.559+00:00\"},\"s\":\"W\",  \"c\":\"CONTROL\",  \"id\":22120,   \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.560+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":20320,   \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuidDisposition\":\"provided\",\"uuid\":{\"uuid\":{\"$uuid\":\"35b07ba7-2a6e-496a-82a8-146b137f77a9\"}},\"options\":{\"uuid\":{\"$uuid\":\"35b07ba7-2a6e-496a-82a8-146b137f77a9\"}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.568+00:00\"},\"s\":\"I\",  \"c\":\"INDEX\",    \"id\":20345,   \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"admin.system.version\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.568+00:00\"},\"s\":\"I\",  \"c\":\"COMMAND\",  \"id\":20459,   \"ctx\":\"initandlisten\",\"msg\":\"Setting featureCompatibilityVersion\",\"attr\":{\"newVersion\":\"4.4\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.569+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":20536,   \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.570+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  
\"id\":20320,   \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"5b25e517-5ef2-45ca-b315-3a209baf33a5\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.579+00:00\"},\"s\":\"I\",  \"c\":\"INDEX\",    \"id\":20345,   \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.580+00:00\"},\"s\":\"I\",  \"c\":\"FTDC\",     \"id\":20625,   \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.583+00:00\"},\"s\":\"I\",  \"c\":\"STORAGE\",  \"id\":20320,   \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"7008eaaa-98ac-46af-a376-b4afcafd32c9\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.584+00:00\"},\"s\":\"I\",  \"c\":\"CONTROL\",  \"id\":20712,   \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNotFound: config.system.sessions does not exist\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.584+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":23015,   \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.584+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":23015,   \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.584+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":23016,   
\"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.599+00:00\"},\"s\":\"I\",  \"c\":\"INDEX\",    \"id\":20345,   \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-12-30T21:10:03.599+00:00\"},\"s\":\"I\",  \"c\":\"INDEX\",    \"id\":20345,   \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"lsidTTLIndex\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master \n```\n\n**describing pods** : `kubectl describe pod POD_NAME    `\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl describe pod mongo-depl-5fd6b7d4b4-mbvnn\n\nName:         mongo-depl-5fd6b7d4b4-mbvnn\nNamespace:    default\nPriority:     0\nNode:         minikube/192.168.64.2\nStart Time:   Wed, 30 Dec 2020 15:09:37 -0600\nLabels:       app=mongo-depl\n            pod-template-hash=5fd6b7d4b4\nAnnotations:  \u003cnone\u003e\nStatus:       Running\nIP:           172.17.0.3\nIPs:\nIP:           172.17.0.3\nControlled By:  ReplicaSet/mongo-depl-5fd6b7d4b4\nContainers:\nmongo:\n    Container ID:   docker://1336494faae6f2d0bfbecb17dd3974f3a443c29188ccd6c7a9b854926e5b786d\n    Image:          mongo\n    Image ID:       docker-pullable://mongo@sha256:02e9941ddcb949424fa4eb01f9d235da91a5b7b64feb5887eab77e1ef84a3bad\n    Port:           \u003cnone\u003e\n    Host Port:      \u003cnone\u003e\n    State:          Running\n    Started:      Wed, 30 Dec 2020 15:10:02 -0600\n    Ready:          True\n    Restart Count:  0\n    Environment:    \u003cnone\u003e\n    Mounts:\n    
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7hsl2 (ro)\nConditions:\nType              Status\nInitialized       True\nReady             True\nContainersReady   True\nPodScheduled      True\nVolumes:\ndefault-token-7hsl2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-7hsl2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \u003cnone\u003e\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\nType    Reason     Age    From               Message\n----    ------     ----   ----               -------\nNormal  Scheduled  4m56s  default-scheduler  Successfully assigned default/mongo-depl-5fd6b7d4b4-mbvnn to minikube\nNormal  Pulling    4m56s  kubelet, minikube  Pulling image \"mongo\"\nNormal  Pulled     4m31s  kubelet, minikube  Successfully pulled image \"mongo\" in 24.62914344s\nNormal  Created    4m31s  kubelet, minikube  Created container mongo\nNormal  Started    4m31s  kubelet, minikube  Started container mongo\n```\n\n**get an interactive terminal for debugging** : `kubectl exec -it POD_NAME -- bin/bash`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl exec -it mongo-depl-5fd6b7d4b4-mbvnn -- bin/bash\n\nroot@mongo-depl-5fd6b7d4b4-mbvnn:/# ls\nbin   data  docker-entrypoint-initdb.d  home        lib    media  opt   root  sbin  sys  usr\nboot  dev   etc                         js-yaml.js  lib64  mnt    proc  run   srv   tmp  var\n\nroot@mongo-depl-5fd6b7d4b4-mbvnn:/# ls data/\nconfigdb  db\n\nroot@mongo-depl-5fd6b7d4b4-mbvnn:/# exit\nexit\n```\n\n**delete deployment (and replicaset)**: `kubectl delete deployment DEPLOYMENT_NAME`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get deployment\nNAME         READY   UP-TO-DATE   AVAILABLE   AGE\nmongo-depl   1/1     1            1           19m\nnginx-depl  
 1/1     1            1           3h34m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                          READY   STATUS    RESTARTS   AGE\nmongo-depl-5fd6b7d4b4-mbvnn   1/1     Running   0          19m\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running   0          3h15m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl delete deployment mongo-depl\ndeployment.apps \"mongo-depl\" deleted\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                          READY   STATUS        RESTARTS   AGE\nmongo-depl-5fd6b7d4b4-mbvnn   0/1     Terminating   0          21m\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running       0          3h16m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\nNAME                    DESIRED   CURRENT   READY   AGE\nnginx-depl-5c8bf76b5b   0         0         0       3h36m\nnginx-depl-7fc44fc5d4   1         1         1       3h16m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                          READY   STATUS    RESTARTS   AGE\nnginx-depl-7fc44fc5d4-2k2ph   1/1     Running   0          3h16m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl delete deployment nginx-depl\ndeployment.apps \"nginx-depl\" deleted\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                          READY   STATUS        RESTARTS   AGE\nnginx-depl-7fc44fc5d4-2k2ph   0/1     Terminating   0          3h17m\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\nNo resources found in default namespace.\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNo resources found in default namespace.\n\n```\n\n### using kubernetes 
configuration file (instead of `kubectl create deployment NAME --image=IMAGE [options]`):\n\n`kubectl apply -f FILE_NAME`\n\ne.g. `kubectl apply -f config-file.yaml`\n\nFirst let's create a config file for deploying nginx:\n\n```sh\ntouch nginx-deployment.yaml\n\nvi nginx-deployment.yaml\n```\n\nsave the following:\n\n```yml\napiVersion: apps/v1\nkind: Deployment # deployment config\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec: # specification for the deployment\n  replicas: 1 # number of pods\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec: # specification for the pod\n      containers:\n      - name: nginx\n        image: nginx:1.16\n        ports:\n        - containerPort: 80\n```\n\nApply configuration to deploy pod(s): `kubectl apply -f nginx-deployment.yaml`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl apply -f nginx-deployment.yaml\ndeployment.apps/nginx-deployment created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                                READY   STATUS    RESTARTS   AGE\nnginx-deployment-644599b9c9-gqlh7   1/1     Running   0          6s\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get deployment\nNAME               READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-deployment   1/1     1            1           15s\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\nNAME                          DESIRED   CURRENT   READY   AGE\nnginx-deployment-644599b9c9   1         1         1       20s\n```\n\nLet's update the configuration (to have two pods) and reapply:\n\n```yml\napiVersion: apps/v1\nkind: Deployment # deployment config\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec: # specification for the deployment\n  replicas: 2 # number of pods\n  selector:\n    matchLabels:\n      app: nginx\n  template: # pod template\n    metadata:\n      labels:\n        app: nginx\n    spec: # specification/blueprint for the pod\n      containers:\n      - name: nginx\n        image: nginx:1.16\n        ports:\n        - containerPort: 80\n```\n\nAfter applying, two pods are created.\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl apply -f nginx-deployment.yaml\ndeployment.apps/nginx-deployment configured\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get pod\nNAME                                READY   STATUS    RESTARTS   AGE\nnginx-deployment-644599b9c9-gqlh7   1/1     Running   0          3m34s #old one still there\nnginx-deployment-644599b9c9-hlrqc   1/1     Running   0          12s #new pod got created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get deployment\nNAME               READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-deployment   2/2     2            2           3m54s\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/microservices   master  kubectl get replicaset\nNAME                          DESIRED   CURRENT   READY   AGE\nnginx-deployment-644599b9c9   2         2         2       4m1s\n\n```\n\n### Kubernetes configuration file structure\n\nEach configuration has 3 parts:\n1. metadata\n2. specification (attributes of `spec` vary based on whether it is a deployment or a service)\n3. 
status (Auto generated and maintained by kubernetes (Current state vs Desired state: like in docker compose apply which let you apply incremental changes))\n        - status data come from `etcd` which hold current state of components   \n**Deployment:  nginx-deployment.yml**\n```yml\napiVersion: apps/v1\nkind: Deployment  \nmetadata: #PART1: METADATA\nname: nginx-deployment\nlabels: ...\n\nspec: #PART2: SPECIFICATION \nreplicas: 2  \nselector: ...\ntemplate: ...\n```\n\n**Service:  nginx-service.yml**\n```yml\napiVersion: v1\nkind: Service \nmetadata: #PART1: METADATA\nname: nginx-service\nspec: #PART2: SPECIFICATION\nselector: ...\nports: \n    - protocol: TCP\n    port: 80\n    targetPort: 8080  #matches containerPort of nginx-deployment.yml\n```\n\n**where to store config files?** : \n- usual practice is to store them with code.\n\n### Connecting components (Labrls + Selectors + Port)**\n\nConnect services and deployments (services should know which deploymnets  registered to it.)\n\nIn service and deployment configuration ymls: \n- **metadata**: contain `labels` (any \u003ckey,value\u003e pair (\u003capp,  nginx\u003e))\n    - pods get the label through the tempate blueprint\n- **spec**: contain `selecters`\n    - the label is matched by the selecter (`matchLabels: app:nginx`)\n\n\n- nginx-service.yml `spec: selector` will match the ngonx-deployment.yml `metadata: labels` to create a connection between service and deployment           \n\n- nginx Service (created from nginx-service.yml) when get a requst (from say DB Service) on its listening port 80  will send request to Pod for nginx-depyment in Pod for target port 8080\n\nLet's apply both configs:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl apply -f nginx-deployment.yaml\ndeployment.apps/nginx-deployment created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl apply -f nginx-service.yaml\nservice/nginx-service 
created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get pod\nNAME                               READY   STATUS    RESTARTS   AGE\nnginx-deployment-f4b7bbcbc-rswsv   1/1     Running   0          16s\nnginx-deployment-f4b7bbcbc-w9h9p   1/1     Running   0          16s\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get service\nNAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\nkubernetes      ClusterIP   10.96.0.1       \u003cnone\u003e        443/TCP   6h8m #default service, always there\nnginx-service   ClusterIP   10.104.131.73   \u003cnone\u003e        80/TCP    34s #one we created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl describe service nginx-service\nName:              nginx-service\nNamespace:         default\nLabels:            \u003cnone\u003e\nAnnotations:       kubectl.kubernetes.io/last-applied-configuration:\n                    {\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"nginx-service\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"port\":80...\nSelector:          app=nginx\nType:              ClusterIP\nIP:                10.104.131.73\nPort:              \u003cunset\u003e  80/TCP\nTargetPort:        8080/TCP\nEndpoints:         172.17.0.3:8080,172.17.0.4:8080 #IP addresses:ports of nginx-deployment target pods service must forward requests to\nSession Affinity:  None\nEvents:            \u003cnone\u003e       \n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get deployment\nNAME               READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-deployment   2/2     2            2           53s\n\n#get IP address of deplyment pods\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get pod -o wide\nNAME                               READY   STATUS    RESTARTS   AGE     IP           NODE       
NOMINATED NODE   READINESS GATES\nnginx-deployment-f4b7bbcbc-rswsv   1/1     Running   0          6m48s   172.17.0.3   minikube   \u003cnone\u003e           \u003cnone\u003e\nnginx-deployment-f4b7bbcbc-w9h9p   1/1     Running   0          6m48s   172.17.0.4   minikube   \u003cnone\u003e           \u003cnone\u003e     \n\n```\n### Get autogenerated status status\nGet config in yml format\n\n```sh\n    #get update config stored in etcd\n```\n```yaml\napiVersion: apps/v1\nkind: Deployment\n...\nstatus:\navailableReplicas: 2\nconditions:\n- lastTransitionTime: \"2020-12-30T22:23:19Z\"\n    lastUpdateTime: \"2020-12-30T22:23:19Z\"\n    message: Deployment has minimum availability.\n    reason: MinimumReplicasAvailable\n    status: \"True\"\n    type: Available\n- lastTransitionTime: \"2020-12-30T22:23:17Z\"\n    lastUpdateTime: \"2020-12-30T22:23:19Z\"\n    message: ReplicaSet \"nginx-deployment-f4b7bbcbc\" has successfully progressed.\n    reason: NewReplicaSetAvailable\n    status: \"True\"\n    type: Progressing\nobservedGeneration: 1\nreadyReplicas: 2\nreplicas: 2\nupdatedReplicas: 2\n```\n\n### delete deployments and service \n\n`kubectl delete -f config.yaml`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl delete -f nginx-service.yaml\nservice \"nginx-service\" deleted\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl delete -f nginx-deployment.yaml\ndeployment.apps \"nginx-deployment\" deleted\n\n```\n\n### Demo : MongoDB and MongoExpress(Web) \n\nCode in demo-mongo folder\n\n### STEP1: create mongodb deployment\n\n**demo-mongo/mongodb.yaml**\n\n```yml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nname: mongodb-deployment\nlabels:\n    app: mongodb\nspec:\nreplicas: 1\nselector:\n    matchLabels:\n    app: mongodb\ntemplate:\n    metadata:\n    labels:\n        app: mongodb\n    spec:\n    containers:\n    - name: mongodb\n        image: mongodb  \n        ports:\n        - 
containerPort: 27017 #lookup mongo in dockerhub if you do not know  (https://hub.docker.com/_/mongo)\n        env: #lookup mongo in dockerhub if you do not know (https://hub.docker.com/_/mongo Environment Variables    )\n        - name: MONGO_INITDB_ROOT_USERNAME\n        value: #TODO: create kubernetes secret and reference here\n        - name: MONGO_INITDB_ROOT_PASSWORD\n        value: #TODO: create kubernetes secret and reference here\n```\n\n**Create secret for MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWOR**           \n\nget base64 endcode username and passwords:\n\n```sh\necho -n \"username\" | base64\ndXNlcm5hbWU=\n\necho -n \"mypassword\" | base64\nbXlwYXNzd29yZA==\n```\n\nNow create mongo-secret.yml:\n```yml\napiVersion: v1\nkind: Secret\nmetadata:\nname: mongodb-secret #anyname\ntype: Opaque #default, common but can use TLS certificates also\ndata:\nmongo-root-username: dXNlcm5hbWU= #base64 encoded username (echo -n \"username\" | base64)/value (any apropriate name for key)\nmongo-root-password: bXlwYXNzd29yZA== #base64 encoded password/value (any apropriate name for key)\n```\n\nNow let's create the secret:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongo-secret.yaml\nsecret/mongodb-secret created\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get secret\nNAME                  TYPE                                  DATA   AGE\ndefault-token-7hsl2   kubernetes.io/service-account-token   3      6h47m\nmongodb-secret        Opaque                                2      20s #secret created  \n```\n\nNow let's go and update our `mongodb.yaml` to reference the k8s secret we created.\n\n```yml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nname: mongodb-deployment\nlabels:\n    app: mongodb\nspec:\nreplicas: 1\nselector:\n    matchLabels:\n    app: mongodb\ntemplate:\n    metadata:\n    labels:\n        app: mongodb\n    spec:\n 
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017 # look up mongo on dockerhub if unsure (https://hub.docker.com/_/mongo)
        env: # look up mongo on dockerhub if unsure (https://hub.docker.com/_/mongo, Environment Variables)
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-username # data:mongo-root-username key in mongo-secret.yml
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-password # data:mongo-root-password key in mongo-secret.yml
```

Now let's apply the mongodb.yaml configuration:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongodb.yaml
deployment.apps/mongodb-deployment configured

(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get all
NAME                                      READY   STATUS        RESTARTS   AGE
pod/mongodb-deployment-5cf4c8fdbf-vqfxv   0/1     Terminating   0          113s
pod/mongodb-deployment-8f6675bc5-7hvxj    1/1     Running       0          3s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6h59m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-deployment   1/1     1            1           114s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-deployment-5cf4c8fdbf   0         0         0       114s
replicaset.apps/mongodb-deployment-8f6675bc5    1         1         1       4s

(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
mongodb-deployment-8f6675bc5-7hvxj   1/1     Running   0          8s
```

### STEP2: create an internal service so other pods can talk to the mongodb pods

Let's modify `mongodb.yaml` to include the service config as well. Append the following to the file:

```yml
# omitted: deployment doc
# note the 3 dashes that separate the service document from the deployment document above:
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb # should match template:metadata:labels in the mongodb deployment configuration to connect to the pod
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017 # should match containerPort in the mongodb deployment configuration
```

Now let's apply the service config:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongodb.yaml
deployment.apps/mongodb-deployment unchanged
service/mongodb-service created
```

Verify the ports:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get service
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP     7h12m
mongodb-service   ClusterIP   10.97.250.196   <none>        27017/TCP   64s # service created, listening @port 27017

# verify target container ports:
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl describe service mongodb-service
Name:              mongodb-service
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mongodb-service","namespace":"default"},"spec":{"ports":[{"port":...
Selector:          app=mongodb
Type:              ClusterIP
IP:                10.97.250.196
Port:              <unset>  27017/TCP
TargetPort:        27017/TCP
Endpoints:         172.17.0.4:27017 # target port verified to be 27017
Session Affinity:  None
Events:            <none>

(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
mongodb-deployment-8f6675bc5-7hvxj   1/1     Running   0          15m   172.17.0.4   minikube   <none>           <none> # pod IP matches the service's endpoint IP

# see all components
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get all | grep mongo
pod/mongodb-deployment-8f6675bc5-7hvxj   1/1     Running   0          17m
service/mongodb-service   ClusterIP   10.97.250.196   <none>        27017/TCP   5m47s
deployment.apps/mongodb-deployment   1/1     1            1           19m
replicaset.apps/mongodb-deployment-5cf4c8fdbf   0         0         0       19m
replicaset.apps/mongodb-deployment-8f6675bc5    1         1         1       17m
```

### STEP3: create mongo-express deployment + service + configmap

Let's create mongo-express.yaml:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081 # look up mongo-express on dockerhub for ports/env vars etc. (https://hub.docker.com/_/mongo-express)
        env:
        - name: ME_CONFIG_MONGODB_SERVER
          value: # TODO: instead of hardcoding the value, we put it in a configmap and reference it (so other components can also use it)
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME # same ones we set up for mongodb earlier
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-username # data:mongo-root-username key in mongo-secret.yml
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD # same ones we set up for mongodb earlier
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-password # data:mongo-root-password key in mongo-secret.yml
```

To fill in the `ME_CONFIG_MONGODB_SERVER` value we will create a config map, `mongo-configmap.yaml`:

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data: # key-value pairs
  database_url: mongodb-service # service name in mongodb.yaml
```

Now let's create the config map:

```sh
(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongo-configmap.yaml
configmap/mongodb-configmap created
```

Now that the config map is created, let's update mongo-express.yaml to reference it:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081 # look up mongo-express on dockerhub for ports/env vars etc. (https://hub.docker.com/_/mongo-express)
        env:
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom: # instead of hardcoding the value, we put it in a configmap and reference it (so other components can also use it)
            configMapKeyRef:
              name: mongodb-configmap # metadata:name in mongo-configmap.yaml
              key: database_url # data:database_url key in mongo-configmap.yaml
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME # same ones we set up for mongodb earlier
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-username # data:mongo-root-username key in mongo-secret.yml
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD # same ones we set up for mongodb earlier
          valueFrom:
            secretKeyRef:
              name: mongodb-secret # metadata:name in mongo-secret.yml
              key: mongo-root-password # data:mongo-root-password key in mongo-secret.yml
```

Let's create the mongo-express deployment:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongo-express.yaml
deployment.apps/mongo-express created
```

Verify:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
mongo-express-5b895fdb56-58wcd       1/1     Running   0          55s
mongodb-deployment-8f6675bc5-7hvxj   1/1     Running   0          46m

(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl logs mongo-express-5b895fdb56-58wcd
Waiting for mongodb-service:27017...
Welcome to mongo-express
------------------------

Mongo Express server listening at http://0.0.0.0:8081
Server is open to allow connections from anyone (0.0.0.0)
basicAuth credentials are "admin:pass", it is recommended you change this in your config.js!
Database connected
Admin Database connected
```

mongo-express is running at: `http://0.0.0.0:8081`

### STEP4: access mongo-express using an external service/browser

To access the mongo-express deployment through a browser, we need a mongo-express service. Let's modify mongo-express.yaml to include the service.

Append the following to `mongo-express.yaml`:

```yaml
# note the 3 dashes that separate the service document from the deployment document:
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express # should match template:metadata:labels in the deployment configuration to connect to the pod
  type: LoadBalancer # marks this as an external service (a confusing name, because the internal service also load-balances)
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081 # should match containerPort in the deployment configuration
      nodePort: 30000 # marks this as an external service (must be >= 30000). External web requests will access using this port
```

Now let's apply the mongo-express service:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl apply -f mongo-express.yaml
deployment.apps/mongo-express unchanged
service/mongo-express-service created

(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get service
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP          7h55m
# note EXTERNAL-IP is <pending>; it will show a public IP on a real k8s cluster
mongo-express-service   LoadBalancer   10.104.168.113   <pending>     8081:30000/TCP   41s # LoadBalancer type denotes an external service; CLUSTER-IP is the default internal IP of the service
mongodb-service         ClusterIP      10.97.250.196    <none>        27017/TCP        44m
```

Note that EXTERNAL-IP shows as pending. This is only an issue in minikube.
In a real cluster you will see a public ip.\n\nTo set a public ip (only for) minikube execute: `minikube service mongo-express-service`\n\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  minikube service mongo-express-service\n|-----------|-----------------------|-------------|---------------------------|\n| NAMESPACE |         NAME          | TARGET PORT |            URL            |\n|-----------|-----------------------|-------------|---------------------------|\n| default   | mongo-express-service |        8081 | http://192.168.64.2:30000 |\n|-----------|-----------------------|-------------|---------------------------|\n🎉  Opening service default/mongo-express-service in default browser...\n```\n\nassigns public ip: 192.168.64.2\n\nNow browser automatically opens : http://192.168.64.2:30000/\n\n![out](./img/out1.png)\n\nNow in the web page create a `Test-db` database:\n\n![out](./img/out2.png)\n\nWhen we made a request to create a db it in the background forwards request in following order:\n\n    - external web request --\u003e \n    - mongo-express external service --\u003e \n    - mongo-express pod --\u003e \n    - mongodb (internal) service --\u003e \n    - mogodb pod which created db for you and reflect back in rever path\n\n### Namespace\n\n- you can organize resources into namespaces\n- a cluster can have multiple namespaces\n- think of a namespace as a virtual cluster inside k8s cluster  \n- when you create a cluster, kubernetes give you 4 namespaces out of the box\n\n**get namespace** : `kubectl get namespaces`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get namespaces\nNAME              STATUS   AGE\ndefault           Active   9h #what we use to create resources unleass we create a new namespace\nkube-node-lease   Active   9h #node hearbeats   \nkube-public       Active   9h #publicly accessible data(configmap eith cluster info in it) access 
using kubectl cluster-info dump\nkube-system       Active   9h #not meant for your use.  \n``` \n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●   kubectl cluster-info\nKubernetes master is running at https://192.168.64.2:8443\nKubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n```\n\n**get namespace of a resource**:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●   kubectl get configmap -o yaml\napiVersion: v1\nitems:\n- apiVersion: v1\ndata:\n    database_url: mongodb-service\nkind: ConfigMap\nmetadata:\n    annotations:\n    kubectl.kubernetes.io/last-applied-configuration: |\n        {\"apiVersion\":\"v1\",\"data\":{\"database_url\":\"mongodb-service\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"mongodb-configmap\",\"namespace\":\"default\"}}\n    creationTimestamp: \"2020-12-30T23:58:07Z\"\n    managedFields:\n    - apiVersion: v1\n    fieldsType: FieldsV1\n    fieldsV1:\n        f:data:\n        .: {}\n        f:database_url: {}\n        f:metadata:\n        f:annotations:\n            .: {}\n            f:kubectl.kubernetes.io/last-applied-configuration: {}\n    manager: kubectl\n    operation: Update\n    time: \"2020-12-30T23:58:07Z\"\n    name: mongodb-configmap\n    namespace: default #note namespace is default\n    resourceVersion: \"20211\"\n    selfLink: /api/v1/namespaces/default/configmaps/mongodb-configmap\n    uid: 16f57828-54c3-49f7-b621-9dd44e87abe3\nkind: List\nmetadata:\nresourceVersion: \"\"\nselfLink: \"\"\n```\n\n**create a namespace**\n\nOPTION 1: `kubectl create namespace NAMESPACE_NAME`\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●   kubectl create namespace my-namespace\n\nnamespace/my-namespace created\n(base)  
rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes/demo-mongo   master ●  kubectl get namespaces\nNAME              STATUS   AGE\ndefault           Active   9h\nkube-node-lease   Active   9h\nkube-public       Active   9h\nkube-system       Active   9h\nmy-namespace      Active   4s #new namespace created    \n```\n\nOPTION2: create a namespace with a config file (better  )\n\n**when to use new namespace**\n\n1. when you have a lot of resources (namespaces: database, monitoring, elastic-stack, nginx-ingress)\n2. conflict: may teams, same application  \n3. resource shaing: staging and development envs in same cluster  \n4. blue/green deployments (Production-blue-active, production-green-new)\n5. access and resource limits in namespaces (namespace per team and team has access to only resources in its namespace only)\n\nNote:\n\n1. you cannot  access most resources  in a namespace from another (e.g. configmap, secrets). They need to be created in each namespace that need to use them.\n2. service resources however can be shared across namespaces\n3. some components are not created in a namespace (rather live globally in the cluster) e.g. 
persistent volumes , node      \n\n**To add a resource to a specific namespace** :\n\n`kubectl apply -f my-configmap.yaml --namespace=my-namespace`\n\n\nlet's create a configmap in `my-namespace` in my-configmap.yaml:\n\n```yml\napiVersion: v1\nkind: ConfigMap\nmetadata:\nname: my-configmap\nnamespace:  my-namespace\ndata: #key-value pairs\ndatabase_url: mongodb-service #service name in mongo.yml\n```\n\napply:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl apply -f my-configmap.yaml\nconfigmap/my-configmap created\n```\n\ncheck namespace:\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get configmap -n my-namespace #note -n my-namespace optionn, otherwise only lists default namespace maps\nNAME           DATA   AGE\nmy-configmap   1      2m5s\n\n\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get configmap -n my-namespace -o yaml\napiVersion: v1\nitems:\n- apiVersion: v1\ndata:\n    database_url: mongodb-service\nkind: ConfigMap\nmetadata:\n    annotations:\n    kubectl.kubernetes.io/last-applied-configuration: |\n        {\"apiVersion\":\"v1\",\"data\":{\"database_url\":\"mongodb-service\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"my-configmap\",\"namespace\":\"my-namespace\"}}\n    creationTimestamp: \"2020-12-31T17:03:44Z\"\n    managedFields:\n    - apiVersion: v1\n    fieldsType: FieldsV1\n    fieldsV1:\n        f:data:\n        .: {}\n        f:database_url: {}\n        f:metadata:\n        f:annotations:\n            .: {}\n            f:kubectl.kubernetes.io/last-applied-configuration: {}\n    manager: kubectl\n    operation: Update\n    time: \"2020-12-31T17:03:44Z\"\n    name: my-configmap\n    namespace: my-namespace #note new namespace configmap resource belongs to\n    resourceVersion: \"34318\"\n    selfLink: /api/v1/namespaces/my-namespace/configmaps/my-configmap\n    uid: 
1a8e4dd8-8a4d-4a87-bd1c-20d394f27f67\nkind: List\nmetadata:\nresourceVersion: \"\"\nselfLink: \"\"\n```\n\n###  change active namespace\n\n**only namespace my-namespace allowed!**\n\nInstall `kubens`:\n\n`brew install kubectx`\n\nget a list of namespaces: `kubens`\n\n```sh\n🍺  /usr/local/Cellar/kubectx/0.9.1: 12 files, 36.8KB, built in 3 seconds\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubens\ndefault #active one is highlighted with green\nkube-node-lease\nkube-public\nkube-system\nmy-namespace\n```\n\nchange active-namespace:\n\n```sh\n(base)  ✘ rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubens my-namespace\nContext \"minikube\" modified.\nActive namespace is \"my-namespace\".\n```\n\n`kubens` command will highlight my-namespace green now.\n\n### Kubernetes ingress\n\nTo access a service/pod through web instead of external service, make it internal service and ingress into a user-friendly domain name. so you won't have to open your app through an ip like when you used external service.\n\nSo the web request flows as follows:\n\n`http:my-app.com` ==\u003e `ingress-controller-pod`==\u003e `my-app-ingress` ==\u003e `my=app-service` ==\u003e `my-app-pod`\n\nYou also need implementation for ingress called `ingress-controller`.\n\ningress controller:\n- evaluate all the rules\n- manage redirection\n- entrypoint to cluster \n- many third-party implementations\n- k8s nginx ingress controller  \n\nthe ingress.yaml:\n\n```yml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress #note service Ingress\nmetadata:\nname: myapp-ingress\nspec:\nrules:\n- host: myapp.com #can't put anything here. should be a valid domain addreess. 
and you should map the domain name to ip address of the node\n    http:\n    paths:\n    - backend:\n        serviceName: myapp-internal-service #shoul correspond to the name of internal service\n        servicePort: 8080 #should correspond to the service port\n```\n\n**installing ingress controller in minikube**: `minikube addons enable ingress`\nLet's start by installing ingress controller in minikube\n\n```sh\nbase)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  minikube addons enable ingress\n🔎  Verifying ingress addon...\n🌟  The 'ingress' addon is enabled\n```\nThis automatically starts k8s nginx implementation of ingress controller\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get pod -n kube-system\nNAME                                        READY   STATUS      RESTARTS   AGE\ncoredns-f9fd979d6-55j65                     1/1     Running     0          25h\netcd-minikube                               1/1     Running     0          25h\ningress-nginx-admission-create-jt2gv        0/1     Completed   0          2m12s\ningress-nginx-admission-patch-cnfq2         0/1     Completed   0          2m12s\ningress-nginx-controller-558664778f-7gr7m   1/1     Running     0          2m12s #note the new ingress-nginx-controller runnig in cluster   \nkube-apiserver-minikube                     1/1     Running     0          25h\nkube-controller-manager-minikube            1/1     Running     0          25h\nkube-proxy-7d522                            1/1     Running     0          25h\nkube-scheduler-minikube                     1/1     Running     0          25h\nstorage-provisioner                         1/1     Running     1          25h\n```\n\nNow let's check kubernetes-dashboard in namespaces (note it is not visible):\n\n```sh\n(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●      kubectl get ns\nNAME              STATUS   AGE\ndefault           Active   
25h
kube-node-lease   Active   25h
kube-public       Active   25h
kube-system       Active   25h
my-namespace      Active   15h
```

If kubernetes-dashboard is not listed, run `minikube dashboard` to install it. This automatically installs kubernetes-dashboard and opens the proxy URL; terminate it with CTRL+C.

```sh
(base)  rdissanayakam@rbh12855  ~  minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:56444/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
^C
```

now try again:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get ns
NAME                   STATUS   AGE
default                Active   25h
kube-node-lease        Active   25h
kube-public            Active   25h
kube-system            Active   25h
kubernetes-dashboard   Active   17s # note the new kubernetes-dashboard namespace
my-namespace           Active   15h
```

get all resources under kubernetes-dashboard: `kubectl get all -n kubernetes-dashboard`

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get all -n kubernetes-dashboard
# pods
NAME                                             READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-7445d59dfd-h8cs7   0/1     ContainerCreating   0          3s
pod/kubernetes-dashboard-7d8466d688-bzw22        0/1     ContainerCreating   0          4s  # the kubernetes-dashboard pod

# services
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.102.13.175   <none>        8000/TCP   3s
service/kubernetes-dashboard        ClusterIP   10.97.146.25    <none>        443/TCP    4s # the internal kubernetes-dashboard service

# deployments
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   0/1     1            0           3s
deployment.apps/kubernetes-dashboard        0/1     1            0           4s

# replicasets
NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-7445d59dfd   1         1         0       3s
replicaset.apps/kubernetes-dashboard-7d8466d688        1         1         0       4s
```

Now let's create dashboard-ingress.yaml and apply it:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com # we still need to map an IP to this host
    http: # http forwarding to the internal service
      paths:
      - backend: # service backend
          serviceName: kubernetes-dashboard
          servicePort: 80 # should match the service/kubernetes-dashboard PORT(S) shown by kubectl get all -n kubernetes-dashboard
```

apply:

```sh
(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl apply -f dashboard-ingress.yaml
ingress.networking.k8s.io/dashboard-ingress created

(base)  rdissanayakam@rbh12855  ~/home/bitbucket/kubernetes   master ●  kubectl get ingress -n kubernetes-dashboard
NAME                CLASS    HOSTS           ADDRESS        PORTS   AGE
dashboard-ingress   <none>   dashboard.com   192.168.64.2   80      44s # note the IP address; we need to map it to dashboard.com
```

now `sudo vi /etc/hosts` and add the mapping:

```sh
192.168.64.2    dashboard.com
```

open dashboard.com:

![out](./img/out3.png)
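Note: the `networking.k8s.io/v1beta1` Ingress API used above was removed in Kubernetes 1.22. On a current cluster the same dashboard ingress would be written against `networking.k8s.io/v1`, which requires an explicit `path`/`pathType` and nests the backend under a `service` object — a sketch of the equivalent manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /          # v1 requires an explicit path
        pathType: Prefix # and a pathType (Prefix, Exact, or ImplementationSpecific)
        backend:
          service:       # serviceName/servicePort become a nested service object
            name: kubernetes-dashboard
            port:
              number: 80
```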
**ingress default backend**: `kubectl describe ingress dashboard-ingress -n kubernetes-dashboard`

```sh
(base)  ✘ rdissanayakam@rbh12855  ~  kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
Name:             dashboard-ingress
Namespace:        kubernetes-dashboard
Address:          192.168.64.3
Default backend:  default-http-backend:80 (<none>) # note the default backend
Rules:
  Host           Path  Backends
  ----           ----  --------
  dashboard.com
                       kubernetes-dashboard:80 (172.17.0.4:9090)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"dashboard-ingress","namespace":"kubernetes-dashboard"},"spec":{"rules":[{"host":"dashboard.com","http":{"paths":[{"backend":{"serviceName":"kubernetes-dashboard","servicePort":80}}]}}]}}

Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  56m   nginx-ingress-controller  Ingress kubernetes-dashboard/dashboard-ingress
  Normal  UPDATE  55m   nginx-ingress-controller  Ingress kubernetes-dashboard/dashboard-ingress
```

**how to configure paths**:

myapp-ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.com
    http: # http forwarding to internal services
      paths:
      - path: /analytics # http://myapp.com/analytics
        backend: # service backend
          serviceName: analytics-service
          servicePort: 3000
      - path: /shopping # http://myapp.com/shopping
        backend: # service backend
          serviceName: shopping-service
          servicePort: 8080
```

or you could choose to have a subdomain for each service:

myapp-ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: analytics.myapp.com
    http:
      paths:
      - backend: # service backend
          serviceName: analytics-service
          servicePort: 3000
  - host: shopping.myapp.com
    http:
      paths:
      - backend: # service backend
          serviceName: shopping-service
          servicePort: 8080
```

**configuring TLS certificates (https) for ingress**:

myapp-tls-ingress.yml:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls: # note the tls keyword
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls # must be created as a Secret and referenced here
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /analytics
        backend:
          serviceName: analytics-service
          servicePort: 3000
      - path: /shopping
        backend:
          serviceName: shopping-service
          servicePort: 8080
```

myapp-tls-secret.yaml:

```yml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls # matches the tls secretName in myapp-tls-ingress above
  namespace: default # must be the same namespace as the ingress component above
type: kubernetes.io/tls
data:
  tls.crt: dXNlcm5hbWU= # key must be named tls.crt; base64-encoded certificate
  tls.key: bXlwYXNzd29yZA== # key must be named tls.key; base64-encoded private key
```
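The `tls.crt`/`tls.key` values in the secret above are only base64 placeholders (they decode to `username`/`mypassword`, not a real certificate). For a real certificate/key pair — `cert.pem` and `key.pem` here are hypothetical file names — you would base64-encode the files yourself or let kubectl build the secret:

```shell
# the sample values above are just base64 text, not real certs:
printf 'username'   | base64   # -> dXNlcm5hbWU=
printf 'mypassword' | base64   # -> bXlwYXNzd29yZA==

# for real files, kubectl can create a kubernetes.io/tls secret directly:
# kubectl create secret tls myapp-secret-tls --cert=cert.pem --key=key.pem --namespace=default
```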
### Helm

- Helm changes a lot between versions.

**What is Helm?**

- package manager for kubernetes:
    - a bundle of yaml files (say, to deploy elk, database apps, monitoring apps)
    - create your own Helm charts with Helm
    - push them to a Helm repository
    - download and use existing Helm charts other people pushed

- templating engine:
    - allows creating one template file for microservices that share a common yaml structure where only a few values change
    - without Helm you would need to create a yml for each microservice even though they are pretty similar
    - in template files you have placeholders for the values that can change
    - e.g. a template file:

    ```yml
    apiVersion: v1
    kind: Pod
    metadata:
        name: {{.Values.name}}
    spec:
        containers:
        - name: {{.Values.container.name}}
          image: {{.Values.container.image}}
          ports:
          - containerPort: {{.Values.container.port}}
    ```

    These template values come from an additional external config called values.yaml (or from the --set flag):

    ```yml
    name: my-app
    container:
        name: my-app-container
        image: my-app-image
        port: 9001
    ```

    - templates are especially helpful in CI/CD because your microservice build can replace template values on the fly

- deploy the same application across different environments/clusters:
    - instead of deploying each microservice yml file individually in each cluster, you can package them up as a Helm chart and deploy it to each environment with one command

**Helm chart structure**

```sh
mychart/ # name of the chart
    - Chart.yaml # meta info about the chart
    - values.yaml # default values for the template files
    - charts/ # chart dependencies
    - templates/ # template files
```

To deploy the above yaml files, execute:

`helm install CHART_NAME`

The values will be injected into the template files and deployed.

values.yaml holds **default** values that can be overridden:

values.yaml:

```yml
imageName: myapp
port: 8080
version: 1.0.0
```

These can be overridden by providing an additional yaml file:

`helm install --values=my-values.yaml CHART_NAME`

my-values.yaml (you can add additional values as well):

```yml
version: 2.0.0
```

or, less preferably, on the command line:

`helm install --set version=2.0.0`

### Helm does release management

**Helm version 2**:
- Helm Client (Helm CLI)
- Server (Tiller), which runs inside the kubernetes cluster
- when we execute `helm install CHART_NAME`, the client sends a request to Tiller, which installs the components in the kubernetes cluster
- this is called release management
- Tiller keeps track of the history of chart executions (revision 1: `helm install chartname`, revision 2: `helm upgrade chartname`)
- therefore, if your upgrade goes wrong, you can roll back: `helm rollback chartname` (revision 3: `helm rollback chartname`)
- **PROBLEM WITH THIS SETUP**: Tiller has too much power inside the k8s cluster — a security issue
- so in Helm 3 they removed Tiller

**Helm version 3**
- no Tiller; solves the security concern

### K8s Volumes

**How to persist data in k8s using volumes?**

3 components:

1. PV: persistent volume
2. PVC: persistent volume claim
3. SC: storage class

**the need for volumes**

say you have two pods: `my-app-pod` ==uses==> `mysql-pod`

if your mysql pod restarts, all your data is lost, as k8s provides no data persistence out of the box

so we need storage that:
- doesn't depend on the pod lifecycle
- is available on all nodes
- survives even if the cluster crashes

In addition to databases, other components may also need a persistent storage directory.

Solution: **persistent volumes**

**Persistent Volumes**
- a cluster resource
- created via yaml file
- kind: PersistentVolume

**nfs as storage backend**:

```yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 5Gi # you must configure and maintain the actual storage (local disk / nfs server / cloud storage) yourself
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.0
  nfs: # nfs storage backend
    path: /dir/path/on/nfs/server
    server: nfs-server-ip-address
```
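A claim that could bind to the nfs PV above must request an access mode the PV offers, a capacity no larger than the PV provides, and the same storageClassName (`slow`); the claim name here is hypothetical. A minimal sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc          # hypothetical name
spec:
  storageClassName: slow # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce      # must be among the PV's accessModes
  resources:
    requests:
      storage: 5Gi       # must be <= the PV's capacity
```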
**google cloud as storage backend**:

```yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk: # google cloud parameters
    pdName: my-data-disk
    fsType: ext4
```

**node local storage as storage backend**:

```yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
```

- persistent volumes are not namespaced.

**Persistent Volume Claim**

- the application has to **claim** the persistent volume (application-pod ==> pvc ==> pv)
- created with yaml configs
- while the Admin role creates persistent volumes, the User role creates persistent volume claims (claim a PV using a PVC)

```yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: pvc-name
spec: # any volume that matches the claim spec is used
    storageClassName: manual
    volumeMode: Filesystem
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 10Gi
```

Now in your pod config the volume `claimName` should match the claim name above:

```yml
apiVersion: v1
kind: Pod
metadata:
    name: mypod
spec:
    containers:
        - name: myfrontend
          image: nginx
          volumeMounts: # a volume from the volumes section mounted into the container
          - mountPath: "/var/www/html" # apps can access the volume data under /var/www/html
            name: mypd
    volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: pvc-name # should match the PersistentVolumeClaim name
```

**Pod with different volume types**

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
spec:
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - image: elastic:latest
        name: elastic-container
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /var/lib/data
        - name: es-secret-dir
          mountPath: /var/lib/secret
        - name: es-config-dir
          mountPath: /var/lib/config
      volumes:
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: es-pvc # name of an existing PersistentVolumeClaim
      - name: es-secret-dir
        secret:
          secretName: es-secret
      - name: es-config-dir
        configMap:
          name: es-config-map
```

### Storage class (SC)

An SC creates persistent volumes dynamically (otherwise developers must ask admins to create PVs before they can configure PVCs).

**storage class config:**
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
```

To claim a dynamically provisioned persistent volume, create a pvc that references the storage class:

**pvc config:**
```yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: storage-class-name # must match metadata.name of the storage class config yml
```
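The dynamic-provisioning flow is: pod mounts PVC → PVC references the SC → the SC's provisioner creates a matching PV on demand. A minimal pod consuming a PVC named `mypvc` might look like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-pod # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx # example image
    volumeMounts:
    - mountPath: /data
      name: dynamic-storage
  volumes:
  - name: dynamic-storage
    persistentVolumeClaim:
      claimName: mypvc # the storage class provisions the backing PV when this claim is bound
```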
### Deploying Stateful apps with StatefulSet

- examples of stateful apps:
    - an app that keeps its state using storage/a db
    - compared to a stateless app, where each new request is handled as brand new
- **stateless applications** are deployed using a **Deployment** (and replicated to pods as needed)
- **stateful applications** are deployed using a **StatefulSet**
- both manage pods and configure storage the same way
- however, **replicating stateful applications is more difficult**:
    - a sticky identity needs to be maintained for each of its pods
    - each pod will need a PV attached for its state

### Kubernetes services

types:
1. ClusterIP Services
2. NodePort Services
3. Headless Services
4. LoadBalancer Services

**Why do we need services?**

- each pod has its own IP, but pods are ephemeral (destroyed frequently), so the IP changes
- services provide:
    - a stable IP
    - loadbalancing
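For example, a minimal ClusterIP Service (the default type) gives a set of pods a stable virtual IP and load-balances requests across them; the name, label, and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service # hypothetical name
spec:
  # type defaults to ClusterIP: a stable virtual IP reachable inside the cluster
  selector:
    app: my-app # forwards to all pods carrying this label, load-balancing between them
  ports:
  - port: 8080       # port the service listens on
    targetPort: 8080 # containerPort of the backing pods
```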