{"id":13449348,"url":"https://github.com/eon01/kubernetes-workshop","last_synced_at":"2025-05-15T02:09:56.579Z","repository":{"id":35254403,"uuid":"202927395","full_name":"eon01/kubernetes-workshop","owner":"eon01","description":"⚙️ A Gentle introduction to Kubernetes with more than just the basics. 🌟 Give it a star if you like it.","archived":false,"fork":false,"pushed_at":"2023-02-28T17:06:24.000Z","size":887,"stargazers_count":3231,"open_issues_count":9,"forks_count":242,"subscribers_count":54,"default_branch":"master","last_synced_at":"2025-04-14T00:58:31.132Z","etag":null,"topics":["ambassador","api-gateway","devops","devops-tools","docker","docker-images","kubernetes","kubernetes-api","kubernetes-apis","kubernetes-client","kubernetes-cluster","kubernetes-deployment","kubernetes-service","kubernetes-services","kubernetes-workshop","minikube","minikube-cluster","pods","python","service-mesh"],"latest_commit_sha":null,"homepage":"https://medium.com/faun/a-gentle-introduction-to-kubernetes-4961e443ba26","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/eon01.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-08-17T20:14:22.000Z","updated_at":"2025-02-27T18:10:17.000Z","dependencies_parsed_at":"2023-01-15T17:07:06.385Z","dependency_job_id":"271cc353-792d-4c7d-9788-37b9765d0471","html_url":"https://github.com/eon01/kubernetes-workshop","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/eon01%2Fkubernetes-workshop","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHu
b/repositories/eon01%2Fkubernetes-workshop/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/eon01%2Fkubernetes-workshop/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/eon01%2Fkubernetes-workshop/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/eon01","download_url":"https://codeload.github.com/eon01/kubernetes-workshop/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254259386,"owners_count":22040821,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ambassador","api-gateway","devops","devops-tools","docker","docker-images","kubernetes","kubernetes-api","kubernetes-apis","kubernetes-client","kubernetes-cluster","kubernetes-deployment","kubernetes-service","kubernetes-services","kubernetes-workshop","minikube","minikube-cluster","pods","python","service-mesh"],"created_at":"2024-07-31T06:00:36.106Z","updated_at":"2025-05-15T02:09:51.560Z","avatar_url":"https://github.com/eon01.png","language":"Python","readme":"# Brought to you by [FAUN](https://faun.dev?utm_source=faun\u0026utm_medium=github\u0026utm_campaign=kubernetes-workshop)\n\n[![Join us](images/join.png)](https://faun.dev/join?utm_source=faun\u0026utm_medium=github\u0026utm_campaign=kubernetes-workshop)\n\n- Slides available [here](https://slides.com/eon01/kubernetes-workshop#/).\n- Original article posted [here](https://medium.com/faun/a-gentle-introduction-to-kubernetes-4961e443ba26).\n- Source code [here](https://github.com/eon01/kubernetes-workshop).\n- 
Inspired from my course: [Learn Kubernetes by building 10 projects](https://learn.faun.dev)\n- [Buy Me A Coffee](https://www.buymeacoffee.com/joinFAUN)\n\n\n# Table of Contents\n- [Brought to you by FAUN](#brought-to-you-by-faun)\n- [Table of Contents](#table-of-contents)\n- [Introduction](#introduction)\n  - [Development Environment](#development-environment)\n  - [Developing a Trending Git Repositories API (Flask)](#developing-a-trending-git-repositories-api-flask)\n- [Pushing the Image to a Remote Registry](#pushing-the-image-to-a-remote-registry)\n  - [A Security Notice](#a-security-notice)\n- [Installing Minikube](#installing-minikube)\n- [Deploying to Kubernetes](#deploying-to-kubernetes)\n  - [Services](#services)\n  - [Inconvenience of Load Balancer Service](#inconvenience-of-load-balancer-service)\n- [An API Gateway](#an-api-gateway)\n  - [Edge Proxy vs Service Mesh](#edge-proxy-vs-service-mesh)\n- [Accessing the Kubernetes API](#accessing-the-kubernetes-api)\n  - [Using an API Client](#using-an-api-client)\n  - [Accessing the API from inside a POD](#accessing-the-api-from-inside-a-pod)\n- [Star History](#star-history)\n- [Thanks to all the contributors!](#thanks-to-all-the-contributors)\n\n# Introduction\n\nIn this workshop, we're going to:\n\n- Deploy Kubernetes services and an Ambassador API gateway.\n- Examine the difference between Kubernetes proxies and service meshes like Istio.\n- Access the Kubernetes API from the outside and from a Pod.\n- Understand which API to choose.\n- See how Service Accounts and RBAC work.\n- Discover some security pitfalls when building Docker images, among other interesting things.\n- Other things :-)\n\nWe will start by developing and then deploying a simple Python application (a Flask API that returns the list of trending repositories by programming language).\n\n## Development Environment\n\nWe are going to use Python 3.6.7.\n\nWe are using Ubuntu 18.04, which comes with Python 3.6 by default. 
You should be able to invoke it with the command python3. (Ubuntu 17.10 and above also come with Python 3.6.7)\n\nIf you use Ubuntu 16.10 or 17.04, you should be able to install it with the following commands:\n\n```bash\nsudo apt-get update\nsudo apt-get install python3.6\n```\n\nIf you are using Ubuntu 14.04 or 16.04, you need to get Python 3 from a Personal Package Archive (PPA):\n\n```bash\nsudo add-apt-repository ppa:deadsnakes/ppa\nsudo apt-get update\nsudo apt-get install python3.6\n```\n\nFor other operating systems, visit [this guide](https://realpython.com/installing-python/), follow the instructions and install Python 3.\n\nNow install pip, the Python package manager:\n\n```bash\nsudo apt-get install python3-pip\n```\n\nNext, install Virtualenvwrapper, a virtual environment manager:\n\n```bash\nsudo pip3 install virtualenvwrapper\n```\n\nCreate a folder for your virtualenvs (I use ~/dev/PYTHON_ENVS) and set it as WORKON_HOME:\n\n```bash\nmkdir ~/dev/PYTHON_ENVS\nexport WORKON_HOME=~/dev/PYTHON_ENVS\n```\n\nTo source the environment details when the user logs in, add the following lines to ~/.bashrc:\n\n```bash\nsource \"/usr/local/bin/virtualenvwrapper.sh\"\nexport WORKON_HOME=\"~/dev/PYTHON_ENVS\"\n```\n\nMake sure to adapt WORKON_HOME to your actual path.\nNow we need to create and then activate the new environment:\n\n```bash\nmkvirtualenv --python=/usr/bin/python3 trendinggitrepositories\nworkon trendinggitrepositories\n```\n\nLet's create the application directories:\n\n```bash\nmkdir trendinggitrepositories\ncd trendinggitrepositories\nmkdir api\ncd api\n```\n\nOnce the virtual environment is activated, we can install Flask:\n\n```bash\npip install flask\n```\n\n\n## Developing a Trending Git Repositories API (Flask)\n\nInside the `api` folder, create a file called `app.py` and add the following code:\n\n```python\nfrom flask import Flask\n\napp = Flask(__name__)\n\n@app.route('/')\ndef index():\n    
return \"Hello, World!\"\n\nif __name__ == '__main__':\n    app.run(debug=True)\n```\n\nThis will return a \"Hello, World!\" message when a user requests the \"/\" route.\n\nNow run it using `python app.py`, and you will see output similar to the following:\n\n```\n* Serving Flask app \"api\" (lazy loading)\n* Environment: production\n  WARNING: This is a development server. Do not use it in a production deployment.\n  Use a production WSGI server instead.\n* Debug mode: on\n* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\n* Restarting with stat\n* Debugger is active!\n* Debugger PIN: 465-052-587\n```\n\n\n![](images/ghtokens.png)\n\nWe now need to install PyGithub to communicate with the GitHub API v3.\n\n```bash\npip install PyGithub\n```\n\nGo to GitHub and [create a new app](https://github.com/settings/applications/new). We will need the application \"Client ID\" and \"Client Secret\":\n\n```python\nfrom github import Github\ng = Github(\"xxxxxxxxxxxxx\", \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\")\n```\n\nThis is what the mini API looks like:\n\n```python\nfrom flask import Flask, jsonify, abort\nimport urllib.request, json\nfrom flask import request\n\napp = Flask(__name__)\n\nfrom github import Github\ng = Github(\"xxxxxx\", \"xxxxxxxxxxxxx\")\n\n@app.route('/')\ndef get_repos():\n    r = []\n\n    try:\n        args = request.args\n        n = int(args['n'])\n    except (ValueError, LookupError) as e:\n        abort(jsonify(error=\"No integer provided for argument 'n' in the URL\"))\n\n    repositories = g.search_repositories(query='language:python')[:n]\n\n    for repo in repositories:\n        with urllib.request.urlopen(repo.url) as url:\n            data = json.loads(url.read().decode())\n        r.append(data)\n\n    return jsonify({'repos':r })\n\nif __name__ == '__main__':\n    app.run(debug=True)\n```\n\nLet's hide the GitHub token and secret, as well as other variables, in the environment.\n\n```python\nfrom flask import Flask, 
jsonify, abort, request\nimport urllib.request, json, os\nfrom github import Github\n\napp = Flask(__name__)\n\nCLIENT_ID = os.environ['CLIENT_ID']\nCLIENT_SECRET = os.environ['CLIENT_SECRET']\nDEBUG = os.environ['DEBUG']\n\ng = Github(CLIENT_ID, CLIENT_SECRET)\n\n\n@app.route('/')\ndef get_repos():\n    r = []\n\n    try:\n        args = request.args\n        n = int(args['n'])\n    except (ValueError, LookupError) as e:\n        abort(jsonify(error=\"No integer provided for argument 'n' in the URL\"))\n\n    repositories = g.search_repositories(query='language:python')[:n]\n\n    for repo in repositories:\n        with urllib.request.urlopen(repo.url) as url:\n            data = json.loads(url.read().decode())\n        r.append(data)\n\n    return jsonify({'repos':r })\n\nif __name__ == '__main__':\n    app.run(debug=DEBUG)\n```\n\nThe code above will return the top \"n\" repositories using Python as a programming language. We can use other languages too:\n\n```python\nfrom flask import Flask, jsonify, abort, request\nimport urllib.request, json, os\nfrom github import Github\n\napp = Flask(__name__)\n\nCLIENT_ID = os.environ['CLIENT_ID']\nCLIENT_SECRET = os.environ['CLIENT_SECRET']\nDEBUG = os.environ['DEBUG']\n\ng = Github(CLIENT_ID, CLIENT_SECRET)\n\n\n@app.route('/')\ndef get_repos():\n    r = []\n\n    try:\n        args = request.args\n        n = int(args['n'])\n        l = args['l']\n    except (ValueError, LookupError) as e:\n        abort(jsonify(error=\"Please provide 'n' and 'l' parameters\"))\n\n    repositories = g.search_repositories(query='language:' + l)[:n]\n\n\n    try:\n        for repo in repositories:\n            with urllib.request.urlopen(repo.url) as url:\n                data = json.loads(url.read().decode())\n            r.append(data)\n        return jsonify({\n            'repos':r,\n            'status': 'ok'\n            })\n    except IndexError as e:\n        return jsonify({\n            'repos':r,\n            'status': 'ko'\n  
          })\n\nif __name__ == '__main__':\n    app.run(debug=DEBUG)\n```\n\nIn a .env file, add the variables you want to use:\n\n```\nCLIENT_ID=\"xxxxx\"\nCLIENT_SECRET=\"xxxxxx\"\nENV=\"dev\"\nDEBUG=\"True\"\n```\n\nBefore running the Flask application, you need to source these variables:\n\n```bash\nsource .env\n```\n\n\n\nNow, you can go to `http://0.0.0.0:5000/?n=1\u0026l=python` to get the trendiest Python repository or `http://0.0.0.0:5000/?n=1\u0026l=c` for C programming language.\nHere is a list of other programming languages you can test your code with:\n\n```\nC++\nAssembly\nObjective\nMakefile\nShell\nPerl\nPython\nRoff\nYacc\nLex\nAwk\nUnrealScript\nGherkin\nM4\nClojure\nXS\nPerl\nsed\n```\n\nThe list is long, but our mini API is working fine.\nNow, let's freeze the dependencies:\n\n```bash\npip freeze \u003e requirements.txt\n```\n\nBefore running the API on Kubernetes, let's create a Dockerfile. This is a typical Dockerfile for a Python app:\n\n```\nFROM python:3\nENV PYTHONUNBUFFERED 1\nRUN mkdir /app\nWORKDIR /app\nCOPY requirements.txt /app\nRUN pip install --upgrade pip\nRUN pip install -r requirements.txt\nCOPY . 
/app\nEXPOSE 5000\nCMD [ \"python\", \"app.py\" ]\n```\n\nNow you can build it:\n\n```bash\ndocker build --no-cache -t tgr .\n```\n\nThen run it:\n\n```bash\ndocker rm -f tgr\ndocker run -it --name tgr -p 5000:5000 -e CLIENT_ID=\"xxxxxxx\" -e CLIENT_SECRET=\"xxxxxxxxxxxxxxx\" -e DEBUG=\"True\" tgr\n```\n\n\nLet's include some other variables as environment variables:\n\n```python\nfrom flask import Flask, jsonify, abort, request\nimport urllib.request, json, os\nfrom github import Github\n\napp = Flask(__name__)\n\nCLIENT_ID = os.environ['CLIENT_ID']\nCLIENT_SECRET = os.environ['CLIENT_SECRET']\nDEBUG = os.environ['DEBUG']\nHOST = os.environ['HOST']\nPORT = os.environ['PORT']\n\ng = Github(CLIENT_ID, CLIENT_SECRET)\n\n\n@app.route('/')\ndef get_repos():\n    r = []\n\n    try:\n        args = request.args\n        n = int(args['n'])\n        l = args['l']\n    except (ValueError, LookupError) as e:\n        abort(jsonify(error=\"Please provide 'n' and 'l' parameters\"))\n\n    repositories = g.search_repositories(query='language:' + l)[:n]\n\n\n    try:\n        for repo in repositories:\n            with urllib.request.urlopen(repo.url) as url:\n                data = json.loads(url.read().decode())\n            r.append(data)\n        return jsonify({\n            'repos':r,\n            'status': 'ok'\n            })\n    except IndexError as e:\n        return jsonify({\n            'repos':r,\n            'status': 'ko'\n            })\n\nif __name__ == '__main__':\n    app.run(debug=DEBUG, host=HOST, port=PORT)\n```\n\nFor security reasons, let's change the user inside the container from root to a newly created user with fewer privileges:\n\n```\nFROM python:3\nENV PYTHONUNBUFFERED 1\nRUN adduser pyuser\n\nRUN mkdir /app\nWORKDIR /app\nCOPY requirements.txt /app\nRUN pip install --upgrade pip\nRUN pip install -r requirements.txt\nCOPY . 
.\nRUN chmod +x app.py\n\nRUN chown -R pyuser:pyuser /app\nUSER pyuser\n\n\nEXPOSE 5000\nCMD [\"python\",\"./app.py\"]\n```\n\nNow if we want to run the container, we need to add many environment variables to the docker run command. An easier solution is using `--env-file` with `docker run`:\n\n```bash\ndocker run -it --env-file .env my_container\n```\n\nOur .env file looks like the following:\n\n```\nCLIENT_ID=\"xxxx\"\nCLIENT_SECRET=\"xxxx\"\nENV=\"dev\"\nDEBUG=\"True\"\nHOST=\"0.0.0.0\"\nPORT=5000\n```\n\nAfter this modification, rebuild the image with `docker build -t tgr .` and run it using:\n\n```bash\ndocker rm -f tgr;\ndocker run -it --name tgr -p 5000:5000 --env-file .env tgr\n```\n\nOur application runs using `python app.py`, i.e. the development web server that ships with Flask. It's great for development and local execution of your program; however, it's not designed to run in production, whether for a monolithic app or a microservice.\n\nA production server typically faces abuse from spammers and script kiddies, and should be able to handle high traffic. In our case, a good solution is using a WSGI HTTP server like Gunicorn (or uWSGI).\n\nFirst, let's install `gunicorn` with the following command: `pip install gunicorn`. This will require us to update our `requirements.txt` with `pip freeze \u003e requirements.txt`.\n\nThis is why we are going to change our Dockerfile:\n\n```\nFROM python:3\nENV PYTHONUNBUFFERED 1\nRUN adduser pyuser\n\nRUN mkdir /app\nWORKDIR /app\nCOPY requirements.txt /app\nRUN pip install --upgrade pip\nRUN pip install -r requirements.txt\nCOPY . 
.\nRUN chmod +x app.py\n\nRUN chown -R pyuser:pyuser /app\nUSER pyuser\n\n\nEXPOSE 5000\nCMD [\"gunicorn\", \"app:app\", \"-b\", \"0.0.0.0:5000\"]\n```\n\nTo optimize the WSGI server, we need to set the number of its workers and threads to:\n\n```\nworkers = multiprocessing.cpu_count() * 2 + 1\nthreads = 2 * multiprocessing.cpu_count()\n```\n\nThis is why we are going to create another Python configuration file (`config.py`):\n\n```python\nimport multiprocessing\nworkers = multiprocessing.cpu_count() * 2 + 1\nthreads = 2 * multiprocessing.cpu_count()\n```\n\nIn the same file, we are going to include other configurations of Gunicorn:\n\n```python\nfrom os import environ as env\n# the PORT default must be a string, since it is concatenated with other strings\nbind = env.get(\"HOST\", \"0.0.0.0\") + \":\" + env.get(\"PORT\", \"5000\")\n```\n\nThis is the final `config.py` file:\n\n```python\nimport multiprocessing\nfrom os import environ as env\n\nworkers = multiprocessing.cpu_count() * 2 + 1\nthreads = 2 * multiprocessing.cpu_count()\nbind = env.get(\"HOST\", \"0.0.0.0\") + \":\" + env.get(\"PORT\", \"5000\")\n```\n\nConsequently, we should adapt the Dockerfile to the new Gunicorn configuration by changing the last line to:\n\n```\nCMD [\"gunicorn\", \"app:app\", \"--config=config.py\"]\n```\n\nNow, build with `docker build -t tgr .` and run `docker run -it --env-file .env -p 5000:5000 tgr`.\n\n\n\n# Pushing the Image to a Remote Registry\n\nA Docker registry is a storage and distribution system for named Docker images.\n\nThe images we built are stored in our local environment and can only be used if you deploy locally. However, if you choose to deploy a Kubernetes cluster in the cloud or any other environment, these images will not be found. 
This is why we need to push the built images to a remote registry.\n\nThink of container registries as a git system for Docker images.\n\nThere are plenty of container registries:\n\n- Docker Hub\n- Amazon Elastic Container Registry (ECR)\n- Azure Container Registry (ACR)\n- Google Container Registry (GCR)\n- CoreOS Quay\n\nYou can also host your own private container registry that supports OAuth, LDAP and Active Directory authentication using the registry provided by Docker:\n\n```bash\ndocker run -d -p 5000:5000 --restart=always --name registry registry:2\n```\n\nMore about self-hosting a registry can be found in [the official Docker documentation](https://docs.docker.com/registry/deploying/).\n\nWe are going to use Docker Hub; this is why you need to create an account on [hub.docker.com](https://hub.docker.com/).\n\nNow, using the Docker CLI, log in:\n\n```bash\ndocker login\n```\n\nNow rebuild the image using a new tag in the form `\u003cusername\u003e/\u003cimage_name\u003e:\u003ctag_version\u003e`:\n\n```bash\ndocker build -t \u003cusername\u003e/\u003cimage_name\u003e:\u003ctag_version\u003e .\n```\n\nExample:\n\n```bash\ndocker build -t eon01/tgr:1 .\n```\n\nFinally, push the image:\n\n```bash\ndocker push eon01/tgr:1\n```\n\n\n\n## A Security Notice\n\nMany public (and even private) Docker images seem to be secure, but that's not the case. When we built our image, we told Docker to copy all the files from the application folder into the image, and we pushed it to an external public registry.\n\n```\nCOPY . .\n```\n\nOr\n\n```\nADD . 
.\n```\n\nThe above commands will even copy the `.env` file containing our secrets.\n\nA good solution is to tell Docker to ignore these files during the build using a `.dockerignore` file:\n\n```\n**.git\n**.gitignore\n**README.md\n**env.*\n**Dockerfile*\n**docker-compose*\n**.env\n```\n\nAt this stage, you should remove any image that you pushed to a remote registry, reset the GitHub tokens, and build the new image without any cache:\n\n```bash\ndocker build -t eon01/tgr:1 . --no-cache\n```\n\nPush it again:\n\n```bash\ndocker push eon01/tgr:1\n```\n\n\n\n# Installing Minikube\n\nOne of the fastest ways to try Kubernetes is using Minikube, which will create a virtual machine for you and deploy a ready-to-use Kubernetes cluster.\n\nBefore you begin the installation, you need to make sure that your laptop supports virtualization.\n\nIf you're using Linux, run the following command and make sure that the output is not empty:\n\n```bash\ngrep -E --color 'vmx|svm' /proc/cpuinfo\n```\n\nMac users should execute:\n\n```bash\nsysctl -a | grep -E --color 'machdep.cpu.features|VMX'\n```\n\nIf you see `VMX` in the output, the VT-x feature is enabled on your machine.\n\nWindows users should run `systeminfo` and look for the following output:\n\n```\nHyper-V Requirements:     VM Monitor Mode Extensions: Yes\n                          Virtualization Enabled In Firmware: Yes\n                          Second Level Address Translation: Yes\n                          Data Execution Prevention Available: Yes\n```\n\nIf everything is okay, you need to install a hypervisor. 
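The Linux check above can equally be scripted; here is a minimal Python sketch (a hypothetical helper, not part of the workshop, assuming a readable `/proc/cpuinfo` on a real machine):

```python
import re

def virtualization_supported(cpuinfo_text: str) -> bool:
    # Equivalent of: grep -E 'vmx|svm' /proc/cpuinfo
    # vmx = Intel VT-x, svm = AMD-V
    return re.search(r"\b(vmx|svm)\b", cpuinfo_text) is not None

# On a real Linux machine you would pass open("/proc/cpuinfo").read()
print(virtualization_supported("flags : fpu vme msr vmx sse sse2"))  # True
```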
You have a list of possibilities here:\n\n- [KVM](https://www.linux-kvm.org/)\n- [VirtualBox](https://www.virtualbox.org/wiki/Downloads)\n- [HyperKit](https://github.com/moby/hyperkit)\n- [VMware Fusion](https://www.vmware.com/products/fusion)\n- [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install)\n\nSome of these hypervisors are only compatible with certain OSs, like Hyper-V (formerly known as Windows Server Virtualization) for Windows.\n\nVirtualBox, however, is cross-platform, which is why we are going to use it here. Make sure to [follow the instructions](https://www.virtualbox.org/wiki/Downloads) to install it.\n\nNow, install Minikube.\n\nLinux systems:\n\n```bash\ncurl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \u0026\u0026 chmod +x minikube\nsudo install minikube /usr/local/bin\n```\n\nmacOS:\n\n```bash\nbrew cask install minikube\n```\n\nOr:\n\n```bash\ncurl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \u0026\u0026 chmod +x minikube\nsudo mv minikube /usr/local/bin\n```\n\nWindows:\n\nUse Chocolatey as an administrator:\n\n```bash\nchoco install minikube\n```\n\nOr use [the installer binary](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe).\n\nMinikube does not support all Kubernetes features (load balancing, for example); however, it does support the most important ones:\n\n- DNS\n- NodePorts\n- ConfigMaps and Secrets\n- Dashboards\n- Container Runtime: Docker, [rkt](https://github.com/rkt/rkt), [CRI-O](https://github.com/kubernetes-incubator/cri-o), and [containerd](https://github.com/containerd/containerd)\n- Enabling CNI (Container Network Interface)\n- Ingress\n\nYou can also add different addons like:\n\n- addon-manager\n- dashboard\n- default-storageclass\n- efk\n- freshpod\n- gvisor\n- heapster\n- ingress\n- logviewer\n- 
metrics-server\n- nvidia-driver-installer\n- nvidia-gpu-device-plugin\n- registry\n- registry-creds\n- storage-provisioner\n- storage-provisioner-gluster\n\nIf you run `minikube start`, a cluster called \"minikube\" will be created; however, you have other options besides creating a regular Minikube cluster. In this example, we are going to create a cluster called \"workshop\", enable the Swagger UI to browse the API, and log to stderr:\n\n```bash\nminikube start -p workshop --extra-config=apiserver.enable-swagger-ui=true --alsologtostderr\n```\n\nYou have plenty of other options to start a Minikube cluster; you can, for instance, choose the Kubernetes version and the VM driver:\n\n```bash\nminikube start --kubernetes-version=\"v1.12.0\" --vm-driver=\"virtualbox\"\n```\n\nStart the new cluster:\n\n```bash\nminikube start -p workshop --extra-config=apiserver.enable-swagger-ui=true --alsologtostderr\n```\n\nYou can get detailed information about the cluster using:\n\n```bash\nkubectl cluster-info\n```\n\nIf you didn't install kubectl, [follow the official instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl).\n\nYou can open the dashboard using `minikube -p workshop dashboard`.\n\n\n\n# Deploying to Kubernetes\n\nWe have three main ways to deploy our container to Kubernetes and scale it to N replicas.\n\nThe first one is the original form of replication in Kubernetes, and it's called **Replication Controller**.\n\nEven though Replica Sets have replaced it, it's still used in some codebases.\n\nThis is a typical example:\n\n```yaml\napiVersion: 
extensions/v1beta1\nkind: ReplicaSet\nmetadata:\n  name: app\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: app\n  template:\n    metadata:\n      labels:\n        app: app\n        environment: dev\n    spec:\n      containers:\n      - name: app\n        image: reg/app:v1\n        ports:\n        - containerPort: 80\n```\n\nReplica Sets and Replication Controllers do almost the same thing.\n\nThey ensure that you have a specified number of pod replicas running at any given time in your cluster.\n\nThere are, however, some differences.\n\nAs you may notice, we are using `matchLabels` instead of a plain `selector` label map.\n\nReplica Sets use set-based selectors, while Replication Controllers use equality-based selectors.\n\nSelectors match Kubernetes objects (like pods) using the constraints of the specified label, and we are going to see an example in a Deployment specification file.\n\n**Label selectors** with **equality-based requirements** use three operators: `=`, `==` and `!=`.\n\n```\nenvironment = production\ntier != frontend\napp == my_app (similar to app = my_app)\n```\n\nIn the last example, we used this notation:\n\n```\n...\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: app\n  template:\n    metadata:\n...
\n```\n\n\n\nWe could have used **set-based requirements**:\n\n```\n...\nspec:\n  replicas: 3\n  selector:\n    matchExpressions:\n    - {key: app, operator: In, values: [app]}\n  template:\n    metadata:\n...\n```\n\n\n\nIf we have more than one value for the `app` key, we can use:\n\n```\n...\nspec:\n  replicas: 3\n  selector:\n    matchExpressions:\n    - {key: app, operator: In, values: [app, my_app, myapp, application]}\n  template:\n    metadata:\n...\n```\n\nAnd if we have other keys, we can use them as in the following example:\n\n```\n...\nspec:\n  replicas: 3\n  selector:\n    matchExpressions:\n    - {key: app, operator: In, values: [app]}\n    - {key: tier, operator: NotIn, values: [frontend]}\n    - {key: environment, operator: NotIn, values: [production]}\n  template:\n    metadata:\n...\n```\n\nNewer Kubernetes resources such as Jobs, Deployments, ReplicaSets, and DaemonSets all support set-based requirements as well.\n\nThis is an example of how we use kubectl with selectors:\n\n```bash\nkubectl delete pods -l 'env in (production, staging, testing)'\n```\n\n\n\nUntil now, we have seen that the Replication Controller and the Replica Set are two ways to deploy our container and manage it in a Kubernetes cluster. However, the recommended approach is using a Deployment that configures a ReplicaSet.\n\nIt is rather unlikely that we will ever need to create Pods directly for a production use-case, since Deployments create Pods for us through ReplicaSets.\n\nThis is a simple Pod definition:\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: infinite\n  labels:\n    env: production\n    owner: eon01\nspec:\n  containers:\n  - name: infinite\n    image: eon01/infinite\n```\n\nIn practice, we need:\n\n1. **A Deployment object**: Containers are specified here.\n2. 
**A Service object**: An abstract way to expose an application running on a set of Pods as a network service.\n\nThis is a Deployment object that creates three replicas of the container app running the image \"reg/app:v1\". These containers can be reached using port 80:\n\n```yaml\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: app\nspec:\n  replicas: 3\n  template:\n    metadata:\n      labels:\n        app: app\n    spec:\n      containers:\n      - name: app\n        image: reg/app:v1\n        ports:\n        - containerPort: 80\n```\n\nThis is the Deployment file we will use (save it to `kubernetes/api-deployment.yaml`):\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: tgr\n  labels:\n    name: tgr\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      name: tgr\n  template:\n    metadata:\n      name: tgr\n      labels:\n        name: tgr\n    spec:\n      containers:\n        - name: tgr\n          image: eon01/tgr:1\n          ports:\n            - containerPort: 5000\n          resources:\n            requests:\n              memory: 128Mi\n            limits:\n              memory: 256Mi\n          env:\n            - name: CLIENT_ID\n              value: \"xxxx\"\n            - name: CLIENT_SECRET\n              value: \"xxxxxxxxxxxxxxxxxxxxx\"\n            - name: ENV\n              value: \"prod\"\n            - name: DEBUG\n              value: \"False\"\n            - name: HOST\n              value: \"0.0.0.0\"\n            - name: PORT\n              value: \"5000\"\n```\n\nLet's first talk about the API version; in the first example, we used `extensions/v1beta1` and in the second one, we used `apps/v1`. 
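As an aside, the `env:` entries in the Deployment above duplicate the `.env` file used earlier with Docker. A hypothetical helper that converts one into the other (the function name and parsing rules are my assumptions, not part of the workshop):

```python
def env_file_to_k8s_env(text: str) -> list:
    """Convert KEY="value" lines from a .env file into Deployment env entries."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        # skip blanks, comments, and lines without an assignment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        entries.append({"name": key.strip(), "value": value.strip().strip('"')})
    return entries

print(env_file_to_k8s_env('HOST="0.0.0.0"\nPORT=5000'))
# [{'name': 'HOST', 'value': '0.0.0.0'}, {'name': 'PORT', 'value': '5000'}]
```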
You may know that Kubernetes project development is very active, and it may sometimes be confusing to follow all the software updates.\n\nIn Kubernetes version 1.9, `apps/v1` was introduced, and `extensions/v1beta1`, `apps/v1beta1` and `apps/v1beta2` were deprecated.\n\nTo know which version of the API you need to use, run the command:\n\n```bash\nkubectl api-versions\n```\n\nThe above command gives you the API versions compatible with your cluster.\n\n- **v1** was the first stable release of the Kubernetes API. It contains many core objects.\n\n- **apps/v1** is the most popular API group in Kubernetes, and it includes functionality related to running applications on Kubernetes, like Deployments, RollingUpdates, and ReplicaSets.\n\n- **autoscaling/v1** allows pods to be autoscaled based on different resource usage metrics.\n\n- **batch/v1** is related to batch processing and jobs.\n\n- **batch/v1beta1** is the beta release of batch/v1.\n\n- **certificates.k8s.io/v1beta1** validates network certificates for secure communication in your cluster.\n\n- **extensions/v1beta1** includes many new, commonly used features. In Kubernetes 1.6, some of these features were relocated from `extensions` to specific API groups like `apps`.\n\n- **policy/v1beta1** enables setting a pod disruption budget and new pod security rules.\n\n- **rbac.authorization.k8s.io/v1** includes extra functionality for Kubernetes RBAC (role-based access control).\n- etc.\n\n\n\nLet's deploy the pod now using the Deployment file we created.\n\n```bash\nkubectl apply -f kubernetes/api-deployment.yaml\n```\n\nNote that you can use the `kubectl create -f kubernetes/api-deployment.yaml` command. However, there's a difference between `apply` and `create`.\n\n\n\n`kubectl create` is what we call [Imperative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tutorials/object-management-kubectl/imperative-object-management-configuration/). 
`kubectl create` overwrites all changes, and if a resource with the same name already exists, the command returns an error.\n\nUsing this approach, you tell the Kubernetes API what you want to create, replace, or delete, not how you want your cluster to look.\n\n`kubectl apply` is what we call the [Declarative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) approach. `kubectl apply` makes incremental changes: if an object already exists and you want to apply a new value for, say, `replicas` without deleting and recreating the object, `kubectl apply` is what you need. `kubectl apply` also works when the object (e.g., a Deployment) does not exist yet.\n\nIn the Deployment configuration, we also defined our container. We will run a single container here, since `replicas` is set to `1`. The container will use the image `eon01/tgr:1`. Since our container needs some environment variables, the best way is to provide them using the Kubernetes Deployment definition file.\n\nAlso, we can add many other configurations, like the requested memory and its limit. 
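For instance, alongside the memory settings, a container spec can also request CPU and declare a liveness probe. This is only an illustrative sketch: the `cpu` values and the probe on path `/` are assumptions for demonstration, not part of our app's actual configuration:

```yaml
          resources:
            requests:
              memory: 128Mi
              cpu: 100m        # assumption: CPU values are illustrative
            limits:
              memory: 256Mi
              cpu: 500m        # assumption
          livenessProbe:       # assumption: probe path and timings are illustrative
            httpGet:
              path: /
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
```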
The goal here is not to use everything Kubernetes allows in a Deployment file, but to see some of the essential features.\n\n```yaml\n    spec:\n      containers:\n        - name: tgr\n          image: eon01/tgr:1\n          ports:\n            - containerPort: 5000\n          resources:\n            requests:\n              memory: 128Mi\n            limits:\n              memory: 256Mi\n          env:\n            - name: CLIENT_ID\n              value: \"xxxx\"\n            - name: CLIENT_SECRET\n              value: \"xxxxxxxxxxxxxxxxxxxxx\"\n            - name: ENV\n              value: \"prod\"\n            - name: DEBUG\n              value: \"False\"\n            - name: HOST\n              value: \"0.0.0.0\"\n            - name: PORT\n              value: \"5000\"\n```\n\nIn some cases, the Docker registry is private, and pulling an image from it requires authentication. In that case, we need to add the `imagePullSecrets` configuration:\n\n```yaml\n...\n  containers:\n  - name: private-reg-container\n    image: \u003cyour-private-image\u003e\n  imagePullSecrets:\n  - name: registry-credentials\n  ...\n```\n\nThis is how the `registry-credentials` secret is created:\n\n```bash\nkubectl create secret docker-registry registry-credentials --docker-server=\u003cyour-registry-server\u003e --docker-username=\u003cyour-name\u003e --docker-password=\u003cyour-pword\u003e --docker-email=\u003cyour-email\u003e\n```\n\nYou can also apply/create the `registry-credentials` Secret using a YAML file. This is an example:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  ...\n  name: registry-credentials\n  ...\ndata:\n  .dockerconfigjson: adjAalkazArrA ... 
JHJH1QUIIAAX0=\ntype: kubernetes.io/dockerconfigjson\n```\n\nIf you decode the `.dockerconfigjson` value using the `base64 --decode` command, you will see that it's simply a JSON document storing the configuration needed to access a registry:\n\n```bash\nkubectl get secret registry-credentials --output=\"jsonpath={.data.\\.dockerconfigjson}\" | base64 --decode\n```\n\nYou will get an output similar to the following one:\n\n```json\n{\"auths\":{\"your.private.registry.domain.com\":{\"username\":\"eon01\",\"password\":\"xxxxxxxxxxx\",\"email\":\"aymen@email.com\",\"auth\":\"dE3xxxxxxxxx\"}}}\n```\n\nAgain, let's decode the \"auth\" value using `echo \"dE3xxxxxxxxx\" | base64 --decode`; it will give you something like `eon01:xxxxxxxx`, which has the format `username:password`.\n\nNow, to see if the deployment is done, let's check how many Pods we have:\n\n```bash\nkubectl get pods\n```\n\nThis command shows all the Pods within the cluster.\n\nWe can scale our Deployment using a command similar to the following one:\n\n```bash\nkubectl scale --replicas=\u003cexpected_replica_num\u003e deployment \u003cdeployment_name\u003e\n```\n\nOur Deployment is called `tgr`, since that's the name we gave it in the Deployment configuration. You can verify this by typing `kubectl get deployment`. Let's scale it:\n\n```bash\nkubectl scale --replicas=2 deployment tgr\n```\n\nEach of these containers is accessible on port 5000 from inside the cluster, but not from outside the cluster.\n\nThe number of Pods/containers running our API is variable and may change dynamically.\n\nWe can set up a load balancer that balances traffic between the two Pods we created, but since each Pod can disappear and be recreated, its hostname and IP address will change.\n\nIn all cases, Pods are not meant to receive traffic directly; they need to be exposed to traffic using a Service. 
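As an aside on that double encoding: what `kubectl create secret docker-registry` stores can be reproduced in a few lines of Python. This is a hypothetical sketch (the `dockerconfigjson` helper is mine, not part of kubectl or the Kubernetes API): the inner `auth` field is base64 of `username:password`, and the whole JSON document is base64-encoded a second time because Secret values are stored base64-encoded.

```python
import base64
import json

def dockerconfigjson(server, username, password, email):
    # The inner "auth" field is base64("username:password").
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    doc = {"auths": {server: {
        "username": username,
        "password": password,
        "email": email,
        "auth": auth,
    }}}
    # Secret values are base64-encoded in manifests, hence the second encoding.
    return base64.b64encode(json.dumps(doc).encode()).decode()

# Demo with the placeholder values used above; decoding once recovers the JSON.
payload = dockerconfigjson("your.private.registry.domain.com",
                           "eon01", "xxxxxxxx", "aymen@email.com")
print(base64.b64decode(payload).decode())
```

Decoding the result once, exactly like the `base64 --decode` commands above, recovers the registry configuration JSON. With that aside done, back to our Pods: they are disposable by design.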
In other words, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.\n\nAt the moment, the only Service running is the cluster IP one (which is related to Minikube and gives us access to the cluster we created):\n\n```bash\nkubectl get services\n```\n\n## Services\n\nIn Kubernetes, since Pods are mortal, we should create an abstraction that defines a logical set of Pods and how to access them. This is the role of Services.\n\nIn our case, creating a load balancer is a suitable solution. This is the configuration file of a Service object that listens on port 80 and load-balances traffic to the Pods whose label `name` equals `tgr`. These Pods are accessible internally on port 5000, as defined in the Deployment configuration:\n\n```\n...\n          ports:\n            - containerPort: 5000\n...            \n```\n\nThis is what the Service looks like:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: lb\n  labels:\n    name: lb\nspec:\n  ports:\n  - port: 80\n    targetPort: 5000\n  selector:\n    name: tgr\n  type: LoadBalancer\n```\n\nSave this file to `kubernetes/api-service.yaml` and deploy it using `kubectl apply -f kubernetes/api-service.yaml`.\n\nIf you type `kubectl get services`, you will get the list of Services running in our local cluster:\n\n```\nNAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE\nkubernetes   ClusterIP      10.96.0.1       \u003cnone\u003e        443/TCP        51m\nlb           LoadBalancer   10.99.147.117   \u003cpending\u003e     80:30546/TCP   21s\n```\n\nNote that the `kubernetes` ClusterIP Service does not have an external IP, while the external IP of our `lb` Service is pending.\nThere is no need to wait for it: Minikube does not really deploy a load balancer, and this feature will only work if you configure a load balancer provider.\n\nIf you are using a Cloud provider, say AWS, an AWS load balancer 
will be set up for you; GKE will provide a Cloud Load Balancer; etc. You may also configure other types of load balancers.\n\nThere are different types of Services that we can use to expose access to the API publicly:\n\n- `ClusterIP`: the default Kubernetes Service type. It exposes the Service on a cluster-internal IP. You can access it using the Kubernetes proxy.\n\n![](images/clusterip.png)\n\n\u003e Illustration by [Ahmet Alp Balkan](https://twitter.com/ahmetb) via [Medium](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0).\n\n- [`NodePort`](https://kubernetes.io/docs/concepts/services-networking/#nodeport): exposes the Service on each Node's (VM's) IP at a static port called the `NodePort`. (In our example, we have a single node.) This is a primitive way to make an application accessible from outside the cluster and is not suitable for many use cases, since your nodes' (VMs') IP addresses may change at any time. The Service is accessible using `\u003cNodeIP\u003e:\u003cNodePort\u003e`.\n\n![](images/nodeport.png)\n\n\u003e Illustration by [Ahmet Alp Balkan](https://twitter.com/ahmetb) via [Medium](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0).\n\n- [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer): this is more advanced than a `NodePort` Service. Usually, a Load Balancer exposes a Service externally using a cloud provider’s load balancer. 
`NodePort` and `ClusterIP` Services, to which the external load balancer routes, are automatically created.\n\n![](images/loadbalancer.png)\n\n\u003e Illustration by [Ahmet Alp Balkan](https://twitter.com/ahmetb) via [Medium](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0).\n\nWe created a load balancer using a Service on our Minikube cluster, but since Minikube does not provision a real load balancer, we can access the API Service using the node IP followed by the Service's NodePort:\n\n```bash\nminikube -p workshop ip\n```\n\nOutput:\n\n```\n192.168.99.100\n```\n\nNow execute `kubectl get services`:\n\n```\nNAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE\nkubernetes   ClusterIP      10.96.0.1       \u003cnone\u003e        443/TCP        51m\nlb           LoadBalancer   10.99.147.117   \u003cpending\u003e     80:30546/TCP   21s\n```\n\nUse the IP `192.168.99.100` followed by the NodePort `30546` to access the API.\n\nYou can test this using a `curl` command:\n\n```bash\ncurl 
\"http://192.168.99.100:30546/?l=python\u0026n=1\"\n\n---\n{\"repos\":[{\"archive_url\":\"https://api.github.com/repos/vinta/awesome-python/{archive_format}{/ref}\",\"archived\":false,\"assignees_url\":\"https://api.github.com/repos/vinta/awesome-python/assignees{/user}\",\"blobs_url\":\"https://api.github.com/repos/vinta/awesome-python/git/blobs{/sha}\",\"branches_url\":\"https://api.github.com/repos/vinta/awesome-python/branches{/branch}\",\"clone_url\":\"https://github.com/vinta/awesome-python.git\",\"collaborators_url\":\"https://api.github.com/repos/vinta/awesome-python/collaborators{/collaborator}\",\"comments_url\":\"https://api.github.com/repos/vinta/awesome-python/comments{/number}\",\"commits_url\":\"https://api.github.com/repos/vinta/awesome-python/commits{/sha}\",\"compare_url\":\"https://api.github.com/repos/vinta/awesome-python/compare/{base}...{head}\",\"contents_url\":\"https://api.github.com/repos/vinta/awesome-python/contents/{+path}\",\"contributors_url\":\"https://api.github.com/repos/vinta/awesome-python/contributors\",\"created_at\":\"2014-06-27T21:00:06Z\",\"default_branch\":\"master\",\"deployments_url\":\"https://api.github.com/repos/vinta/awesome-python/deployments\",\"description\":\"A curated list of awesome Python frameworks, libraries, software and 
resources\",\"disabled\":false,\"downloads_url\":\"https://api.github.com/repos/vinta/awesome-python/downloads\",\"events_url\":\"https://api.github.com/repos/vinta/awesome-python/events\",\"fork\":false,\"forks\":13929,\"forks_count\":13929,\"forks_url\":\"https://api.github.com/repos/vinta/awesome-python/forks\",\"full_name\":\"vinta/awesome-python\",\"git_commits_url\":\"https://api.github.com/repos/vinta/awesome-python/git/commits{/sha}\",\"git_refs_url\":\"https://api.github.com/repos/vinta/awesome-python/git/refs{/sha}\",\"git_tags_url\":\"https://api.github.com/repos/vinta/awesome-python/git/tags{/sha}\",\"git_url\":\"git://github.com/vinta/awesome-python.git\",\"has_downloads\":true,\"has_issues\":true,\"has_pages\":true,\"has_projects\":false,\"has_wiki\":false,\"homepage\":\"https://awesome-python.com/\",\"hooks_url\":\"https://api.github.com/repos/vinta/awesome-python/hooks\",\"html_url\":\"https://github.com/vinta/awesome-python\",\"id\":21289110,\"issue_comment_url\":\"https://api.github.com/repos/vinta/awesome-python/issues/comments{/number}\",\"issue_events_url\":\"https://api.github.com/repos/vinta/awesome-python/issues/events{/number}\",\"issues_url\":\"https://api.github.com/repos/vinta/awesome-python/issues{/number}\",\"keys_url\":\"https://api.github.com/repos/vinta/awesome-python/keys{/key_id}\",\"labels_url\":\"https://api.github.com/repos/vinta/awesome-python/labels{/name}\",\"language\":\"Python\",\"languages_url\":\"https://api.github.com/repos/vinta/awesome-python/languages\",\"license\":{\"key\":\"other\",\"name\":\"Other\",\"node_id\":\"MDc6TGljZW5zZTA=\",\"spdx_id\":\"NOASSERTION\",\"url\":null},\"merges_url\":\"https://api.github.com/repos/vinta/awesome-python/merges\",\"milestones_url\":\"https://api.github.com/repos/vinta/awesome-python/milestones{/number}\",\"mirror_url\":null,\"name\":\"awesome-python\",\"network_count\":13929,\"node_id\":\"MDEwOlJlcG9zaXRvcnkyMTI4OTExMA==\",\"notifications_url\":\"https://api.github.com/repos/vinta
/awesome-python/notifications{?since,all,participating}\",\"open_issues\":482,\"open_issues_count\":482,\"owner\":{\"avatar_url\":\"https://avatars2.githubusercontent.com/u/652070?v=4\",\"events_url\":\"https://api.github.com/users/vinta/events{/privacy}\",\"followers_url\":\"https://api.github.com/users/vinta/followers\",\"following_url\":\"https://api.github.com/users/vinta/following{/other_user}\",\"gists_url\":\"https://api.github.com/users/vinta/gists{/gist_id}\",\"gravatar_id\":\"\",\"html_url\":\"https://github.com/vinta\",\"id\":652070,\"login\":\"vinta\",\"node_id\":\"MDQ6VXNlcjY1MjA3MA==\",\"organizations_url\":\"https://api.github.com/users/vinta/orgs\",\"received_events_url\":\"https://api.github.com/users/vinta/received_events\",\"repos_url\":\"https://api.github.com/users/vinta/repos\",\"site_admin\":false,\"starred_url\":\"https://api.github.com/users/vinta/starred{/owner}{/repo}\",\"subscriptions_url\":\"https://api.github.com/users/vinta/subscriptions\",\"type\":\"User\",\"url\":\"https://api.github.com/users/vinta\"},\"private\":false,\"pulls_url\":\"https://api.github.com/repos/vinta/awesome-python/pulls{/number}\",\"pushed_at\":\"2019-08-16T15:21:42Z\",\"releases_url\":\"https://api.github.com/repos/vinta/awesome-python/releases{/id}\",\"size\":4994,\"ssh_url\":\"git@github.com:vinta/awesome-python.git\",\"stargazers_count\":71222,\"stargazers_url\":\"https://api.github.com/repos/vinta/awesome-python/stargazers\",\"statuses_url\":\"https://api.github.com/repos/vinta/awesome-python/statuses/{sha}\",\"subscribers_count\":5251,\"subscribers_url\":\"https://api.github.com/repos/vinta/awesome-python/subscribers\",\"subscription_url\":\"https://api.github.com/repos/vinta/awesome-python/subscription\",\"svn_url\":\"https://github.com/vinta/awesome-python\",\"tags_url\":\"https://api.github.com/repos/vinta/awesome-python/tags\",\"teams_url\":\"https://api.github.com/repos/vinta/awesome-python/teams\",\"trees_url\":\"https://api.github.com/repos/vinta/awe
some-python/git/trees{/sha}\",\"updated_at\":\"2019-08-17T16:11:44Z\",\"url\":\"https://api.github.com/repos/vinta/awesome-python\",\"watchers\":71222,\"watchers_count\":71222}],\"status\":\"ok\"}\n\n```\n\n## Inconvenience of the Load Balancer Service\n\nTypically, load balancers are provisioned by the Cloud provider you're using.\n\nA load balancer can handle one Service, but imagine you have ten Services: each one will need its own load balancer, and this is when it becomes costly.\n\nThe best solution, in this case, is to set up an Ingress controller that acts as a smart router and can be deployed at the edge of the cluster, therefore in front of all the Services you deploy.\n\n![](images/ingress.png)\n\n\u003e Illustration by [Ahmet Alp Balkan](https://twitter.com/ahmetb) via [Medium](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0).\n\n# An API Gateway\n\nAmbassador is an open source, Kubernetes-native API Gateway built on the Envoy Proxy. It provides a solution for traffic management and application security. It's described as a specialized control plane that translates Kubernetes annotations into Envoy configuration.\n\nAll traffic is directly handled by the high-performance Envoy Proxy.\n\n![](images/ambassador-arch.png)\n\nPhoto credit: https://www.getambassador.io/concepts/architecture\n\nAs described on the [official Envoy website](https://www.envoyproxy.io/):\n\n\u003e Originally built at **Lyft**, Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. Built on the learnings of solutions such as NGINX, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. 
When all service traffic in an infrastructure flows via an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.\n\nWe are going to use Ambassador as an API Gateway, so we no longer need the load balancer Service we created in the first part. Let's remove it:\n\n```bash\nkubectl delete -f kubernetes/api-service.yaml\n```\n\nTo deploy Ambassador in your **default** namespace, first check whether your Kubernetes cluster has RBAC enabled:\n\n```bash\nkubectl cluster-info dump --namespace kube-system | grep authorization-mode\n```\n\nIf RBAC is enabled:\n\n```shell\nkubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml\n```\n\nWithout RBAC, you can use:\n\n```shell\nkubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml\n```\n\nAmbassador is deployed as a Kubernetes Service that references the `ambassador` Deployment you deployed previously. Create the following YAML and put it in a file called `kubernetes/ambassador-service.yaml`:\n\n```yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: ambassador\nspec:\n  type: LoadBalancer\n  externalTrafficPolicy: Local\n  ports:\n   - port: 80\n     targetPort: 8080\n  selector:\n    service: ambassador\n```\n\nDeploy the service:\n\n```bash\nkubectl apply -f kubernetes/ambassador-service.yaml\n```\n\nNow let's use a single file containing the Deployment configuration for our API as well as the Ambassador Service configuration for the same Deployment. 
Call this file `kubernetes/api-deployment-with-ambassador.yaml`:\n\n```yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: tgr\n  annotations:\n    getambassador.io/config: |\n      ---\n      apiVersion: ambassador/v1\n      kind: Mapping\n      name: tgr_mapping\n      prefix: /\n      service: tgr:5000\n\nspec:\n  ports:\n  - name: tgr\n    port: 5000\n    targetPort: 5000\n  selector:\n    app: tgr\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: tgr\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: tgr\n  strategy:\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        app: tgr\n    spec:\n      containers:\n      - name: tgr\n        image: eon01/tgr:1\n        ports:\n          - containerPort: 5000\n        env:\n          - name: CLIENT_ID\n            value: \"xxxx\"\n          - name: CLIENT_SECRET\n            value: \"xxxxxxxxxxxxxxxxxxxxx\"\n          - name: ENV\n            value: \"prod\"\n          - name: DEBUG\n            value: \"False\"\n          - name: HOST\n            value: \"0.0.0.0\"\n          - name: PORT\n            value: \"5000\"\n```\n\nDeploy the previously created configuration:\n\n```bash\nkubectl apply -f kubernetes/api-deployment-with-ambassador.yaml\n```\n\nLet's test things out. We need the external IP of the Ambassador Service:\n\n```bash\nkubectl get svc -o wide ambassador\n```\n\nYou should see something like:\n\n```\nNAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR\nambassador   LoadBalancer   10.103.201.130   \u003cpending\u003e     80:30283/TCP   9m2s   service=ambassador\n```\n\nIf you are using Minikube, it is normal to see the external IP in the `pending` state.\n\nWe can use `minikube -p workshop service list` to get the Ambassador IP. 
You will get an output similar to the following:\n\n```\n|-------------|------------------|-----------------------------|\n|  NAMESPACE  |       NAME       |             URL             |\n|-------------|------------------|-----------------------------|\n| default     | ambassador       | http://192.168.99.100:30283 |\n| default     | ambassador-admin | http://192.168.99.100:30084 |\n| default     | kubernetes       | No node port                |\n| default     | tgr              | No node port                |\n| kube-system | kube-dns         | No node port                |\n|-------------|------------------|-----------------------------|\n\n```\n\nNow you can use the API using the IP ` http://192.168.99.100:30283`:\n\n```bash\ncurl \"http://192.168.99.100:30283/?l=python\u0026n=1\"\n---\n{\"repos\":[{\"archive_url\":\"https://api.github.com/repos/vinta/awesome-python/{archive_format}{/ref}\",\"archived\":false,\"assignees_url\":\"https://api.github.com/repos/vinta/awesome-python/assignees{/user}\",\"blobs_url\":\"https://api.github.com/repos/vinta/awesome-python/git/blobs{/sha}\",\"branches_url\":\"https://api.github.com/repos/vinta/awesome-python/branches{/branch}\",\"clone_url\":\"https://github.com/vinta/awesome-python.git\",\"collaborators_url\":\"https://api.github.com/repos/vinta/awesome-python/collaborators{/collaborator}\",\"comments_url\":\"https://api.github.com/repos/vinta/awesome-python/comments{/number}\",\"commits_url\":\"https://api.github.com/repos/vinta/awesome-python/commits{/sha}\",\"compare_url\":\"https://api.github.com/repos/vinta/awesome-python/compare/{base}...{head}\",\"contents_url\":\"https://api.github.com/repos/vinta/awesome-python/contents/{+path}\",\"contributors_url\":\"https://api.github.com/repos/vinta/awesome-python/contributors\",\"created_at\":\"2014-06-27T21:00:06Z\",\"default_branch\":\"master\",\"deployments_url\":\"https://api.github.com/repos/vinta/awesome-python/deployments\",\"description\":\"A curated list of awesome 
Python frameworks, libraries, software and resources\",\"disabled\":false,\"downloads_url\":\"https://api.github.com/repos/vinta/awesome-python/downloads\",\"events_url\":\"https://api.github.com/repos/vinta/awesome-python/events\",\"fork\":false,\"forks\":13933,\"forks_count\":13933,\"forks_url\":\"https://api.github.com/repos/vinta/awesome-python/forks\",\"full_name\":\"vinta/awesome-python\",\"git_commits_url\":\"https://api.github.com/repos/vinta/awesome-python/git/commits{/sha}\",\"git_refs_url\":\"https://api.github.com/repos/vinta/awesome-python/git/refs{/sha}\",\"git_tags_url\":\"https://api.github.com/repos/vinta/awesome-python/git/tags{/sha}\",\"git_url\":\"git://github.com/vinta/awesome-python.git\",\"has_downloads\":true,\"has_issues\":true,\"has_pages\":true,\"has_projects\":false,\"has_wiki\":false,\"homepage\":\"https://awesome-python.com/\",\"hooks_url\":\"https://api.github.com/repos/vinta/awesome-python/hooks\",\"html_url\":\"https://github.com/vinta/awesome-python\",\"id\":21289110,\"issue_comment_url\":\"https://api.github.com/repos/vinta/awesome-python/issues/comments{/number}\",\"issue_events_url\":\"https://api.github.com/repos/vinta/awesome-python/issues/events{/number}\",\"issues_url\":\"https://api.github.com/repos/vinta/awesome-python/issues{/number}\",\"keys_url\":\"https://api.github.com/repos/vinta/awesome-python/keys{/key_id}\",\"labels_url\":\"https://api.github.com/repos/vinta/awesome-python/labels{/name}\",\"language\":\"Python\",\"languages_url\":\"https://api.github.com/repos/vinta/awesome-python/languages\",\"license\":{\"key\":\"other\",\"name\":\"Other\",\"node_id\":\"MDc6TGljZW5zZTA=\",\"spdx_id\":\"NOASSERTION\",\"url\":null},\"merges_url\":\"https://api.github.com/repos/vinta/awesome-python/merges\",\"milestones_url\":\"https://api.github.com/repos/vinta/awesome-python/milestones{/number}\",\"mirror_url\":null,\"name\":\"awesome-python\",\"network_count\":13933,\"node_id\":\"MDEwOlJlcG9zaXRvcnkyMTI4OTExMA==\",\"notifications
_url\":\"https://api.github.com/repos/vinta/awesome-python/notifications{?since,all,participating}\",\"open_issues\":482,\"open_issues_count\":482,\"owner\":{\"avatar_url\":\"https://avatars2.githubusercontent.com/u/652070?v=4\",\"events_url\":\"https://api.github.com/users/vinta/events{/privacy}\",\"followers_url\":\"https://api.github.com/users/vinta/followers\",\"following_url\":\"https://api.github.com/users/vinta/following{/other_user}\",\"gists_url\":\"https://api.github.com/users/vinta/gists{/gist_id}\",\"gravatar_id\":\"\",\"html_url\":\"https://github.com/vinta\",\"id\":652070,\"login\":\"vinta\",\"node_id\":\"MDQ6VXNlcjY1MjA3MA==\",\"organizations_url\":\"https://api.github.com/users/vinta/orgs\",\"received_events_url\":\"https://api.github.com/users/vinta/received_events\",\"repos_url\":\"https://api.github.com/users/vinta/repos\",\"site_admin\":false,\"starred_url\":\"https://api.github.com/users/vinta/starred{/owner}{/repo}\",\"subscriptions_url\":\"https://api.github.com/users/vinta/subscriptions\",\"type\":\"User\",\"url\":\"https://api.github.com/users/vinta\"},\"private\":false,\"pulls_url\":\"https://api.github.com/repos/vinta/awesome-python/pulls{/number}\",\"pushed_at\":\"2019-08-16T15:21:42Z\",\"releases_url\":\"https://api.github.com/repos/vinta/awesome-python/releases{/id}\",\"size\":4994,\"ssh_url\":\"git@github.com:vinta/awesome-python.git\",\"stargazers_count\":71269,\"stargazers_url\":\"https://api.github.com/repos/vinta/awesome-python/stargazers\",\"statuses_url\":\"https://api.github.com/repos/vinta/awesome-python/statuses/{sha}\",\"subscribers_count\":5254,\"subscribers_url\":\"https://api.github.com/repos/vinta/awesome-python/subscribers\",\"subscription_url\":\"https://api.github.com/repos/vinta/awesome-python/subscription\",\"svn_url\":\"https://github.com/vinta/awesome-python\",\"tags_url\":\"https://api.github.com/repos/vinta/awesome-python/tags\",\"teams_url\":\"https://api.github.com/repos/vinta/awesome-python/teams\",\"trees_url
\":\"https://api.github.com/repos/vinta/awesome-python/git/trees{/sha}\",\"updated_at\":\"2019-08-19T08:21:51Z\",\"url\":\"https://api.github.com/repos/vinta/awesome-python\",\"watchers\":71269,\"watchers_count\":71269}],\"status\":\"ok\"}\n\n```\n\n## Edge Proxy vs Service Mesh\n\nYou may have heard of tools like Istio and Linkerd, and it may be confusing to compare Ambassador or Envoy to them. Let's look at the differences.\n\nIstio is described as a tool to connect, secure, control, and observe services. The same features are implemented by alternatives like Linkerd or Consul. These tools are called service meshes.\n\nAmbassador is an API gateway for services (or microservices), and it's deployed at the edge of your network. It routes incoming traffic to the cluster's internal services; this is what we call \"north-south\" traffic.\n\nIstio, on the other hand, is a service mesh for Kubernetes services (or microservices). It's designed to add application-level (Layer 7) observability, routing, and resilience to service-to-service traffic; this is what we call \"east-west\" traffic.\n\nThe fact that both Istio and Ambassador are built on Envoy does not mean they have the same features or use cases; in fact, they can be deployed together in the same cluster.\n\n# Accessing the Kubernetes API\n\nIf you remember, when we created our Minikube cluster, we used `--extra-config=apiserver.enable-swagger-ui=true`. 
This configuration makes the Kubernetes API \"browsable\" via a web browser.\n\nWhen using Minikube, in order to access the Kubernetes API using a browser, we need to create a proxy:\n\n```bash\nkubectl proxy --port=8080 \u0026\n```\n\nNow we can test this out using curl:\n\n```bash\ncurl http://localhost:8080/api/\n---\n{\n  \"kind\": \"APIVersions\",\n  \"versions\": [\n    \"v1\"\n  ],\n  \"serverAddressByClientCIDRs\": [\n    {\n      \"clientCIDR\": \"0.0.0.0/0\",\n      \"serverAddress\": \"192.168.99.100:8443\"\n    }\n  ]\n}\n```\n\nWe can get a list of the APIs and resources we can access by visiting `http://localhost:8080/`.\n\nFor instance, we can get a list of metrics at `http://localhost:8080/metrics`.\n\n## Using an API Client\n\nWe are going to use the official Kubernetes Python client:\n\n```bash\npip install kubernetes\n```\n\nWith it, a `/pods` route in our Flask app could look like this:\n\n```python\nfrom flask import jsonify\nfrom kubernetes import client, config\n\n@app.route('/pods')\ndef monitor():\n    # Inside a Pod, use the mounted service account credentials;\n    # outside the cluster, fall back to the local kubeconfig.\n    try:\n        config.load_incluster_config()\n    except config.ConfigException:\n        config.load_kube_config()\n    v1 = client.CoreV1Api()\n    pods = v1.list_namespaced_pod(namespace=\"default\")\n    return jsonify([pod.metadata.name for pod in pods.items])\n```\n\n## Accessing the API from inside a Pod\n\nBy default, a Pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that Pod, at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\n\nLet's try to go inside a Pod and access the API. 
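Before dropping into a shell, note that this token can also be read programmatically. A minimal sketch, assuming only the standard in-cluster mount path (outside a cluster the file simply doesn't exist, so the hypothetical helper returns `None`):

```python
import os

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def service_account_token(path=TOKEN_PATH):
    # Kubernetes mounts this file automatically into every container
    # that runs under a service account; strip the trailing newline.
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()
```

The value read this way is exactly what goes into an `Authorization: Bearer` header when calling the API server.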
Use `kubectl get pods` to get the list of Pods:\n\n```\nNAME                          READY   STATUS    RESTARTS   AGE\nambassador-64d8b877f9-4bzvn   1/1     Running   0          103m\nambassador-64d8b877f9-b68w6   1/1     Running   0          103m\nambassador-64d8b877f9-vw9mm   1/1     Running   0          103m\ntgr-8d78d599f-pt5xx           1/1     Running   0          4m17s\n```\n\nNow open a shell inside the API Pod:\n\n```bash\nkubectl exec -it tgr-8d78d599f-pt5xx -- bash\n```\n\nAssign the token to a variable:\n\n```bash\nKUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\n```\n\nNotice that the token file is added automatically by Kubernetes.\n\nWe also have other variables already set, like:\n\n```bash\necho $KUBERNETES_SERVICE_HOST\n10.96.0.1\n\necho $KUBERNETES_PORT_443_TCP_PORT\n443\n\necho $HOSTNAME\ntgr-8d78d599f-pt5xx\n```\n\nWe are going to use these variables to access the list of Pods using this curl command:\n\n```bash\ncurl -sSk -H \"Authorization: Bearer $KUBE_TOKEN\" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods\n```\n\nAt this stage, you should get an error output saying that you don't have the right to access this API endpoint, which is normal:\n\n```json\n{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n\n  },\n  \"status\": \"Failure\",\n  \"message\": \"pods \\\"tgr-8d78d599f-pt5xx\\\" is forbidden: User \\\"system:serviceaccount:default:default\\\" cannot get resource \\\"pods\\\" in API group \\\"\\\" in the namespace \\\"default\\\"\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n    \"name\": \"tgr-8d78d599f-pt5xx\",\n    \"kind\": \"pods\"\n  },\n  \"code\": 403\n}\n```\n\nThe Pod is using the default service account, which does not have the right to list Pods.\n\nIn order to fix this, exit the container, create a file called `kubernetes/service-account.yaml`, and add the following lines:\n\n```yaml\nkind: ClusterRole\napiVersion: 
rbac.authorization.k8s.io/v1\nmetadata:\n  name: pods-list\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"list\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: pods-list\nsubjects:\n- kind: ServiceAccount\n  name: default\n  namespace: default\nroleRef:\n  kind: ClusterRole\n  name: pods-list\n  apiGroup: rbac.authorization.k8s.io\n\n```\n\nThen apply the new configuration using `kubectl apply -f kubernetes/service-account.yaml`.\n\nNow you can access the list of Pods:\n\n```json\n{\n  \"kind\": \"PodList\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n    \"selfLink\": \"/api/v1/namespaces/default/pods/\",\n    \"resourceVersion\": \"19589\"\n  },\n  \"items\": [\n    {\n      \"metadata\": {\n        \"name\": \"ambassador-64d8b877f9-4bzvn\",\n        \"generateName\": \"ambassador-64d8b877f9-\",\n        \"namespace\": \"default\",\n        \"selfLink\": \"/api/v1/namespaces/default/pods/ambassador-64d8b877f9-4bzvn\",\n        \"uid\": \"63f62ede-de77-441d-85f7-daf9cbc7040f\",\n        \"resourceVersion\": \"1047\",\n        \"creationTimestamp\": \"2019-08-19T08:12:47Z\",\n        \"labels\": {\n          \"pod-template-hash\": \"64d8b877f9\",\n          \"service\": \"ambassador\"\n        },\n        \"annotations\": {\n          \"consul.hashicorp.com/connect-inject\": \"false\",\n          \"sidecar.istio.io/inject\": \"false\"\n        },\n        \"ownerReferences\": [\n          {\n            \"apiVersion\": \"apps/v1\",\n            \"kind\": \"ReplicaSet\",\n            \"name\": \"ambassador-64d8b877f9\",\n            \"uid\": \"383c2e4b-7179-4806-b7bf-3682c7873a10\",\n            \"controller\": true,\n            \"blockOwnerDeletion\": true\n          }\n        ]\n      },\n      \"spec\": {\n        \"volumes\": [\n          {\n            \"name\": \"ambassador-token-rdqq6\",\n            \"secret\": {\n              \"secretName\": \"ambassador-token-rdqq6\",\n              
\"defaultMode\": 420\n            }\n          }\n        ],\n        \"containers\": [\n          {\n            \"name\": \"ambassador\",\n            \"image\": \"quay.io/datawire/ambassador:0.75.0\",\n            \"ports\": [\n              {\n                \"name\": \"http\",\n                \"containerPort\": 8080,\n                \"protocol\": \"TCP\"\n              },\n...\n```\n\nWhat about creating your own monitoring/observability solution using Python (or any other programming language) and the Kubernetes API? This could well be the subject of an upcoming workshop.\n\n---\n\n# Star History\n[![Star History Chart](https://api.star-history.com/svg?repos=eon01/kubernetes-workshop\u0026type=Date)](https://star-history.com/#eon01/kubernetes-workshop\u0026Date)\n\n# Thanks to all the contributors!\n\u003ca href=\"https://github.com/eon01/kubernetes-workshop/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=eon01/kubernetes-workshop\" /\u003e\n\u003c/a\u003e\n\n---\n\nIf this workshop solved some of your problems, please consider giving it a star and/or buying me a coffee:\n\n[![Buy Me A Coffee](images/bmc.png)](https://www.buymeacoffee.com/joinFAUN)\n