# KubeStitch

An easy-to-use tool for stitching together customized Kubernetes cluster environments that run locally the same way they do in the cloud.

## About

A default deployment of Kubernetes is not extremely useful. Usually, one needs to layer several stacks of technology on top of Kubernetes before even beginning to think about pipelining specific workloads to the cluster. To start extracting real value from this powerful platform, one needs to stitch together a base environment. KubeStitch attempts to help with this task by making it much easier to vet out technical solutions at the base deployment level of a Kubernetes cluster. It squarely targets the middle section between the custom workloads and the Kubernetes cluster deployment itself.

![](./docs/cluster-components.jpg)

The diagram shows a few elements which may need to be deployed to your clusters. These can be deployed as a single Istio operator driven deployment, as individual stacks, by starting with ArgoCD and using it to deploy each stack in an 'App of Apps' GitOps approach, or by any other method which suits your fancy. KubeStitch can help aid your efforts.

## Goals

The short-term goal of this project is to narrow the gap in feature parity between what your local environment looks like and what your deployment environment looks like. One can use kubestitch to quickly bring up and tear down local Kubernetes environments for testing and vetting out Kubernetes solutions that then get deployed remotely with little extra refactoring. Additionally, multiple Kubernetes clustering technologies can be swapped out relatively quickly and easily using a single file with a few variables.

## Requirements

This project currently uses standard bash scripts, make, and docker. It has been tested on Linux and Mac. The following should be available on the system running these scripts.

- bash
- docker

Where required, all other stand-alone cli tools will be installed to a `./.local/` path within this folder to ensure portability.

If you ever run a git pull to update this project locally, it is a good idea to run `make clean` afterwards to ensure that all the latest updates are pulled down the next time you run `make cluster`.
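
A typical update sequence might look like this (assuming the repo is already cloned and you are using the default profile):

```bash
# Pull the latest project updates
git pull

# Clear cached binaries and generated cluster configs
make clean

# Rebuild the local cluster with the refreshed tooling
make cluster
```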

> **NOTE** Mac users should be aware that while the dns-proxy-server will spin up with the correct internal DNS zone forwarding configured to the correct internal ingress IP, it will not work from the Mac automatically. First you will need to reconfigure your DNS resolver to point to 127.0.0.1. I could not figure out a way to make this work automatically due to how mDNSResponder takes over the DNS lookup process on Macs. I welcome pull requests to help resolve this automation snafu. More work in this area can be found in the [Mac Issues section of this document](#mac-issues).

## Installing

This project is just a Makefile with some scripts, so clone the repo, cd into it, then run make. If you want to run it from anywhere on your system, create an alias for the command like this from the cloned repo directory:

```bash
alias kubestitch="make --no-print-directory -C $(pwd)"
```

Then you can run any tasks using the `kubestitch` command.

```bash
kubestitch show
kubestitch PROFILE=monitoring show

# Note that the placement of override env vars in the command does not matter
kubestitch cluster PROFILE=vault
```

The rest of this document will use `make` in any examples.

## Usage

Standard usage is pretty simple. After cloning this repo you can start a default kind based cluster for the current PROFILE (default=default) with one command.

```bash
make cluster
```

This aggregates several tasks which are broken down further below.

```bash
# Show tasks
make

# Install dependencies
make deps

# Start a local kind cluster and create a local kube config
make cluster/start

# Perform a default deployment of helmfiles for the cluster defined in the profile (default: cicd)
make sync
```

When you are done you can destroy the cluster you just created.

```bash
make cluster/stop
```

## Clusters

Before diving into profiles, it is important to be aware that each cluster that gets created will write its own configuration file within the `./.local/` path in the form of `kube.<cluster>.conf` (unless the `KUBE_CONFIG` environment variable is overridden). This ensures your personal Kubernetes config file does not get polluted with KubeStitch testing cluster detritus.

This also means that, by default, you will not be able to interact with the cluster without some additional work. The simple way around this is to point your `KUBECONFIG` env var at the config file that gets generated. If you are tinkering with multiple versions of Kubernetes, it may be useful to also set an alias for the kubectl binary that gets downloaded.
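
For illustration, the manual equivalent might look like the following; the exact config file name and the downloaded kubectl location are assumptions based on the defaults described above:

```bash
# Point kubectl at the generated cluster config (file name assumed for a cluster named 'default')
export KUBECONFIG=./.local/kube.default.conf

# Optionally alias kubectl to the version downloaded by `make deps` (path assumed)
alias kubectl=./.local/bin/kubectl
```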

Both tasks can be done easily with the following single command after the cluster has been created (note that we are running the output of `make config` as a command using backticks!).

```bash
`make config`
```

You only need to do this once per cluster you are working on (per console session). If you recreate the cluster along the way (done easily via `make cluster`), this file may get recreated, but that shouldn't matter much.

If you are changing between versions of Kubernetes using this framework, you will have to clear out the kubectl binary when changing versions (between cluster builds) with `make clean`. Just ensure you also rerun the deps task afterwards to get the correct kubectl binary version.

## Profiles

This project was originally created to support multiple clusters and teams. I've since pulled it back to simply targeting 'profiles'. This means all top level settings that you may want to override can be put in a single file: `./profiles/<profile>.env`. To see the current profile, helmfile environment, and some additional variables, use `make show`.

To create a whole new target environment, copy `./profiles/default.env` to `./profiles/<profile>.env`, modify it, then pass `PROFILE=<profile>` to all make commands (or export it at the start of your session). If you need a separate helmfile environment or cluster deployment, further files will need to be created to accommodate it.
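
A quick sketch of that flow (the profile name `myprofile` is illustrative only):

```bash
# Create and customize a new profile
cp ./profiles/default.env ./profiles/myprofile.env
$EDITOR ./profiles/myprofile.env

# Use it for a single command...
make show PROFILE=myprofile

# ...or export it for the rest of the session
export PROFILE=myprofile
make cluster
```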

> **NOTE** `PROFILE` is for using different cluster types and configurations. Testing the same set of helm charts on both k3d and kind (or any other target for that matter) might be facilitated by using a profile. An `ENVIRONMENT` is what you would use to create helmfile deployment environment configuration. `ENVIRONMENT` is always 'default' unless manually passed in and is only used in the helmfile-related tasks. You can set `ENVIRONMENT` within the profile definition or override it when calling the tasks via the command line. The istio example uses an environment to define a set of istio-specific helm settings to deploy.

```bash
## Launch a local k3d based kube cluster
make deps cluster/start PROFILE=k3d

## Launch a local kind based kube cluster
make deps cluster/start PROFILE=default

## Look at your first cluster
export KUBECONFIG=`make .kube/config/file PROFILE=k3d`
kubectl get nodes

## And your second one too
export KUBECONFIG=`make .kube/config/file PROFILE=default`
kubectl get nodes

## Destroy both clusters
make cluster/stop PROFILE=k3d
make cluster/stop ## Default environment is 'default'
```

> **NOTE** While you can run multiple clusters at once, that really isn't what this project was meant for, and you will likely run into port or other conflicts. It is best to run each cluster and destroy it before switching to another profile.

### PROFILE - default

**File:** ./profiles/default.env

This profile is what we use for default deployments. It includes:

- A 2-node kind cluster
- Kubernetes 1.18.2
- The Calico CNI
- MetalLB

After the cluster starts up and you run the `dnsforward/start` task, the following URLs will be available:

- http://traefik.int.micro.svc

### PROFILE - monitoring

**File:** ./profiles/monitoring.env

This profile is the same as the default profile but also deploys the Prometheus operator. After the cluster starts up and you run the `dnsforward/start` task, the following URLs will be available:

- http://traefik.int.micro.svc
- http://grafana.int.micro.svc
- http://alertmanager.int.micro.svc
- http://prometheus.int.micro.svc

The Grafana site's default login is `admin/prom-operator`.

### PROFILE - k3d

**File:** ./profiles/k3d.env

This is a k3d cluster similar to the kind environment. It includes:

- A 2-node k3d cluster
- Kubernetes 1.18.3
- The built-in k3d loadbalancer

### PROFILE - istio

**File:** ./profiles/istio.env

Here is an example environment running istio on a kind cluster. It includes:

- A separate 'istio' profile (`./profiles/istio.env`)
- An additional plugin to include istio specific commands (found in `inc/makefile.istio`)
- An istioctl based istio operator deployment
- Some istio tasks for monitoring ingress and such
- A helmfile based deployment of bookinfo:
  - Not a bad example of converting a straight yaml file to helmfile using the raw chart (as a crude shortcut)
  - Exhibits helmfile dependency chaining
  - Uses a local custom namespace chart to also enable the istio sidecar injection label upon deployment

### PROFILE - vault

**File:** ./profiles/vault.env

This is another proof of concept environment that spins up a kind cluster running HashiCorp Vault with a Consul backend. It also includes installation of some dependencies for testing things out, as well as vault-sync to test out seeding a base deployment via the CLI. This is a work in progress on how one might use mostly declarative configuration for a Vault-integrated Kubernetes cluster.

You can access both [http://consul.int.micro.svc](http://consul.int.micro.svc) and [http://vault.int.micro.svc](http://vault.int.micro.svc) to immediately start exploring these cool products after bringing up the cluster.

If you use this profile, the vault service will be provisioned via an ingress address which can be accessed using the vault client and some environment variables. To export these variables into your session, use the following commands:

```bash
make dnsforward/start
`make show/vault`
./.local/bin/vault status
```

> **NOTE** - This assumes that DNS lookups for your cluster work properly; the dnsforward tasks currently only work on Linux hosts. Alternatively, you can update the deployment to expose the service via a LoadBalancer IP and use that IP address instead.

### PROFILE - vaultha

**File:** ./profiles/vaultha.env

This changes a few variables around so that a 3-node kind cluster is deployed and the vault chart gets deployed in HA mode instead of dev mode. The vault deployment is uninitialized and will need more love to get working correctly.

### PROFILE - localstack

**File:** ./profiles/localstack.env

This profile sets up the venerable [localstack](https://github.com/localstack/localstack) AWS API simulation environment in Kubernetes. The deployment was a 10-minute proof of concept done by converting the project's docker-compose file via kompose and plugging the output into a custom helmfile.

This can be a great way to vet out terraform manifests or test out AWS based pipelines.

### PROFILE - argocd

**File:** ./profiles/argocd.env

Create a local argocd cluster for testing out GitOps. Includes customization to allow for helmfile use in the application definitions.

To access the GUI (https://argocd.int.micro.svc/), the default ID is `admin` and the password can be retrieved by running `make argocd/get/password` (it typically ends up being the argocd-server pod name).

**NOTE:** I'm still working on the helmfile argocd repository task runner. This is just a base environment and nothing more.

### PROFILE - homeassist

Performs a local install/uninstall of a single node k3s cluster for use in a bare metal Home Assistant deployment. This assumes the local machine is running Linux and uses sudo rights to set up the service. This profile is custom built for my own environment, which includes an NFS storage backend, and should not be used without modifications.

**File:** ./profiles/homeassist.env

**NOTE:** To do the same deployment locally via kind use the 'kind_homeassist' profile instead.

### PROFILE - testing

**File:** ./profiles/testing.env

I use this to work on charts before they are published for a new release version. This uses the `testing` environment with overrides to the archetype chart source so that it uses the direct git repo master branch instead of the released version.

## Tasksets

Sometimes there are additional commands required to vet out a particular solution. To accommodate this need, we can add a new makefile directly in the 'inc' folder as `./inc/makefile.<taskset>` and then add the `<taskset>` name to your profile's `ADDITIONAL_TASKSETS` variable (space delimited if there is more than one). This will source in your new script tasks automatically when the profile is used; see the sketch below.
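
A minimal sketch of wiring in a hypothetical 'mytools' taskset (the taskset name and profile file are examples only):

```bash
# 1. Create the taskset makefile holding your extra tasks
touch ./inc/makefile.mytools

# 2. Reference it from your profile, e.g. in ./profiles/myprofile.env:
#      ADDITIONAL_TASKSETS=mytools istio metallb

# 3. The new tasks are then sourced in whenever that profile is used
make help PROFILE=myprofile
```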

You can see the tasksets which are available by listing the files within the 'inc' folder of this repo. Alternatively, run `make show/tasksets` to list out all those that are available.

Most tasksets are cross platform and will download and then run binaries specific to your platform (Linux/Mac only at this time).

> **NOTE:** The Lens taskset is not cross platform so it is excluded from the default profile. You can insert this or any other non-profile specific taskset by adding `CUSTOM_TASKSETS=lens` to your make commands.

The following tasksets are loaded by default and are considered essential for KubeStitch to operate.

- helm
- kube
- common

To get help for loaded tasks run `make help`. This will show all non-hidden tasks. There are quite a few hidden tasks which may be useful as well. To list them, run `make help/all` and grep the output for what you are looking to do.

```bash
# show all tasks, including those that are hidden and search for those related to ingress
❯ make help/all | grep ingress
.cluster/ingress Start cluster ingress
.kube/ingress/hosts Show ingress hosts
```

### Provider Tasksets

A provider taskset is special: it is a taskset specifically for provisioning kubernetes clusters. These live in the same location as the other tasksets but with a different naming convention, `./inc/makefile.provider.<provider>`. Only one provider taskset should be loaded at a time. Thus far kind and k3d are the most reliable and tested provider tasksets, but I also include a minikube provider from earlier testing as well as the k3s taskset for my home server deployment.

> **NOTE:** I abandoned minikube earlier on for a number of reasons but it would be feasible to fix this up and use minikube as well if that's your thing.

If you choose to add a new provider, ensure that it includes all the base tasks found in the other `makefile.cluster.*` files and you will find that it's pretty easy to automatically install dependencies and customize this thing to your needs.

## Helmfile

Helmfile is used extensively to declaratively stitch together deployments from external helm charts, my own monochart (called archetype), and the raw helm chart. Common repositories and default helmfile values are sourced into the various helmfiles using a common block at the top of each helmfile.

Helmfile uses 'environments' to help break down multiple configuration paths more effectively.

### Environments

Environments are defined in `config/environments.yaml`. Each environment points to an individual values file that contains the default values first; optional override values can then be defined manually afterwards. Additionally, each relevant helmfile includes an optional override values file that can be used to rewrite the helm deployment for each environment if so desired. Any override files should be dropped in the environment folder in this format:

```bash
./config/<environment>/<stack>.override.yaml
```

To target another environment for helmfile follow a few extra steps.

1. Modify `config/environments.yaml` to add a new environment (copy/paste the default one to start if you like)
2. Minor changes can be made right in the environments.yaml file beneath the default values
3. For larger changes, create/update any override files as needed in `config/<environment>/`

Then, whenever you are targeting a specific environment, include `ENVIRONMENT=<environment>` in your make command. Optionally, you can create a new profile and set `ENVIRONMENT` within it instead.
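
For example, to sync a single stack against a hypothetical 'staging' environment (the environment name is illustrative; `traefik` is one of the stacks shown later in this document):

```bash
# Deploy the traefik stack using values from the 'staging' helmfile environment
make sync STACK=traefik ENVIRONMENT=staging
```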

I try to keep all settings in the default helmfile environment yaml definition and override their installation in the environment-level file when I need to do so. In the example below, I disable the monitoring elements from being deployed to the default environment even though the same setting in the `./config/default/values.yaml` file is enabled.

```yaml
# ./config/environments.yaml
environments:
  default:
    values:
      - ../config/default/values.yaml
      - prometheusoperator:
          enabled: false
```

As you can see, I purposefully set the prometheus operator deployment as disabled in the default environment. This is because one usually does not want an entire monitoring environment when doing local testing. However, many helmfile stacks include monitors that depend on the Prometheus Operator CRDs. To work around this chicken-and-egg scenario, we simply disable the monitoring stack at the environment level and use that variable when deploying monitoring elements in the various stacks. The same type of logic is used with the 'ingress' stack, so we can deploy several stacks and simply skip ingress by ensuring that `ingress.enabled=false` if we so desire.

> **NOTE:** The approach I take is that all stacks are default enabled unless the environment overrides say otherwise. This does not mean that all stacks are deployed by default, this is what the cluster helmfiles are for. But we can add/remove individual stacks this way.

This effectively makes the `../config/default/values.yaml` our default settings with anything under `./config/environments.yaml` being the overrides. I source these settings in for all my helmfiles which allows me to update helm chart versions and sources from one values file. If I want to test a new chart I can create another environment with the value being overwritten in `./config/environments.yaml` while retaining all other chart settings.

> **NOTE:** In the past I've utilized many environment variables for these kinds of settings. While it is tempting to do so I recommend against this style of chart authoring as it makes for more complex pipelines and a less declarative approach to deployments.

If you wish to see the applied values after all these layers are merged, you can use a specially crafted helmfile stack to display the rendered values.

```bash
make build STACK=values PROFILE=<profile>
```

### Helmfile Stacks

For lack of better terminology, I use 'stack' to define a set of helm charts that I've stitched together as a deployable unit. Typically this is a handful of charts all targeting a single namespace of the same name. I attempt to create helmfiles in a manner which can be deployed individually, but there may be some cross dependencies for more complex stacks (for instance, if I use cert-manager CRDs).

A set of stacks can be applied, in order, via a single helmfile. This is how I deploy whole cluster configurations declaratively. Below you can see the default cicd cluster helmfile. As it is simply another helmfile, we keep the cluster helmfile definitions right alongside the rest of the stacks.

```yaml
# helmfiles/helmfile.cluster.cicd.yaml
---
bases:
- ../config/environments.yaml
---

helmfiles:
- ../helmfiles/helmfile.traefik.yaml
- ../helmfiles/helmfile.cert-manager.yaml
- ../helmfiles/helmfile.security.yaml
- ../helmfiles/helmfile.metricsserver.yaml
```

What makes this fun is that I have a whole other profile called 'vault' that creates its own cluster (aptly named 'vault'), which I put together simply for the convenience of bringing up a full cluster configured with consul and vault in a single command. But I don't need to use the vault profile to run consul and vault. We can also add individual helmfile stacks, or the entire cluster helmfile stack definition, at any time to any cluster while in any profile.

```bash
# Use the default profile
unset PROFILE

# Setup a default cluster
make cluster

# Install only the consul stack
make sync STACK=consul

# Or apply the cluster stack entirely outside of the vault profile (note that consul is already applied and will not be applied again even though it is listed in the cluster.vault helmfile)
make sync STACK=cluster.vault

# Now remove just consul for the hell of it (likely breaking vault though)
make destroy STACK=consul
```

This is a powerful way to quickly work on various deployments and allows for layering of stacks to get the results you are looking for in an iterative manner.

> **NOTE 1** Sometimes you may have to wait for the initial cluster to fully come up before running the apply for the entire cluster.

> **NOTE 2** Because many of the lookups involved with this framework are just simple scripts and I found no other good way to look up the helmfile values in a post-rendered state, I use a special helmfile that is never meant to be deployed, called `lookup.placeholder.yaml`. This allows me to render values for an environment using the `helmfile build` command; the output is then parsed for the values I need. I use this workaround and a hidden task to render out ingress internal DNS zone information for the dns proxy task, for instance.
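
A rough sketch of that lookup pattern (the binary location and the grep filter are assumptions; `helmfile build` with the `-f`/`-e` flags is standard helmfile usage):

```bash
# Render the merged helmfile values for an environment without deploying anything
./.local/bin/helmfile -f helmfiles/lookup.placeholder.yaml -e default build > /tmp/rendered.yaml

# Parse whatever values you need out of the rendered output
grep -A2 'internal:' /tmp/rendered.yaml
```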

# Examples

Here are a few example use cases worth looking over.

## Example 1 - Local Istio

Here is an example of using a profile to start up a local istio cluster. Istio has a moderately more complex deployment path than a standard helm chart, so this is a good example of how flexible KubeStitch can be. The profile definition is copied from the default profile and renamed, with a few changes.

```bash
KUBE_PROVIDER=kind
CLUSTER=istio
KUBE_VERSION=1.18.2
KIND_IMAGE_POSTFIX=@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f
ENVIRONMENT=istio
ADDITIONAL_TASKSETS=istio k9s metallb
```

Because istio will want to run its own ingress (and egress) gateways within the istio-system namespace, we create another helmfile environment which looks very similar to the default one:

```yaml
istio:
  missingFileHandler: Debug
  secrets:
    - ../config/{{ .Environment.Name }}/secrets.yaml
  values:
    - ../config/default/values.yaml
    - ../config/{{ .Environment.Name }}/values.yaml
    - prometheusoperator:
        enabled: false
    - ingress:
        internal:
          namespace: istio-system
    - traefik:
        enabled: false
```

We don't actually use any secrets or even a specific environment values.yaml file, but we define them anyway (in case we ever want to drop in overrides for chart locations, versions, or anything else that veers from the ../config/default/values.yaml definitions). We then put in a few overrides that will prevent deployment of the prometheus-operator and traefik (though the default cluster helmfile stack for the 'istio' cluster defined in the profile does not have these defined anyway). Finally, the internal ingress namespace is overridden from the default values here as well; we use this for the dnsforwarding tasks.

The istio operator chart wants to create its own namespace via its chart template, so we let it do so by not defining a namespace within the helmfile release. We also deploy the demo profile for istio using the CRD definition and a simple raw helm chart in the helmfile.istio.yaml file.

```bash
# Ensure we get the correct profile for the remaining commands
export PROFILE=istio

# Start the cluster
make cluster

# If you are on linux you can also start forwarding dns
make dnsforward/start
```

This installs the istio operator, istio itself, and some dashboard ingress definitions for these things.

> **NOTE** Because we are using the operator, it can take a few minutes for the istio deployment to get rolled out.

If you want to see more than just a base deployment, deploy the bookinfo helmfile stack as well.

```bash
# Deploy bookinfo demo app if you like
make sync STACK=bookinfo
```

At this point you should be able to go to the example bookinfo microservice deployment by visiting [http://bookinfo.int.micro.svc/productpage](http://bookinfo.int.micro.svc/productpage) from your local machine. Open the kiali dashboard at [http://kiali.int.micro.svc](http://kiali.int.micro.svc) (login: admin/admin) to see the istio magic at work.

You can then screw around with the cluster to your heart's content.

```bash
# configure local kubectl to access your cluster
export KUBECONFIG=`make .kube/config/file`

# view the virtual services (ingresses) for the dashboards
kubectl -n istio-system get virtualservice
```

To clean things up.

```bash
make dnsforward/stop cluster/stop
unset PROFILE KUBECONFIG
```

If k3d is more your style, you can repeat this entire set of directions with another profile I set up and tested a few minutes after doing this with kind. Just use the istio-k3d profile instead!

## Example 2 - K3S with HomeAssistant

This is a somewhat unique example. Basically, I used this framework for a bare metal k3s cluster at home for Home Assistant. I created another kubernetes provider specifically for k3s (`inc/makefile.cluster.k3s`) along with a profile to use it (`profiles/homeassist.env`). With this profile and provider, I overwrite the default location of the KUBE_CONFIG so that after installation of the cluster elements the config is automatically updated for my standard account. This particular provider does require sudo rights.

A neat trick I use here is to set `CLUSTER` to 'haas' in the profile and create a `helmfile.cluster.haas.yaml` file for deploying all stacks to this deployment even though it is not local.

```bash
export PROFILE=homeassist
make cluster
```

The k3s provider I created specifically disables many of the default deployment settings of k3s, notably the default 'local-path' storage provider. I instead use NFS mounts in my home setup via the nfs-client-provisioner. This gets installed via helmfile like 99% of the cluster configuration.

While I was testing this out on an old Shuttle PC, I found out that some memory sticks were bad. While running memtest86+ against the server (which takes forever and a day), I quickly set up another profile called 'kind_homeassist' so I could continue vetting out my kubernetes deployment stacks. In a few minutes I had the same deployment up and running via the kind provider.

```bash
export PROFILE=kind_homeassist
make cluster
make sync STACK=homeassistant
```

You may notice that I use the cicd cluster for the local kind install. I know it works locally, and since I'm not using NFS mounts or anything like that in my docker based cluster, it makes sense to start from a default baseline cluster and simply add the stacks I'm testing manually afterwards.

Because I'm using some localized IP addresses for both the nfs provisioner chart and traefik, I created another helmfile environment in 'config/environments.yaml' with these override settings in place. I do not yet know how to extract the extrapolated values yaml from helmfile, so for now I also have to put these values in the 'config/homeassist/values.yaml' file for the MetalLB deployment scripts and the dnsforward tasks to use.

> **NOTE**: In this setup I run KubeStitch from my server and then access it remotely via kubectl. To do this I have to copy the config file that gets generated over to my workstation with `scp zloeber@servername:~/.kube/config ~/.kube/config` and then edit it to replace 127.0.0.1 with the IP address of the server.
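
One way to script that copy-and-edit step (the hostname and the 192.168.1.50 address are placeholders for your own server):

```bash
# Copy the generated kubeconfig from the server to the workstation
scp zloeber@servername:~/.kube/config ~/.kube/config

# Point the API server address at the server rather than the loopback address
sed -i 's/127.0.0.1/192.168.1.50/' ~/.kube/config
```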

## Example 3 - Deployment As an Artifact

I've done a bit of a proof of concept around initializing a cluster's deployed stacks and figured I'd include it as a third example of how you can use this framework. The idea is to turn the entire cluster deployment, with its helmfiles and definitions, into a localized immutable artifact. This is because sometimes external helm charts change without notice or proper version bumps.

Also, I just wanted to see what it would take to do a single push deployment to a cluster that then runs all the helmfile logic for that cluster. It turns out it is not so hard. To run such a thing for the default profile, we just create a job that uses a docker image built from this repo and set environment variables for ENVIRONMENT and CLUSTER. As long as we run it in a namespace that has cluster-admin rights, it will run without issues. I created a helmfile to do this task.

```bash
# Start the base cluster without any helm charts being synced
make cluster/start

# Sync the clusterinit stack
make sync STACK=clusterinit
```

After a few minutes the local cluster will have traefik, certmanager, and rbacmanager installed (and you should see some successful jobs having been run in the cluster-init namespace).

The docker image itself does the repo and chart dependency updates. When the image runs on the cluster, we skip this step by using the `--skip-deps` flag in the helmfile sync command.
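
The in-cluster invocation would look roughly like this (the helmfile path and environment are assumptions based on the cicd cluster helmfile shown earlier):

```bash
# Repositories and chart dependencies were already resolved when the image was built,
# so they are skipped at sync time
helmfile -f helmfiles/helmfile.cluster.cicd.yaml -e default sync --skip-deps
```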

This is a proof of concept only. Next steps would be to make the process more secure by using a versioned image in the job (not latest), and an additional task should be run after the helmfile sync completes to selectively remove the role binding on the namespace's default service account.

# Secrets

Any secrets should be put into your environment for use in tasksets that require them. For this I personally use direnv with a local `.envrc` file.

```bash
## Example direnv file for exporting a gitlab token used to login to the gitlab cli
export GITLAB_TOKEN=<your-token-here>
```

## Lens (GUI Console)

Lens is a particularly useful GUI app for mucking about in Kubernetes. It is also cross-platform. If you are running a Linux host, you can use this framework to automatically download the app and access clusters with it. I've added some scriptwork to automatically clear the Lens clusters and add new configuration in a few commands. Here is how it works.

```bash
# First ensure that the taskset gets loaded (it should not be in any profile by default)
export CUSTOM_TASKSETS=lens

# Download the lens AppImage and run it at least once then close out of it
make deps lens

# Assuming your cluster has been started already, you can add it to the lens config automatically then restart the app.
# By default, this will first clear any added clusters in the configuration (localized in the ./.local/Lens/lens-cluster-store.json file)
make lens/addcluster lens
```

The running cluster will be the first one in the list (at the upper left). Lens stores entire cluster configurations in its backend store, so you will need to do this each time you rebuild a cluster and wish to use this app to manage it.

> **NOTE** The Lens app is cross platform but the taskset is targeted towards linux hosts only currently as the AppImage format is a single download and able to be automated without root access. For a cross-platform console dashboard you can also use the `k9s` taskset. Otherwise you will have to manually download the appropriate version of Lens for your platform and remove/add the cluster configurations manually at this time.

# Troubleshooting

To enable helm debugging, pass `DEBUG_MODE=on` to any command.
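
For example (`traefik` is just one of the stacks referenced elsewhere in this document):

```bash
# Sync the traefik stack with helm debugging enabled
make sync STACK=traefik DEBUG_MODE=on
```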

## Additional Tips

- `cluster/start` task will always first try to destroy the cluster before starting it.
- `cluster/start` task always spins up a bare deployment (no helmfile stacks applied)
- `cluster` task runs the `cluster/start` and `sync` tasks. Unless `STACK=<stack>` is specified, this defaults to syncing the cluster-named helmfile stack (`helmfiles/helmfile.cluster.<cluster>.yaml`)
- `sync` will default to `STACK=cluster.$(CLUSTER)`
- The above information means `make cluster/start sync` will always recreate a cluster from scratch then install the default cluster helmfile stack of charts for the current profile's cluster.
- All of the commands above are condensed into `make cluster`
- There is a ton of extra 'stuff' in this repo that still needs to be cleaned out or revisited; not all files serve a purpose (yet)
- Along the same lines, there are a ton of 'helmfiles' in the wip folder that worked with env vars at one point. I'll slowly move these out of wip when I'm able to do so or the need comes up.
- MetalLB is used when a loadbalancer is required.
- ~~Currently the loadbalancer deployment will use IP addresses between 172.17.0.100 and 172.17.0.110. This is the bridge IP subnet of docker on my workstation. You can modify this range in the config file within `./deploy/.metallb/metallb-config.yaml`.~~ This is/was only for kind and is now automatically determined when metallb is deployed to the cluster. You can still override this in the `./config/environment.default.yaml` file by adding `stacks.ingress.internalLBSubnet` and assigning it a CIDR subnet instead. Otherwise it grabs the `DOCKER_NETWORK` subnet and replaces the last digits with 1.0/24 (in my case it now ends up being 172.21.1.0/24 since I kept screwing around with the bridge network...whoops.)
- Currently I source in all repositories regardless of whether they are used or not; this slows down initial syncing and probably can be improved upon somehow. If you have already run `make sync` once and thus have the remote chart repositories synced up, you can iterate over local helmfile changes more quickly by switching to `make charts STACK=<stack>` instead. The helmfile charts subcommand is meant for offline chart deployments.
- The `dnsforward/*` tasks are a bit janky, as it is really hard to account for all possible scenarios. Generally, only use the dnsforward tasks for local cluster testing (and make any additional forwarding updates/changes via the web interface if the logic is not working for you).

# Known Issues

I just want to note that currently this framework requires the helm repositories for every helmfile to be registered, whether you use those helmfiles or not. This list can grow quite quickly. It would not be very hard to have a per-environment configuration of the repo list, but you would have to update most of the base helmfiles to accommodate it.

## Minikube on Linux

I actively avoid minikube on Linux as it has given me nothing but problems. If you really do want to use minikube on Linux, here are some notes.

If you are running Ubuntu 20.04 here is a snippet to get docker-ce installed:

```bash
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt install docker-ce
sudo usermod -aG docker ${USER}
```

And a snippet to get podman installed:

```bash
source /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt update && sudo apt install -y podman
```

That is all I have for you, good luck.

# Advanced Usage

Using KubeStitch fully means you will likely need to add/remove/change your clusters frequently. From my experience, this is the generic process for doing so:

1. You will likely onboard a new helm chart (usually by converting a helm chart to a helmfile)
2. You may bundle several helmfiles into a single cluster helmfile for a cluster deployment
3. If you get this far, you may then also look to 'productionalize' the deployment to work within multiple environments or even ingress zones.

This all starts with converting a helm chart to a helmfile though.

## Helm to Helmfile

Typically the development process starts with a need for a particular helm chart deployment. This deployment will typically need to be stripped of:

- default ingress definitions (we use our own so that all ingresses are managed from the ingress controller on down)
- subcharts (these should each be turned into their own helmfile deployment)
- any namespace generation
- default metrics collection or monitoring configurations (again, we manage these on our own across the whole cluster)

To convert a typical helm chart, a few templates have been dropped into the `helmfiles/templates` folder. Copy one of these into the `helmfiles` directory as `helmfile.chartname.yaml`, replace 'chartname' with your chart, and update the default details (and likely the values). Some additional noodling in the original helm chart will need to be done to isolate any service names used so they can be referenced in your own ingress definitions (if the chart requires one, that is).
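
A rough sketch of that workflow (the template file name is an assumption; check `helmfiles/templates` for the actual names):

```bash
# Start from one of the provided templates (file name assumed)
cp helmfiles/templates/helmfile.template.yaml helmfiles/helmfile.chartname.yaml

# Edit the chart source, version, namespace, and values for your chart
$EDITOR helmfiles/helmfile.chartname.yaml

# Test it against a running cluster
make sync STACK=chartname
```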

At this stage, a well authored helmfile can technically stand on its own and be applied to any current Kubernetes cluster. But more should be done to future-proof the deployment a bit and allow for more centralized updates and puzzle-piecing down the line.

## Config File Updates

Some files you should update at this point:

|File|Purpose|
|---|---|
|`config/environments.yaml`|Any environment-specific values should be placed here (not needed in most cases)|
|`config/repositories.yaml`|Any additional helm repos for your chart should be added here|
|`config/default/values.yaml`|Any default values used in the helmfile should be defined here|

If all is well you should be able to apply your chart via kubestitch to any current cluster:

`make sync STACK=chartname`

# Troubleshooting

This set of Makefiles, scripts, glue, and prayers works well for me but may be problematic for some. Here are specific notes which may or may not help you fix issues you run into.

## Mac Issues

There are two issues I've run into while getting this to work properly on a Mac. They are largely due to difficulties with the networking stack in the most recent releases of macOS. These issues fall into two categories: DNS resolution, and routable interface handling in hyperkit (the Mac virtualization layer).

### DNS

TBD

### Virtual Network Interfaces

The best article on this particular issue [can be read here](https://www.thehumblelab.com/kind-and-metallb-on-mac/). Be warned that getting this working requires the installation of additional external software and other tricks. It is almost easier to just run KubeStitch from a local Ubuntu virtual machine in VirtualBox or a similar hypervisor.

The overall steps to get things working are as follows:

```bash
brew bundle --file ./config/osx/Brewfile
osascript -e 'quit app "Docker"'
./config/osx/docker-tuntap-osx/sbin/docker_tap_install.sh
./config/osx/docker_tap_up.sh
sudo route -v add -net 172.18.0.1 -netmask 255.255.0.0 10.0.75.2
```

Once this has been done, you should technically be able to bring things up as normal and access them via the MetalLB provisioned loadbalancer IP that gets assigned to the traefik loadbalancer service. I do not believe this accounts for being connected to a VPN, though.

# Why Makefiles?

Great question! One which I sometimes struggle to answer. There are dozens of other task runners out there, but make is probably one of the older ones that is well supported across a number of different systems. I personally use it where I might otherwise use bash scripts. It allows me to quickly view the commands being run (if I've not purposefully hidden them with a well placed prepended @ symbol). Plus, I'm used to slinging makefiles, so why not? Besides, most repos worth their salt have at least one of these to bootstrap things in some manner, so it doesn't hurt to get to know them a bit.

> **NOTE** If you are copying task definitions out into standalone scripts, remember to replace '$$' with '$' wherever you see it.
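
For instance, a hypothetical task line and its standalone-script equivalent:

```bash
# Inside a Makefile recipe, shell variables are escaped with a double dollar sign:
#   @echo "Using kubeconfig: $${KUBECONFIG}"
# The same line in a standalone bash script uses a single dollar sign:
echo "Using kubeconfig: ${KUBECONFIG}"
```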

# Resources

[Helmfile](https://github.com/roboll/helmfile) - The ultimate helm chart stitcher

[My Archetype Chart](https://github.com/zloeber/archetype-chart) - I use this quite a bit for standardized ingress among other things

[Manage Helm Charts With Helmfile](https://www.arthurkoziel.com/managing-helm-charts-with-) - Good article on some techniques to implement DRY helmfile deployments

[Helm Secrets Plugin](https://github.com/zendesk/helm-secrets) - Helm plugin for secrets management

[Istio Practice Deployment](https://github.com/RothAndrew/istio-practice/tree/master/eks) - Inspired me to finish this framework

[Fury Kubernetes Distribution](https://github.com/sighupio/fury-distribution) - The basic concept of 'stacks' that I have been putting together using helmfiles has been done in this distribution using kustomize instead. Inspiring work but far too difficult to modify and use for rapid stack stitching from my experience.

[CloudPosse's Helmfiles](https://github.com/cloudposse/helmfiles) - A very impressive and well written set of helmfiles. Inspirational and if I'm honest, better than my own charts.

[MetalLB](https://metallb.universe.tf/) - The software loadbalancer used in several of the cluster profiles

[DPS](http://mageddo.github.io/dns-proxy-server/latest/en/) - DNS Proxy Server container used to forward ingress zone requests to local clusters