# Kubernetes Cluster API Provider Kubernetes

The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative,
Kubernetes-style APIs to cluster creation, configuration and management.

This project is a [Cluster API Infrastructure
Provider](https://cluster-api.sigs.k8s.io/reference/providers.html#infrastructure) implementation
using Kubernetes itself to provide the infrastructure. Pods using the
[kindest/node](https://hub.docker.com/r/kindest/node/) image built for
[kind](https://github.com/kubernetes-sigs/kind) are created and configured to serve as Nodes which
form a cluster.
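
For example, once the Quickstart below has created a cluster (named `example` here for
illustration), the relationship is visible from both sides:

```sh
# Outer cluster: each inner-cluster Machine is backed by a Pod running the kindest/node image
kubectl get machines
kubectl get pods -o wide

# Inner cluster: the same machines appear as Nodes
kubectl --kubeconfig example-kubeconfig get nodes
```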

The primary use cases for this project are testing and experimentation.

## Quickstart

We will deploy a Kubernetes cluster to provide the infrastructure, install the Cluster API
controllers and configure an example Kubernetes cluster using the Cluster API and the Kubernetes
infrastructure provider. We will refer to the infrastructure cluster as the outer cluster and the
Cluster API cluster as the inner cluster.

### Infrastructure

Any recent Kubernetes cluster (1.16+) should be suitable for the outer cluster.

We are going to use [Calico](https://docs.projectcalico.org/v3.11/getting-started/kubernetes/) as an
overlay implementation for the inner cluster with [IP-in-IP
encapsulation](https://docs.projectcalico.org/v3.11/getting-started/kubernetes/installation/config-options#configuring-ip-in-ip)
enabled so that our outer cluster does not need to know about the inner cluster's Pod IP range. To
make this work we need to ensure that the `ipip` kernel module is loadable and that IPv4
encapsulated packets are forwarded by the kernel.

On GKE this can be accomplished as follows:

```sh
# The GKE Ubuntu image includes the ipip kernel module
# Calico handles loading the module if necessary
# https://github.com/projectcalico/felix/blob/9469e77e0fa530523be915dfaa69cc42d30b8317/dataplane/linux/ipip_mgr.go#L107-L110
MANAGEMENT_CLUSTER_NAME="management"
gcloud container clusters create $MANAGEMENT_CLUSTER_NAME \
--image-type=UBUNTU \
--machine-type=n1-standard-2

# Allow IP-in-IP traffic between outer cluster Nodes from inner cluster Pods
CLUSTER_CIDR=`gcloud container clusters describe $MANAGEMENT_CLUSTER_NAME --format="value(clusterIpv4Cidr)"`
gcloud compute firewall-rules create allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip \
--source-ranges=$CLUSTER_CIDR \
--allow=ipip

# Forward IPv4 encapsulated packets
kubectl apply -f hack/forward-ipencap.yaml
```
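
The `hack/forward-ipencap.yaml` manifest referenced above ships with this repository. As a rough
sketch of the idea (not necessarily the actual contents of that file), a privileged host-network
DaemonSet can make sure each Node's `FORWARD` chain accepts IP-in-IP (`ipencap`, protocol 4)
packets:

```sh
# Sketch only: ensure encapsulated (protocol 4) packets are forwarded on every Node
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: forward-ipencap
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: forward-ipencap
  template:
    metadata:
      labels:
        name: forward-ipencap
    spec:
      hostNetwork: true
      containers:
      - name: forward-ipencap
        image: alpine:3.12
        securityContext:
          privileged: true
        command:
        - sh
        - -c
        - |
          apk add --no-cache iptables
          # Insert an ACCEPT rule for forwarded IP-in-IP traffic if it is not already present
          iptables -C FORWARD -p 4 -j ACCEPT 2>/dev/null || iptables -I FORWARD -p 4 -j ACCEPT
          # Keep the Pod running so the DaemonSet stays healthy
          while true; do sleep 3600; done
EOF
```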

### Installation

```sh
# Install clusterctl
# https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
CLUSTER_API_VERSION=v0.3.15
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/$CLUSTER_API_VERSION/clusterctl-`uname -s | tr '[:upper:]' '[:lower:]'`-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl

# Configure the Kubernetes infrastructure provider
mkdir -p $HOME/.cluster-api
# The provider release URL, Kubernetes version and machine counts below are illustrative;
# point them at a published release of this repository and adjust to your needs
cat > $HOME/.cluster-api/clusterctl.yaml <<EOF
providers:
- name: kubernetes
  url: https://github.com/dippynark/cluster-api-provider-kubernetes/releases/latest/infrastructure-components.yaml
  type: InfrastructureProvider
EOF

# Initialise the management (outer) cluster
clusterctl init --infrastructure kubernetes

# Generate and apply an example cluster manifest
CLUSTER_NAME="example"
clusterctl config cluster $CLUSTER_NAME \
--infrastructure kubernetes \
--kubernetes-version v1.17.0 \
--control-plane-machine-count 1 \
--worker-machine-count 1 \
| kubectl apply -f -

# Wait for the cluster kubeconfig to be generated and retrieve it
until [ -n "`kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' 2>/dev/null`" ] ; do
sleep 1
done
kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > $CLUSTER_NAME-kubeconfig

# Switch to the new Kubernetes cluster. If the cluster's API Server endpoint is not reachable from
# your local machine, you can exec into a controller Node (Pod) and run
# `export KUBECONFIG=/etc/kubernetes/admin.conf` there instead
export KUBECONFIG=$CLUSTER_NAME-kubeconfig

# Wait for the API Server to come up
until kubectl get nodes &>/dev/null; do
sleep 1
done

# Install Calico. This could also be done using a ClusterResourceSet (see the sketch after this
# code block)
# https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Interact with your new cluster!
kubectl get nodes
```
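
As noted in the comment above, the CNI could instead be delivered using the experimental
ClusterResourceSet feature. A minimal sketch, assuming the feature gate was enabled when
initialising the management cluster (`EXP_CLUSTER_RESOURCE_SET=true clusterctl init ...`) and that
the Cluster object carries an illustrative `cni: calico` label:

```sh
# Run against the outer (management) cluster
curl -L https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
kubectl create configmap calico-addon --from-file=calico.yaml

# Apply the Calico manifest to every Cluster labelled cni=calico
cat <<EOF | kubectl apply -f -
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico
spec:
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
  - name: calico-addon
    kind: ConfigMap
EOF
```

The ConfigMap must live in the same namespace as the ClusterResourceSet and the target Clusters.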

### Clean up

```sh
unset KUBECONFIG
rm -f $CLUSTER_NAME-kubeconfig
kubectl delete cluster $CLUSTER_NAME
# If using the GKE example above
yes | gcloud compute firewall-rules delete allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip
yes | gcloud container clusters delete $MANAGEMENT_CLUSTER_NAME --async
```

## TODO

- Implement a finalizer for control plane Pods to prevent deletions that would lose etcd quorum (i.e. PDB-like behaviour; see the sketch after this list)
- Work out why a KubeadmControlPlane (KCP) with 3 replicas has 0 failure tolerance
  - https://github.com/kubernetes-sigs/cluster-api/blob/master/controlplane/kubeadm/controllers/remediation.go#L158-L159
- Improve performance of control plane
- Improve recovery of a persistent control plane with 3 Nodes
- Use Services to keep etcd hostnames consistent? This would help if all control plane Nodes are deleted at once
- Default cluster service type to ClusterIP
  - https://book.kubebuilder.io/cronjob-tutorial/webhook-implementation.html
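
As a rough illustration of the disruption-budget idea in the first item above, a
PodDisruptionBudget over the control plane Pods would preserve etcd quorum during voluntary
disruptions (the labels in the selector are hypothetical):

```sh
# Hypothetical labels: adjust the selector to whatever labels the provider applies to
# control plane Pods
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-control-plane
spec:
  minAvailable: 2
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: example
      cluster.x-k8s.io/control-plane: ""
EOF
```

A PodDisruptionBudget only guards against evictions, which is presumably why the item above also
considers a finalizer to cover outright Pod deletion.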