https://github.com/admiraltyio/multicluster-controller
A Library for Building Hybrid and Multicloud Kubernetes Operators
- Host: GitHub
- URL: https://github.com/admiraltyio/multicluster-controller
- Owner: admiraltyio
- License: other
- Created: 2018-10-18T16:38:30.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2020-06-03T21:17:16.000Z (over 4 years ago)
- Last Synced: 2024-10-21T20:02:02.340Z (about 2 months ago)
- Language: Go
- Homepage: https://admiralty.io
- Size: 4.6 MB
- Stars: 247
- Watchers: 13
- Forks: 20
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome-cloud-native - Multicluster-Controller - A Library for Building Hybrid and Multicloud Kubernetes Operators. (Framework)
README
# Multicluster-Controller
Multicluster-controller is a Go library for building Kubernetes controllers that need to watch resources in multiple clusters. It uses the best parts of [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) (the library powering [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) and now [operator-sdk](https://github.com/operator-framework/operator-sdk)) and replaces its API (the `manager`, `controller`, `reconcile`, and `handler` packages) to support multicluster operations.
Why? Check out [Admiralty's blog post introducing multicluster-controller](https://admiralty.io/blog/introducing-multicluster-controller/).
## Table of Contents
- [How it Works](#how-it-works)
- [Getting Started](#getting-started)
- [Configuration](#configuration)
- [Usage with Custom Resources](#usage-with-custom-resources)
- [API Reference](#api-reference)

## How it Works
Here is a minimal multicluster controller that watches pods in two clusters. On pod events, it simply logs the pod's cluster name, namespace, and name. In a way, the only thing controlled by this controller is the standard output, but it illustrates a basic scaffold:
```go
package main

import (
	"context"
	"log"

	"k8s.io/api/core/v1"
	"k8s.io/sample-controller/pkg/signals"

	"admiralty.io/multicluster-controller/pkg/cluster"
	"admiralty.io/multicluster-controller/pkg/controller"
	"admiralty.io/multicluster-controller/pkg/manager"
	"admiralty.io/multicluster-controller/pkg/reconcile"
	"admiralty.io/multicluster-service-account/pkg/config"
)

func main() {
	stopCh := signals.SetupSignalHandler()
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		<-stopCh
		cancel()
	}()

	co := controller.New(&reconciler{}, controller.Options{})

	contexts := [2]string{"cluster1", "cluster2"}
	for _, kubeCtx := range contexts {
		cfg, _, err := config.NamedConfigAndNamespace(kubeCtx)
		if err != nil {
			log.Fatal(err)
		}
		cl := cluster.New(kubeCtx, cfg, cluster.Options{})
		if err := co.WatchResourceReconcileObject(ctx, cl, &v1.Pod{}, controller.WatchOptions{}); err != nil {
			log.Fatal(err)
		}
	}

	m := manager.New()
	m.AddController(co)

	if err := m.Start(stopCh); err != nil {
		log.Fatal(err)
	}
}

type reconciler struct{}

func (r *reconciler) Reconcile(req reconcile.Request) (reconcile.Result, error) {
	log.Printf("%s / %s / %s", req.Context, req.Namespace, req.Name)
	return reconcile.Result{}, nil
}
```

1. `Cluster`s have arbitrary names. Indeed, Kubernetes clusters are unaware of their names at the moment—apimachinery's `ObjectMeta` struct has a `ClusterName` field, but it ["is not set anywhere right now and apiserver is going to ignore it if set in create or update request."](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMeta)
1. `Cluster`s are configured using regular [client-go](https://github.com/kubernetes/client-go) [rest.Config](https://godoc.org/k8s.io/client-go/rest#Config) structs. They can be created, for example, from [kubeconfig files](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) or [service account imports](https://admiralty.io/blog/introducing-multicluster-service-account/). We recommend using the [config](https://godoc.org/admiralty.io/multicluster-service-account/pkg/config) package of [multicluster-service-account](https://github.com/admiraltyio/multicluster-service-account) in either case.
1. A `Cluster` struct is created for each kubeconfig context and/or service account import. `Cluster`s hold references to cluster-scoped dependencies: clients, caches, etc. (In controller-runtime, the `Manager` holds a unique set of those.)
1. A `Controller` struct is created, and configured to watch the Pod resource in each cluster. Internally, on each pod event, a reconcile `Request`, which consists of the cluster name, namespace, and name of the pod, is added to the `Controller`'s [workqueue](https://godoc.org/k8s.io/client-go/util/workqueue).
1. `Request`s are to be processed asynchronously by the `Controller`'s `Reconciler`, whose level-based logic is provided by the user (e.g., create controlled objects, call other services).
1. Finally, a `Manager` is created, and the `Controller` is added to it. In multicluster-controller, the `Manager`'s only responsibilities are to start the `Cluster`s' caches, wait for them to sync, then start the `Controller`s. (The `Manager` knows about the caches from the `Controller`s.)

## Getting Started
A good way to get started with multicluster-controller is to run the `helloworld` example, which is more or less the controller presented above in [How it Works](#how-it-works). The other examples illustrate an actual reconciliation logic and the use of a custom resource. Look at their source code, change them to your needs, and refer to the [API documentation](#api-reference) as you go.
### 0. Requirements
You need at least two clusters and a kubeconfig file configured with two contexts, one for each of the clusters. If you already have two clusters/contexts set up, note the **context** names. In this guide, we use "cluster1" and "cluster2" as context names. (If your kubeconfig file contains more contexts/clusters/users, that's fine, they'll be ignored.)
**Important:** if your kubeconfig uses token-based authentication (e.g., GKE by default, or Azure with AD integration), make sure a valid (non-expired) token is cached before you continue. To refresh the tokens, run simple commands like:
```bash
kubectl cluster-info --context cluster1
kubectl cluster-info --context cluster2
```

Note: In production, you wouldn't use your user kubeconfig. Instead, we recommend [multicluster-service-account](https://admiralty.io/blog/introducing-multicluster-service-account/).
If you run the manager out-of-cluster, both clusters must be accessible from your machine. If you run it in-cluster, say in cluster1, then cluster2 must be accessible from cluster1; if you run it in a third cluster, cluster3, then cluster1 and cluster2 must both be accessible from cluster3.
#### (Optional) Creating Two Clusters on Google Kubernetes Engine
Assuming the `gcloud` CLI is installed, you're logged in, a default compute zone and project are set, and the Kubernetes Engine API is enabled in the project, here's a small script to create two clusters and rename their corresponding kubeconfig contexts to "cluster1" and "cluster2":
```bash
set -e
PROJECT=$(gcloud config get-value project)
ZONE=$(gcloud config get-value compute/zone)
for NAME in cluster1 cluster2; do
gcloud container clusters create $NAME
gcloud container clusters get-credentials $NAME
CONTEXT=gke_${PROJECT}_${ZONE}_${NAME}
sed -i -e "s/$CONTEXT/$NAME/g" ~/.kube/config
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
kubectl cluster-info # caches a token in kubeconfig
done
```

### 1. Running the Manager
You can run the manager either out-of-cluster or in-cluster.
#### Out-Of-Cluster
Build and run the manager from source:
```bash
go get admiralty.io/multicluster-controller
cd $GOPATH/src/admiralty.io/multicluster-controller
go run examples/helloworld/main.go --contexts cluster1,cluster2
```

Then, from a second terminal, create a pod in one of the clusters to trigger events, for example:
```bash
kubectl run nginx --image=nginx
```

#### In-Cluster
Save your kubeconfig file as a secret:
```bash
kubectl create secret generic kubeconfig \
--from-file=config=$HOME/.kube/config
```

Then run a manager pod with the kubeconfig file mounted as a volume and the `KUBECONFIG` environment variable set to its path.
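By way of illustration, such a pod could look like the sketch below. The image name is hypothetical (use any image containing your manager binary), and the mount path is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multicluster-controller
spec:
  containers:
  - name: manager
    image: example.com/helloworld:latest  # hypothetical image running the example manager
    env:
    - name: KUBECONFIG
      value: /etc/kubeconfig/config       # path of the mounted kubeconfig
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubeconfig
  volumes:
  - name: kubeconfig
    secret:
      secretName: kubeconfig              # the secret created above
```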
## Usage with Custom Resources

If your controller watches custom resources, you need Go types and generated deepcopy functions for them, which [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) can scaffold. There are two options.

##### Option 1
```bash
# Boilerplate header prepended to generated files (content elided):
cat <<EOF > hack/boilerplate.go.txt
EOF
echo 'version: "1"
domain: admiralty.io
repo: admiralty.io/foo' > PROJECT
kubebuilder create api \
--group multicluster \
--version v1alpha1 \
--kind Foo \
--controller=false \
--make=false
go generate ./pkg/apis # runs k8s.io/code-generator/cmd/deepcopy-gen/main.go
```

##### Option 2
```bash
kubebuilder init \
--domain admiralty.io \
--owner "The Multicluster-Controller Authors"

kubebuilder create api \
--group multicluster \
--version v1alpha1 \
--kind Foo \
--controller=false
# calls make, which calls go generate

rm pkg/controller/controller.go
# and rewrite cmd/manager/main.go
```

## API Reference
https://godoc.org/admiralty.io/multicluster-controller/
or
```bash
go get admiralty.io/multicluster-controller
godoc -http=:6060
```

then open http://localhost:6060/pkg/admiralty.io/multicluster-controller/