{"id":13839370,"url":"https://github.com/admiraltyio/multicluster-controller","last_synced_at":"2026-01-12T14:29:26.039Z","repository":{"id":57480212,"uuid":"153655737","full_name":"admiraltyio/multicluster-controller","owner":"admiraltyio","description":"A Library for Building Hybrid and Multicloud Kubernetes Operators","archived":false,"fork":false,"pushed_at":"2020-06-03T21:17:16.000Z","size":4823,"stargazers_count":246,"open_issues_count":4,"forks_count":20,"subscribers_count":12,"default_branch":"master","last_synced_at":"2025-06-25T15:26:38.847Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://admiralty.io","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/admiraltyio.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-10-18T16:38:30.000Z","updated_at":"2025-03-28T02:38:30.000Z","dependencies_parsed_at":"2022-09-26T17:41:39.935Z","dependency_job_id":null,"html_url":"https://github.com/admiraltyio/multicluster-controller","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"purl":"pkg:github/admiraltyio/multicluster-controller","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/admiraltyio%2Fmulticluster-controller","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/admiraltyio%2Fmulticluster-controller/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/admiraltyio%2Fmulticluster-controller/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/admiraltyio%2Fmulticluster-controller/manifests","owner_url":"https://repo
s.ecosyste.ms/api/v1/hosts/GitHub/owners/admiraltyio","download_url":"https://codeload.github.com/admiraltyio/multicluster-controller/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/admiraltyio%2Fmulticluster-controller/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264721342,"owners_count":23653923,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-04T17:00:20.593Z","updated_at":"2026-01-12T14:29:26.025Z","avatar_url":"https://github.com/admiraltyio.png","language":"Go","readme":"# Multicluster-Controller\n\nMulticluster-controller is a Go library for building Kubernetes controllers that need to watch resources in multiple clusters. It uses the best parts of [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) (the library powering [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) and now [operator-sdk](https://github.com/operator-framework/operator-sdk)) and replaces its API (the `manager`, `controller`, `reconcile`, and `handler` packages) to support multicluster operations.\n\nWhy? 
Check out [Admiralty's blog post introducing multicluster-controller](https://admiralty.io/blog/introducing-multicluster-controller/).\n\n## Table of Contents\n\n- [How it Works](#how-it-works)\n- [Getting Started](#getting-started)\n- [Configuration](#configuration)\n- [Usage with Custom Resources](#usage-with-custom-resources)\n- [API Reference](#api-reference)\n\n## How it Works\n\nHere is a minimal multicluster controller that watches pods in two clusters. On pod events, it simply logs the pod's cluster name, namespace, and name. In a way, the only thing controlled by this controller is the standard output, but it illustrates a basic scaffold:\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"log\"\n\n\t\"k8s.io/api/core/v1\"\n\t\"k8s.io/sample-controller/pkg/signals\"\n\n\t\"admiralty.io/multicluster-controller/pkg/cluster\"\n\t\"admiralty.io/multicluster-controller/pkg/controller\"\n\t\"admiralty.io/multicluster-controller/pkg/manager\"\n\t\"admiralty.io/multicluster-controller/pkg/reconcile\"\n\t\"admiralty.io/multicluster-service-account/pkg/config\"\n)\n\nfunc main() {\n\tstopCh := signals.SetupSignalHandler()\n\tctx, cancel := context.WithCancel(context.Background())\n\tgo func() {\n\t\t\u003c-stopCh\n\t\tcancel()\n\t}()\n\n\tco := controller.New(\u0026reconciler{}, controller.Options{})\n\n\tcontexts := [2]string{\"cluster1\", \"cluster2\"}\n\tfor _, kubeCtx := range contexts {\n\t\tcfg, _, err := config.NamedConfigAndNamespace(kubeCtx)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\tcl := cluster.New(kubeCtx, cfg, cluster.Options{})\n\t\tif err := co.WatchResourceReconcileObject(ctx, cl, \u0026v1.Pod{}, controller.WatchOptions{}); err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t}\n\n\tm := manager.New()\n\tm.AddController(co)\n\n\tif err := m.Start(stopCh); err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n\ntype reconciler struct{}\n\nfunc (r *reconciler) Reconcile(req reconcile.Request) (reconcile.Result, error) {\n\tlog.Printf(\"%s / %s / %s\", 
req.Context, req.Namespace, req.Name)\n\treturn reconcile.Result{}, nil\n}\n```\n\n1. `Cluster`s have arbitrary names. Indeed, Kubernetes clusters are unaware of their names at the moment—apimachinery's `ObjectMeta` struct has a `ClusterName` field, but it [\"is not set anywhere right now and apiserver is going to ignore it if set in create or update request.\"](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMeta)\n1. `Cluster`s are configured using regular [client-go](https://github.com/kubernetes/client-go) [rest.Config](https://godoc.org/k8s.io/client-go/rest#Config) structs. They can be created, for example, from [kubeconfig files](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) or [service account imports](https://admiralty.io/blog/introducing-multicluster-service-account/). We recommend using the [config](https://godoc.org/admiralty.io/multicluster-service-account/pkg/config) package of [multicluster-service-account](https://github.com/admiraltyio/multicluster-service-account) in either case.\n1. A `Cluster` struct is created for each kubeconfig context and/or service account import. `Cluster`s hold references to cluster-scoped dependencies: clients, caches, etc. (In controller-runtime, the `Manager` holds a unique set of those.)\n1. A `Controller` struct is created, and configured to watch the Pod resource in each cluster. Internally, on each pod event, a reconcile `Request`, which consists of the cluster name, namespace, and name of the pod, is added to the `Controller`'s [workqueue](https://godoc.org/k8s.io/client-go/util/workqueue).\n1. `Request`s are to be processed asynchronously by the `Controller`'s `Reconciler`, whose level-based logic is provided by the user (e.g., create controlled objects, call other services).\n1. Finally, a `Manager` is created, and the `Controller` is added to it. 
In multicluster-controller, the `Manager`'s only responsibilities are to start the `Cluster`s' caches, wait for them to sync, then start the `Controller`s. (The `Manager` knows about the caches from the `Controller`s.)\n\n## Getting Started\n\nA good way to get started with multicluster-controller is to run the `helloworld` example, which is more or less the controller presented above in [How it Works](#how-it-works). The other examples illustrate an actual reconciliation logic and the use of a custom resource. Look at their source code, change them to your needs, and refer to the [API documentation](#api-reference) as you go.\n\n### 0. Requirements\n\nYou need at least two clusters and a kubeconfig file configured with two contexts, one for each of the clusters. If you already have two clusters/contexts set up, note the **context** names. In this guide, we use \"cluster1\" and \"cluster2\" as context names. (If your kubeconfig file contains more contexts/clusters/users, that's fine, they'll be ignored.)\n\n**Important:** if your kubeconfig uses token-based authentication (e.g., GKE by default, or Azure with AD integration), make sure a valid (non-expired) token is cached before you continue. To refresh the tokens, run simple commands like:\n\n```bash\nkubectl cluster-info --context cluster1\nkubectl cluster-info --context cluster2\n```\n\nNote: In production, you wouldn't use your user kubeconfig. 
Instead, we recommend [multicluster-service-account](https://admiralty.io/blog/introducing-multicluster-service-account/).\n\nIf running the manager out-of-cluster, both clusters must be accessible from your machine. If running it in-cluster in cluster1, cluster2 must be accessible from cluster1; if you run it in a third cluster instead, both cluster1 and cluster2 must be accessible from that cluster.\n\n#### (Optional) Creating Two Clusters on Google Kubernetes Engine\n\nAssuming the `gcloud` CLI is installed, you're logged in, a default compute zone and project are set, and the Kubernetes Engine API is enabled in the project, here's a small script to create two clusters and rename their corresponding kubeconfig contexts \"cluster1\" and \"cluster2\":\n\n```bash\nset -e\nPROJECT=$(gcloud config get-value project)\nZONE=$(gcloud config get-value compute/zone)\nfor NAME in cluster1 cluster2; do\n  gcloud container clusters create $NAME\n  gcloud container clusters get-credentials $NAME\n  CONTEXT=gke_$PROJECT\"_\"$ZONE\"_\"$NAME\n  sed -i -e \"s/$CONTEXT/$NAME/g\" ~/.kube/config\n  kubectl create clusterrolebinding cluster-admin-binding \\\n    --clusterrole cluster-admin \\\n    --user $(gcloud config get-value account)\n  kubectl cluster-info # caches a token in kubeconfig\ndone\n```\n\n### 1. 
Running the Manager\n\nYou can run the manager either out-of-cluster or in-cluster.\n\n#### Out-Of-Cluster\n\nBuild and run the manager from source:\n\n```bash\ngo get admiralty.io/multicluster-controller\ncd $GOPATH/src/admiralty.io/multicluster-controller\ngo run examples/helloworld/main.go --contexts cluster1,cluster2\n```\n\nRun some other pod from a second terminal, for example:\n\n```bash\nkubectl run nginx --image=nginx\n```\n\n#### In-Cluster\n\nSave your kubeconfig file as a secret:\n\n```bash\nkubectl create secret generic kubeconfig \\\n  --from-file=config=$HOME/.kube/config\n```\n\nThen run a manager pod with the kubeconfig file mounted as a volume, and the `KUBECONFIG` environment variable set to its path:\n\n```bash\ncat \u003c\u003cEOF | kubectl create -f -\napiVersion: v1\nkind: Pod\nmetadata:\n  name: helloworld\nspec:\n  containers:\n  - env:\n    - name: KUBECONFIG\n      value: /root/.kube/config\n    image: quay.io/admiralty/multicluster-controller-example-helloworld\n    name: manager\n    args: [\"--contexts\", \"cluster1,cluster2\"]\n    volumeMounts:\n    - mountPath: /root/.kube\n      name: kubeconfig\n      readOnly: true\n  volumes:\n  - name: kubeconfig\n    secret:\n      secretName: kubeconfig\nEOF\n```\n\nRun some other pod and check the logs:\n\n```bash\nkubectl run nginx --image=nginx\nkubectl logs helloworld\n```\n\nIf you cannot trust the pre-built image, you can build your own from source:\n\n```bash\ngo get admiralty.io/multicluster-controller\ncd $GOPATH/src/admiralty.io/multicluster-controller\ndocker build \\\n  --file examples/Dockerfile \\\n  --build-arg target=admiralty.io/multicluster-controller/examples/helloworld \\\n  --tag $IMAGE .\n```\n\n### 2. 
Understanding the Output\n\nHere is a sample output, showing the system pods when the manager starts, followed by three lines for the nginx pod:\n\n```\n2018/10/11 18:53:52 cluster2 / kube-system / kube-dns-5dcfcbf5fb-89ngc\n2018/10/11 18:53:52 cluster2 / kube-system / kube-proxy-gke-cluster4-default-pool-cd1af1fa-z5pn\n...\n2018/10/11 18:53:52 cluster1 / kube-system / kube-dns-autoscaler-69c5cbdcdd-bjn5x\n2018/10/11 18:53:52 cluster1 / kube-system / fluentd-gcp-v2.0.17-q8g8x\n...\n2018/10/11 18:54:28 cluster2 / default / nginx-8586cf59-q59nb\n2018/10/11 18:54:28 cluster2 / default / nginx-8586cf59-q59nb\n2018/10/11 18:54:34 cluster2 / default / nginx-8586cf59-q59nb\n```\n\nWhen the cache synced, one reconcile request per pod was added to the controller's work queue. They were all different and all were processed. On the other hand, the nginx pod generated six events when it was created: Scheduled, SuccessfulMountVolume, Pulling, Pulled, Created, and Started; see for yourself by running:\n\n```bash\nkubectl describe pod nginx | tail -n 10\n```\n\nHowever, only three reconcile requests were processed. Indeed, the requests were all equal (same context, namespace, and name), so while the controller was processing one of them, several others were added to and **grouped** by the work queue before the controller could process another one (pod events can follow each other very quickly). That's normal and it illustrates the [asynchronous and level-based characteristics of the controller pattern](https://admiralty.io/blog/kubernetes-custom-resource-controller-and-operator-development-tools/#the-controller-pattern).\n\n### 3. Further Examples\n\n#### deploymentcopy\n\nThe `deploymentcopy` example filters events, watching only the default namespace. 
Also, it implements an actual reconciliation loop, following the common pattern illustrated in the figure below, where the controller object is an original Deployment in cluster1, and the controlled object is a copy in cluster2.\n\n![controller logic](doc/controller-logic.svg)\n\n~~Note: Cross-cluster garbage collection is still in the works, so we must delete the controlled object when the controller object has disappeared.~~ Cross-cluster garbage collection has been [extracted into a reusable pattern](https://github.com/admiraltyio/multicluster-controller/blob/master/pkg/patterns/gc/gc.go), but we still need to update the examples.\n\n![cross-cluster garbage collection with finalizers](doc/gc.png)\n\nTo run `deploymentcopy` out-of-cluster:\n\n```bash\ngo run examples/deploymentcopy/cmd/manager/main.go cluster1 cluster2\n```\n\n#### podghost\n\nThe `podghost` example's reconciliation logic is similar to `deploymentcopy`'s, but it creates PodGhost objects from Pods, where PodGhost is a custom resource (see below, [Usage with Custom Resources](#usage-with-custom-resources)).\n\nThe PodGhost custom resource definition (CRD) must be created in \"cluster2\" before running the manager:\n\n```bash\nkubectl create -f examples/podghost/kustomize/crd.yaml \\\n  --context cluster2\n```\n\nThen, out-of-cluster:\n\n```bash\ngo run examples/podghost/cmd/manager/main.go cluster1 cluster2\n```\n\n## Usage with Custom Resources\n\nThe Kubernetes controller pattern is often used in conjunction with [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). To use multicluster-controller with a custom resource, we need two things:\n1. a custom resource definition (CRD), and\n2. API code for the custom resource.\n\n### Custom Resource Definition\n\nReminder: CustomResourceDefinition is itself a Kubernetes resource. 
A custom resource's schema can be defined by creating a CRD object, which specifies an API group (e.g., `multicluster.admiralty.io`), version (e.g., `v1alpha1`), kind (e.g., `PodGhost`), OpenAPI validation rules, [among other things](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/).\n\nThere's nothing special about multicluster-controller in this regard. Just don't forget to create your CRDs in all of the clusters that need them.\n\n### Custom API Code\n\nYou need to define a struct for the custom resource (e.g., PodGhost), a corresponding List struct (e.g., PodGhostList), with proper json field tags, and DeepCopy methods. The structs must be registered with the [scheme](https://godoc.org/github.com/kubernetes/client-go/kubernetes/scheme) used for the `Cluster`s at run time. Setting up those things manually is cumbersome and error-prone.\n\nLuckily, [there are tools to help us](https://admiralty.io/blog/kubernetes-custom-resource-controller-and-operator-development-tools/). You could copy-paste and modify the structs from [sample-controller](https://github.com/kubernetes/sample-controller) and use [code-generator](https://github.com/kubernetes/code-generator) to generate the DeepCopy methods. You can also leverage the scaffolding of [operator-sdk](https://github.com/operator-framework/operator-sdk) or [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder).\n\nIn the end, don't forget to register the structs with the scheme, as in this snippet [from the `podghost` example](examples/podghost/pkg/controller/podghost/podghost_controller.go):\n\n```go\nif err := apis.AddToScheme(ghostCluster.GetScheme()); err != nil {\n  return nil, err\n}\n```\n\n#### Using operator-sdk 0.0.7\n\n```bash\noperator-sdk new foo \\\n  --api-version multicluster.admiralty.io/v1alpha1 \\\n  --kind Foo\n```\n\nYou would then rewrite `cmd/foo/main.go` and `pkg/apis/stub/handler.go` for multicluster-controller. 
Note that `operator-sdk new` creates a lot of other files that you may or may not need.\n\n#### Using kubebuilder 1.0.4\n\nWe only care about the `kubebuilder create api` subcommand of `kubebuilder`, but unfortunately it requires files created by `kubebuilder init`, namely:\n- `hack/boilerplate.go.txt`, the copyright and license notice to use as a header in generated files,\n- and `PROJECT`, which contains metadata such as kubebuilder's major version number, the custom API's domain, and the package's import path.\n\nYou can either create only those two files (option 1) or run `kubebuilder init` and delete a bunch of files you don't need (option 2).\n\n##### Option 1\n\n```bash\necho '/*\nCopyright 2018 The Multicluster-Controller Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/' \u003e hack/boilerplate.go.txt\n\necho 'version: \"1\"\ndomain: admiralty.io\nrepo: admiralty.io/foo' \u003e PROJECT\n\nkubebuilder create api \\\n  --group multicluster \\\n  --version v1alpha1 \\\n  --kind Foo \\\n  --controller=false \\\n  --make=false\n\ngo generate ./pkg/apis # runs k8s.io/code-generator/cmd/deepcopy-gen/main.go\n```\n\n##### Option 2\n\n```bash\nkubebuilder init \\\n  --domain admiralty.io \\\n  --owner \"The Multicluster-Controller Authors\"\n\nkubebuilder create api \\\n  --group multicluster \\\n  --version v1alpha1 \\\n  --kind Foo \\\n  --controller=false\n  # calls make, which calls go generate\n\nrm pkg/controller/controller.go\n# and rewrite 
cmd/manager/main.go\n```\n\n## API Reference\n\nhttps://godoc.org/admiralty.io/multicluster-controller/\n\nor\n\n```bash\ngo get admiralty.io/multicluster-controller\ngodoc -http=:6060\n```\n\nthen http://localhost:6060/pkg/admiralty.io/multicluster-controller/\n","funding_links":[],"categories":["Framework"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fadmiraltyio%2Fmulticluster-controller","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fadmiraltyio%2Fmulticluster-controller","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fadmiraltyio%2Fmulticluster-controller/lists"}