{"id":20077635,"url":"https://github.com/stolostron/deploy","last_synced_at":"2025-05-16T06:08:21.887Z","repository":{"id":37610691,"uuid":"246079750","full_name":"stolostron/deploy","owner":"stolostron","description":"Deploy Development Builds of Open Cluster Management (OCM) on RedHat Openshift Container Platform","archived":false,"fork":false,"pushed_at":"2025-05-12T11:30:20.000Z","size":9410,"stargazers_count":164,"open_issues_count":23,"forks_count":156,"subscribers_count":22,"default_branch":"master","last_synced_at":"2025-05-12T12:37:50.562Z","etag":null,"topics":["deploy","hybrid-cloud","k8s","k8s-cluster","kubernetes","kubernetes-deployment","multi-cloud","multi-cloud-environments","multi-cloud-kubernetes","open-cluster-management","openshift","openshift-operator","operator"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/stolostron.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-03-09T15:59:21.000Z","updated_at":"2025-05-12T11:30:24.000Z","dependencies_parsed_at":"2024-01-13T01:58:49.758Z","dependency_job_id":"17283d8e-bb20-4402-90a8-772ac8d704f9","html_url":"https://github.com/stolostron/deploy","commit_stats":null,"previous_names":[],"tags_count":15,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stolostron%2Fdeploy","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stolostron%2Fdeploy/tags","releases_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stolostron%2Fdeploy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/stolostron%2Fdeploy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/stolostron","download_url":"https://codeload.github.com/stolostron/deploy/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254478193,"owners_count":22077676,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deploy","hybrid-cloud","k8s","k8s-cluster","kubernetes","kubernetes-deployment","multi-cloud","multi-cloud-environments","multi-cloud-kubernetes","open-cluster-management","openshift","openshift-operator","operator"],"created_at":"2024-11-13T15:09:26.327Z","updated_at":"2025-05-16T06:08:16.863Z","avatar_url":"https://github.com/stolostron.png","language":"Shell","readme":"\n# Deploy the _open-cluster-management_ project\n\n### Welcome!\n\nYou might be asking yourself, \"What is Open Cluster Management?\", well it is the _open-cluster-management_ project. View the _open-cluster-management_ architecture diagram:\n\n![Architecture diagram](images/arch.jpg)\n\n\u003eThe GitHub org and project are currently distinct from the SaaS offering named \"Red Hat OpenShift Cluster Manager\" but will ultimately co-exist/share technology as needed. Core technology, such as [Hive](https://github.com/openshift/hive) is already shared between the two offerings.\n\nKubernetes provides a platform to deploy and manage containers in a standard, consistent control plane. 
However, as application workloads move from development to production, they often require multiple fit-for-purpose Kubernetes clusters to support DevOps pipelines. Users such as administrators and site reliability engineers (SREs) face challenges as they work across a range of environments, including multiple data centers, private clouds, and public clouds that run Kubernetes clusters. The _open-cluster-management_ project provides the tools and capabilities to address these common challenges.\n\n_open-cluster-management_ provides end-to-end visibility and control to manage your Kubernetes environment. Take control of your application modernization program with management capabilities for cluster creation and application lifecycle, and with security and compliance enforced across data centers and hybrid cloud environments. Clusters and applications are all visible and managed from a single console with built-in security policies. Run your operations where Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.\n\nWith the _open-cluster-management_ project, you can complete the following tasks:\n\n  - Work across a range of environments, including multiple data centers, private clouds, and public clouds that run Kubernetes clusters.\n  - Easily create Kubernetes clusters and offer cluster lifecycle management in a single console.\n  - Enforce policies at the target clusters using Kubernetes-supported custom resource definitions.\n  - Deploy and maintain day-two operations of business applications distributed across your cluster landscape.\n\nOur code is open! 
To reach us in the open source community please head to https://open-cluster-management.io, and you can also find us on Kubernetes Slack workspace: https://kubernetes.slack.com/archives/C01GE7YSUUF\n \nIf you're looking for RHACM, the Red Hat multicluster management product that runs on OpenShift, your Red Hat account team rep should be able to help you get an evaluation of ACM so that you can use the actual product bits in a supported way. There is also a self-supported evaluation if you prefer that, and you can get started right away at: https://www.redhat.com/en/technologies/management/advanced-cluster-management\n-\u003e click the “Try It” button. \n\n## Let's get started...\n\nYou can find our __work-in-progress__ documentation [here](https://github.com/stolostron/rhacm-docs/blob/doc_prod/README.md). Please read through the docs to find out how you can use the _open-cluster-management_ project. Oh, and please submit an issue for any problems you may find, or clarifications you might suggest.\n\nYou can find information on how to contribute to this project and our docs project in our [CONTRIBUTING.md](CONTRIBUTING.md) doc.\n\n#### Prereqs\n\nYou must meet the following requirements to install the _open-cluster-management_ project:\n\n- An OpenShift Container Platform (OCP) 4.3+ cluster available\n  - You must have a default storage class defined\n- `oc` (ver. 4.3+) \u0026 `kubectl` (ver. 
1.16+) configured to connect to your OCP cluster\n- `oc` is connected with adequate permissions to create new namespaces in your OCP cluster.\n- The following utilities are **required**:\n  - `sed`\n    - On **macOS** install using: `brew install gnu-sed`\n  - `jq`\n    - On **macOS** install using: `brew install jq`\n  - `yq` (v4.12+)\n    - On **macOS** install using: `brew install yq`\n- The following utilities are **optional**:\n  - `watch`\n    - On **macOS** install using: `brew install watch`\n\n#### Repo Structure and Organization\nThis repo contains three directories:\n  - `prereqs` - YAML definitions for prerequisite objects (namespaces and pull-secrets)\n  - `acm-operator` - YAML definitions for setting up a `CatalogSource` for our operator\n  - `multiclusterhub` - YAML definitions for creating an instance of `MultiClusterHub`\n\nEach of the three directories contains a `kustomization.yaml` file that will apply the YAML definitions to your OCP instance with the following command: `kubectl apply -k`.\n\nThere are __helper__ scripts in the root of this repo:\n  - `start.sh` - takes the edge off having to manually edit YAML files\n  - `uninstall.sh` - we're not perfect yet; includes additional scripting to ensure we clean up our mess on your OCP cluster.\n\nYou have multiple choices of installation:\n  - [the easy way](#deploy-using-the-startsh-script-the-easy-way) - using the provided `start.sh` script, which will assist you through the process.\n  - [the hard way](#the-hard-way) - instructions to deploy _open-cluster-management_ with only `oc` commands.\n  - [downstream images v2.0+](#deploying-downstream-builds-snapshots-for-product-quality-engineering-only-20) - instructions to deploy downstream images, i.e. for QE\n\nWhichever way you choose to go, you are going to need a `pull-secret` in order to gain access to our built images residing in our private [Quay environment](https://quay.io/stolostron). 
Please follow the instructions in [Prepare to deploy Open Cluster Management Instance](#prepare-to-deploy-stolostron-instance-only-do-once) to get your `pull-secret` set up.\n\n## Prepare to deploy Open Cluster Management Instance (only do once)\n\n1. Clone this repo locally\n    ```bash\n    git clone https://github.com/stolostron/deploy.git\n    ```\n\n2. Generate your pull-secret:\n   - ensure you have access to the quay org ([stolostron](https://quay.io/repository/stolostron/acm-custom-registry?tab=tags))\n   - to request access to [stolostron](https://quay.io/repository/stolostron/acm-custom-registry?tab=tags) in quay.io, external (non-Red Hat) users can contact the ACM BU via email at [acm-contact@redhat.com](mailto:acm-contact@redhat.com). Or, if you have access to Red Hat Slack, you can contact us on our Slack Channel [#forum-hypbld](https://redhat.enterprise.slack.com/archives/C04L50S5XM4) and indicate if you want upstream (`stolostron`) or downstream (`acm-d`) repos (or both).  We'll need your quay ID.  Once the team indicates they've granted you access, open your Notifications at quay.io and accept the invitation(s) waiting for you.\n   - you will also need a bot and token generated for each of the repositories you wish to use.\n   - acm-d (stolostron images are public)\n   - :exclamation: **save the secret file in the `prereqs` directory as `pull-secret.yaml`**\n   - :exclamation: **edit the `pull-secret.yaml` file and change the name to `multiclusterhub-operator-pull-secret`**\n      ```yaml\n      apiVersion: v1\n      kind: Secret\n      metadata:\n        name: multiclusterhub-operator-pull-secret\n      ...\n      ```\n\n## Deploy using the ./start.sh script (the easy way)\n\nWe've added a very simple `start.sh` script to make your life easier. To deploy downstream images, please refer to the \"Deploying downstream builds\" section below. 
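The rename in step 2 above can be sketched with `sed` (one of the required utilities). This is a minimal illustration only, run against a throwaway sample file: the path `/tmp/pull-secret.yaml` and the placeholder name `my-quay-pull-secret` are made up for the demo, and in practice you edit the real secret you downloaded from quay.io and saved as `prereqs/pull-secret.yaml`.

```shell
# Sketch only: create a sample secret standing in for the file you
# downloaded from quay.io (the real one lives at prereqs/pull-secret.yaml
# and carries your actual .dockerconfigjson data).
cat > /tmp/pull-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: my-quay-pull-secret
EOF

# Rename the secret to the name the operator expects.
# (GNU sed; on macOS install gnu-sed as noted in the prereqs.)
sed -i 's/^  name: .*/  name: multiclusterhub-operator-pull-secret/' /tmp/pull-secret.yaml

grep 'name:' /tmp/pull-secret.yaml
```

The same edit can of course be made by hand in any editor; only the `metadata.name` value matters.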
\n\nFirst, you need to `export KUBECONFIG=/path/to/some/cluster/kubeconfig` (or do an `oc login` that will set it for you).\n`deploy` installs ACM to the cluster configured in your `KUBECONFIG` env variable.\n\n_Optionally_ `export DEBUG=true` for additional debugging output for 2.1+ releases. `export USE_STARTING_CSV=true` to use an explicit `STARTING_CSV` variable.\n\n### Running start.sh\n\n1. Run the `start.sh` script. You have the following options when you run the command:\n    ```\n    -t modify the YAML but exit before applying the resources\n    --silent, skip all prompting, uses the previous configuration\n    --watch, will monitor the main Red Hat ACM pod deployments for up to 10min\n    --search, will activate search as part of the deployment.\n    \n    $ ./start.sh --watch --search\n    ```\n\n2. When prompted for the SNAPSHOT tag, either press `Enter` to use the previous tag, or provide a new SNAPSHOT tag.\n    - UPSTREAM snapshot tags - https://quay.io/repository/stolostron/acm-custom-registry?tab=tags\n    - DOWNSTREAM snapshot tags - https://quay.io/repository/acm-d/acm-custom-registry?tab=tags\n    \n    For example, your SNAPSHOT tag might resemble the following:\n    ```bash\n    2.0.5-SNAPSHOT-2020-10-26-21-38-29\n    ```\n    NOTE: To change the default SNAPSHOT tag, edit `snapshot.ver`, which contains a single line that specifies the SNAPSHOT tag.  This method of updating the default SNAPSHOT tag is useful when using the `--silent` option.\n\n3. Depending on your script option choice, `open-cluster-management` will either be deployed or still be deploying.\n\n    For version 2.1+, you can monitor the status fields of the multiclusterhub object created in the `open-cluster-management` namespace (namespace will differ if TARGET_NAMESPACE is set).\n\n    For version 2.0 and below, use `watch oc -n open-cluster-management get pods` to view the progress.\n\n4. 
The script provides you with the `Open Cluster Management` URL.\n\nNote: This script can be run multiple times and will attempt to continue where it left off. It is also good practice to run the `uninstall.sh` script if you have a failure and have installed multiple times.\n\n\n## Deploying Downstream Builds SNAPSHOTS for Product Quality Engineering (only 2.0+)\n\n### Requirements\n\n#### Required Access\n\nTo deploy downstream builds, you need access to pull the related images from the downstream mirror repository, quay.io/acm-d.  Access is internal to Red Hat only for Dev/Test/QE use.  Contact us in the Slack channel [#forum-hypbld](https://redhat.enterprise.slack.com/archives/C04L50S5XM4) on Red Hat Slack for access.\n\n#### Configuration\n\nTo deploy a downstream build from `quay.io/acm-d`, ensure that your OCP cluster meets the following requirements:\n\n1. The cluster must have an ImageContentSourcePolicy (**Caution**: if you modify this on a running cluster, it will cause a rolling restart of all nodes).\n    To create the ImageContentSourcePolicy run:\n\n    ```\n    echo \"\n    apiVersion: operator.openshift.io/v1alpha1\n    kind: ImageContentSourcePolicy\n    metadata:\n      name: rhacm-repo\n    spec:\n      repositoryDigestMirrors:\n      - mirrors:\n        - quay.io:443/acm-d\n        source: registry.redhat.io/rhacm2\n      - mirrors:\n        - quay.io:443/acm-d\n        source: registry.redhat.io/multicluster-engine\n      - mirrors:\n        - registry.redhat.io/openshift4/ose-oauth-proxy\n        source: registry.access.redhat.com/openshift4/ose-oauth-proxy\" | kubectl apply -f -\n    ```\n\n2. Add the pull-secrets for the `quay.io:443` registry with access to the `quay.io/acm-d` repository in your OpenShift \n   main pull-secret. 
(**Caution**: if you apply this on a pre-existing cluster, it will cause a rolling restart of all nodes).\n\n   ```\n   # Replace \u003cUSER\u003e and \u003cPASSWORD\u003e with your credentials\n   oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' \u003epull_secret.yaml\n   oc registry login --registry=\"quay.io:443\" --auth-basic=\"\u003cUSER\u003e:\u003cPASSWORD\u003e\" --to=pull_secret.yaml\n   oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml\n   rm pull_secret.yaml\n   ```\n\n   You can also set the pull secrets in the OpenShift console or using the [bootstrap repo](https://github.com/stolostron/bootstrap#how-to-use) at cluster create time.\n\n    Your OpenShift main pull secret should contain an entry with `quay.io:443`.\n    \u003cpre\u003e\n    {\n      \"auths\": {\n        \"cloud.openshift.com\": {\n          \"auth\": \"ENCODED SECRET\",\n          \"email\": \"email@address.com\"\n        },\n        \u003cb\u003e\"quay.io:443\": {\n          \"auth\": \"ENCODED SECRET\",\n          \"email\": \"\"\n        }\u003c/b\u003e\n      }\n    }\n    \u003c/pre\u003e\n\n3. Set the `QUAY_TOKEN` environment variable\n    \n    In order to get a `QUAY_TOKEN`, go to your quay.io \"Account Settings\" page by selecting your username/icon in the top right corner of the page, then \"Generate Encrypted Password\".  
\n    Choose \"Kubernetes Secret\" and copy just secret text that follows `.dockerconfigjson:`, `export DOCKER_CONFIG=` this value.\n    \n    If you copy the value of `.dockerconfigjson`, you can simplify setting the `QUAY_TOKEN` as follows:\n    \n    ```bash\n    export DOCKER_CONFIG=\u003cThe value after .dockerconfigjson from the quay.io\u003e\n    export QUAY_TOKEN=$(echo $DOCKER_CONFIG | base64 -d | sed \"s/quay\\.io/quay\\.io:443/g\" | base64)\n    ```\n    \n    (On Linux, use `export QUAY_TOKEN=$(echo $DOCKER_CONFIG | base64 -d | sed \"s/quay\\.io/quay\\.io:443/g\" | base64 -w 0)` to ensure that there are no line breaks in the base64 encoded token)\n\n### Deploy the downstream image\n\n**NOTE: You should only use a downstream build if you're doing QE on the final product builds.**\n\n```bash\nexport COMPOSITE_BUNDLE=true\nexport DOWNSTREAM=true\nexport CUSTOM_REGISTRY_REPO=\"quay.io:443/acm-d\"\nexport QUAY_TOKEN=\u003ca quay token with quay.io:443 as the auth domain\u003e\n./start.sh --watch\n```\n\n### Enable search later\n\nUse the following command to enable search\n```bash\noc set env deploy search-operator DEPLOY_REDISGRAPH=\"true\" -n INSTALL_NAMESPACE\n```\n\n### Deploy a managed cluster with downstream images\n\nRun on the **hub cluster**:\n\n```\n# Create a namespace managed cluster namespace on the hub cluster\nexport CLUSTER_NAME=managed-cluster1\noc new-project \"${CLUSTER_NAME}\"\noc label namespace \"${CLUSTER_NAME}\" cluster.open-cluster-management.io/managedCluster=\"${CLUSTER_NAME}\"\n\n# Create the managed cluster\necho \"\n    apiVersion: cluster.open-cluster-management.io/v1\n    kind: ManagedCluster\n    metadata:\n      name: ${CLUSTER_NAME}\n    spec:\n      hubAcceptsClient: true\" | kubectl apply -f -\n\n# Create the KlusterletAddonConfig\necho \"\napiVersion: agent.open-cluster-management.io/v1\nkind: KlusterletAddonConfig\nmetadata:\n  name: ${CLUSTER_NAME}\n  namespace: ${CLUSTER_NAME}\nspec:\n  clusterName: ${CLUSTER_NAME}\n  
clusterNamespace: ${CLUSTER_NAME}\n  applicationManager:\n    enabled: true\n  certPolicyController:\n    enabled: true\n  clusterLabels:\n    cloud: auto-detect\n    vendor: auto-detect\n  iamPolicyController:\n    enabled: true\n  policyController:\n    enabled: true\n  searchCollector:\n    enabled: true\n  version: 2.2.0\" | kubectl apply -f -\n\noc get secret \"${CLUSTER_NAME}\"-import -n \"${CLUSTER_NAME}\" -o jsonpath={.data.crds\\\\.yaml} | base64 --decode \u003e klusterlet-crd.yaml\noc get secret \"${CLUSTER_NAME}\"-import -n \"${CLUSTER_NAME}\" -o jsonpath={.data.import\\\\.yaml} | base64 --decode \u003e import.yaml\n```\n\nNext apply the saved YAML manifests to your **managed cluster**:\n\n```\n# Change kubeconfig to the managed cluster\n\n# Add quay credentials to the managed cluster too\n# Replace \u003cUSER\u003e and \u003cPASSWORD\u003e with your credentials\noc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' \u003epull_secret.yaml\noc registry login --registry=\"quay.io:443\" --auth-basic=\"\u003cUSER\u003e:\u003cPASSWORD\u003e\" --to=pull_secret.yaml\noc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml\nrm pull_secret.yaml\n\n# Apply klusterlet-crd\nkubectl apply -f klusterlet-crd.yaml\n\n# Replace the registry \"registry.redhat.io/rhacm2\" with \"quay.io:443/acm-d\" in import.yaml.\n# Edit in place with GNU sed: redirecting sed's output back onto its own input file\n# would truncate import.yaml before it is read.\nsed -i 's/registry.redhat.io\\/rhacm2/quay.io:443\\/acm-d/g' import.yaml\n\n# Apply the import.yaml\nkubectl apply -f import.yaml\n\n# Validate the pod status on the managed cluster\nkubectl get pod -n open-cluster-management-agent\n```\n\nValidate the imported cluster's status in the **hub cluster**:\n\n```\nkubectl get managedcluster ${CLUSTER_NAME}\nkubectl get pod -n open-cluster-management-agent-addon\n```\n\nTest that it works by creating a `ManifestWork` in the **hub cluster**:\n\n```\necho \"apiVersion: 
work.open-cluster-management.io/v1\nkind: ManifestWork\nmetadata:\n  name: mw-01\n  namespace: ${CLUSTER_NAME}\nspec:\n  workload:\n    manifests:\n      - apiVersion: v1\n        kind: Pod\n        metadata:\n          name: hello\n          namespace: default\n        spec:\n          containers:\n            - name: hello\n              image: busybox\n              command: [\"sh\", \"-c\", 'echo \"Hello, Kubernetes!\" \u0026\u0026 sleep 3600']\n          restartPolicy: OnFailure\" | kubectl apply -f -\n```\n\nOn the **managed cluster** validate that the hello pod is running:\n\n```\n$ kubectl get pods -n default\nNAME    READY   STATUS    RESTARTS   AGE\nhello   1/1     Running   0          3m23s\n```\n\n## To Delete a MultiClusterHub Instance (the easy way)\n\n1. Run the `uninstall.sh` script in the root of this repo.\n\n\n## To Delete the multiclusterhub-operator (the easy way)\n\n1. Run the `clean-clusters.sh` script, and enter `DESTROY` to delete any Hive deployments and detach all imported clusters.\n2. Run the `uninstall.sh` script in the root of this repo.\n\n### Troubleshooting\n1. If uninstall hangs on the helmRelease delete, you can run this command to move it along.  This is destructive and can result in orphaned objects.\n```bash\nfor helmrelease in $(oc get helmreleases.apps.open-cluster-management.io | tail -n +2 | cut -f 1 -d ' '); do oc patch helmreleases.apps.open-cluster-management.io $helmrelease --type json -p '[{ \"op\": \"remove\", \"path\": \"/metadata/finalizers\" }]'; done\n```\n2. If you need to get the build snapshot from your hub: the snapshot comes from the image tagging that CI/CD does to group components into builds. This snapshot is set in the catalogsource when deploying from acm-d, so to get the version that was deployed you would read the image of the ACM catalogsource. 
An example of how to do this is `oc get catalogsource acm-custom-registry -n openshift-marketplace -o jsonpath='{.spec.image}'`, which returns `quay.io/stolostron/acm-custom-registry:2.5.0-SNAPSHOT-2022-05-26-19-51-06`.\n\n\n#### the hard way\n\u003cdetails\u003e\u003csummary\u003eClick if you dare\u003c/summary\u003e\n\u003cp\u003e\n\n## Manually deploy using `kubectl` commands\n\n1. Create the prereq objects by applying the yaml definitions contained in the `prereqs` dir:\n  ```bash\n  kubectl apply --openapi-patch=true -k prereqs/\n  ```\n\n2. Update the `kustomization.yaml` file in the `acm-operator` dir to set `newTag`.\n  You can find a snapshot tag by viewing the list of tags available [here](https://quay.io/stolostron/acm-custom-registry). Use a tag that has the word `SNAPSHOT` in it.\n  For downstream deploys, make sure to set `newName` differently, usually to `acm-d`.\n    ```yaml\n    namespace: open-cluster-management\n\n    images:\n      - name: acm-custom-registry\n        newName: quay.io/stolostron/acm-custom-registry\n        newTag: 1.0.0-SNAPSHOT-2020-05-04-17-43-49\n    ```\n\n3. Create the `multiclusterhub-operator` objects by applying the yaml definitions contained in the `acm-operator` dir:\n    ```bash\n    kubectl apply -k acm-operator/\n    ```\n\n4. Wait for the subscription to be healthy:\n    ```bash\n    oc get subscription.operators.coreos.com acm-operator-subscription --namespace open-cluster-management -o yaml\n    ...\n    status:\n      catalogHealth:\n      - catalogSourceRef:\n          apiVersion: operators.coreos.com/v1alpha1\n          kind: CatalogSource\n          name: acm-operator-subscription\n          namespace: open-cluster-management\n          resourceVersion: \"1123089\"\n          uid: f6da232b-e7c1-4fc6-958a-6fb1777e728c\n        healthy: true\n        ...\n    ```\n\n5. 
Once the `open-cluster-management` CatalogSource is healthy, you can deploy the `example-multiclusterhub-cr.yaml`:\n    ```yaml\n    apiVersion: operator.open-cluster-management.io/v1\n    kind: MultiClusterHub\n    metadata:\n      name: multiclusterhub\n      namespace: open-cluster-management\n    spec:\n      imagePullSecret: multiclusterhub-operator-pull-secret\n    ```\n\n6. Create the `example-multiclusterhub` objects by applying the yaml definitions contained in the `multiclusterhub` dir:\n    ```bash\n    kubectl apply -k multiclusterhub/\n    ```\n\n## To Delete a MultiClusterHub Instance\n\n1. Delete the `example-multiclusterhub` objects by deleting the yaml definitions contained in the `multiclusterhub` dir:\n    ```bash\n    kubectl delete -k multiclusterhub/\n    ```\n\n2. Not all objects are currently being cleaned up by the `multiclusterhub-operator` upon deletion of a `multiclusterhub` instance... you can ensure all objects are cleaned up by executing the `uninstall.sh` script in the `multiclusterhub` dir:\n    ```bash\n    ./multiclusterhub/uninstall.sh\n    ```\n\nAfter completing the steps above, you can redeploy the `multiclusterhub` instance by simply running:\n    ```bash\n    kubectl apply -k multiclusterhub/\n    ```\n\n## To Delete the multiclusterhub-operator\n\n1. Delete the `multiclusterhub-operator` objects by deleting the yaml definitions contained in the `acm-operator` dir:\n    ```bash\n    kubectl delete -k acm-operator/\n    ```\n\n2. Not all objects are currently being cleaned up by the `multiclusterhub-operator` upon deletion. 
You can ensure all objects are cleaned up by executing the `uninstall.sh` script in the `acm-operator` dir:\n    ```bash\n    ./acm-operator/uninstall.sh\n    ```\n\nAfter completing the steps above, you can redeploy the `multiclusterhub-operator` by simply running:\n    ```bash\n    kubectl apply -k acm-operator/\n    ```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n# Upgrade for Downstream\nYou can test the upgrade process with `downstream` builds only, using this repo. To test an upgrade, follow the instructions below:\n\n1. Export environment variables needed for `downstream` deployment:  \n   ```\n   export CUSTOM_REGISTRY_REPO=quay.io/acm-d\n   export DOWNSTREAM=true\n   export COMPOSITE_BUNDLE=true\n   ```\n2. Apply the ImageContentSourcePolicy to redirect `registry.redhat.io/rhacm2` to `quay.io:443/acm-d`:\n   ```\n   oc apply -k addons/downstream\n   ```\n3. In order to perform an `upgrade`, you need to install a previously GA'd version of ACM. To do that you will need to set the following variables:\n   ```\n   export MODE=Manual     # MODE is set to Manual so that we can specify a previous version to install\n   export STARTING_VERSION=2.x.x  # Where 2.x.x is a previously GA'd version of ACM, e.g. `STARTING_VERSION=2.0.4`\n   ```\n4. Run the `start.sh` script:  \n   ```\n   ./start.sh --watch\n   ```\n\nOnce the installation is complete, you can then attempt to upgrade the ACM instance by running the `upgrade.sh` script. You will need to set additional variables in your environment to tell the upgrade script what you want it to do:\n1. Export environment variables needed by the `upgrade.sh` script:\n   ```\n   export NEXT_VERSION=2.x.x      # Where 2.x.x is some value greater than the version you previously defined in STARTING_VERSION=2.x.x\n   export NEXT_SNAPSHOT=2.X.X-DOWNSTREAM-YYYY-MM-DD-HH-MM-SS      # This variable will specify the registry pod and wait for completion\n   ```\n2. 
Now run the upgrade process:\n   ```\n   ./upgrade.sh\n   ```\n\n# Upgrades for snapshots\nThis approach mostly works, but we do not test or intend to support snapshot-to-snapshot upgrades.\n\n1. Connect to the Red Hat Advanced Cluster Management for Kubernetes OpenShift cluster\n2. Run the `upgrade-snapshot.sh` script\n   ```\n   ./upgrade-snapshot.sh [--watch] [--debug]\n   ```\n3. It will ask you to provide a snapshot to upgrade to. It does not validate the value, so if you put an older snapshot or an invalid one, the upgrade will get stuck. Press `ctrl-c` and try again.\n4. It takes about 5min to complete, and the final `--watch` output should look like this:\n    ```\n    Upgrade started\n    =---------{ Upgrade takes \u003c10min }---------=\n        Elapsed                    : 300s\n    1. Multiclusterhub CSV        : Succeeded\n    2. Multiclusterengine CSV     : Succeeded\n    3. Multiclusterengine operator: Available\n    4. Multiclusterhub operator   : Running\n\n    DONE!   \n    ```\n\n# MultiCluster Engine\n\nFor detailed instructions to install and manage the MultiCluster Engine, see the following [README](multiclusterengine/README.md).\n\n## Override MultiCluster Engine Catalogsource\n\nThe default MultiClusterEngine catalogsource can be overridden by defining the `MCE_SNAPSHOT_CHOICE` environment variable with the proper tag before calling the `./start.sh` script.\n\nExample:\n```bash\nMCE_SNAPSHOT_CHOICE=2.0.0-BACKPLANE-2021-12-02-18-35-02 ./start.sh\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstolostron%2Fdeploy","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fstolostron%2Fdeploy","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstolostron%2Fdeploy/lists"}