Hybrid Cloud Demo using OpenShift on Multiple Clouds (Public/Private)
https://github.com/redhat-developer-demos/hybrid-cloud
- Host: GitHub
- URL: https://github.com/redhat-developer-demos/hybrid-cloud
- Owner: redhat-developer-demos
- License: apache-2.0
- Created: 2020-10-15T13:38:07.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2022-10-13T07:40:29.000Z (over 2 years ago)
- Last Synced: 2023-02-28T11:56:53.173Z (over 2 years ago)
- Language: Java
- Size: 541 KB
- Stars: 73
- Watchers: 11
- Forks: 51
- Open Issues: 1
Metadata Files:
- Readme: README.adoc
- License: LICENSE
= Hybrid Cloud
image:https://github.com/redhat-developer-demos/hybrid-cloud/workflows/backend/badge.svg[]
image:https://github.com/redhat-developer-demos/hybrid-cloud/workflows/frontend/badge.svg[]

*Tested with skupper 1.0.2*
== TL;DR
*Hybrid Cloud demo*: Quarkus backends distributed across different OpenShift clusters installed on public clouds, connected by link:https://skupper.io/[Skupper] and consumed by a frontend.
You can find the guide on how to install such clusters in the link:installation/README.adoc[installation] directory.
https://youtu.be/Y0g4tQ8Prs0
=== Install Skupper
[source, shell-session]
----
# Windows
curl -fLO https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-windows-amd64.zip
unzip skupper-cli-1.0.2-windows-amd64.zip
mkdir %UserProfile%\bin
move skupper.exe %UserProfile%\bin
set PATH=%PATH%;%UserProfile%\bin

# Linux
curl -fL https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-linux-amd64.tgz | tar -xzf -
mkdir $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin

# macOS
curl -fL https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-mac-amd64.tgz | tar -xzf -
mkdir $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin
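
# Optional sanity check (any platform): print the installed CLI version
skupper version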
----

=== Deploy Services
IMPORTANT: If you are using `minikube` as one cluster, run `minikube tunnel` in a new terminal.
As Kubernetes gives you cloud portability, the workload can be deployed in any Kubernetes context.
Deploy the backend with the following three commands on all clusters, changing only the `WORKER_CLOUD_ID` value to differentiate between them. The name of the cloud or its location/geography is useful to illustrate where the work is being done.

[source, shell-session]
----
kubectl create namespace hybrid
kubectl -n hybrid apply -f backend.yml
kubectl -n hybrid set env deployment/hybrid-cloud-backend WORKER_CLOUD_ID="localhost" # aws, azr, gcp
----
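
A minimal sketch of repeating these steps on each cluster, assuming hypothetical `kubectl` contexts named `aws`, `azr`, and `gcp` (substitute your own; list them with `kubectl config get-contexts`):

[source, shell-session]
----
for ctx in aws azr gcp; do
  kubectl --context "$ctx" create namespace hybrid
  kubectl --context "$ctx" -n hybrid apply -f backend.yml
  kubectl --context "$ctx" -n hybrid set env deployment/hybrid-cloud-backend WORKER_CLOUD_ID="$ctx"
done
----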

The annotation has already been applied in the `backend.yml` file, but if you wish to apply it manually, use the following command.

[source, shell-session]
----
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
----
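
To double-check that the annotation is in place, a quick inspection (any equivalent command works):

[source, shell-session]
----
kubectl -n hybrid get service hybrid-cloud-backend -o yaml | grep skupper.io/proxy
----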

Deploy the frontend to your *main* cluster. The Kubernetes service is of type `LoadBalancer`.
If you are deploying the frontend in your public cluster, open the `frontend.yml` file and modify the Ingress configuration with your host:

[source, yaml]
----
spec:
  rules:
  - host: ""
----
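
For example, with a hypothetical host (replace `frontend.apps.example.com` with a domain that resolves to your cluster's Ingress):

[source, yaml]
----
spec:
  rules:
  - host: "frontend.apps.example.com"
----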

In your *main* cluster, deploy the frontend by calling:

[source, shell-session]
----
kubectl apply -f frontend.yml
----

TIP: On OpenShift you can run `oc expose service hybrid-cloud-frontend` after deploying the frontend resource; modifying the Ingress configuration is then not required. Of course, the first approach works on OpenShift as well.
To find the frontend URL, use the Route on OpenShift:
[source, shell-session]
----
kubectl get routes | grep frontend
----

On vanilla Kubernetes, try the external IP of the Service:
[source, shell-session]
----
# AWS
kubectl get service hybrid-cloud-frontend -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"

# Azure, GCP
kubectl get service hybrid-cloud-frontend -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
----
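
With the address in hand, a quick smoke test (assuming the frontend's service port 8080, as shown in the service listing further below; `<frontend-address>` stands for the hostname or IP returned above):

[source, shell-session]
----
curl http://<frontend-address>:8080/
----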

In your *main* cluster, init `skupper` and create the `connection-token`:

[source, shell-session]
----
skupper init --console-auth unsecured # <1>
Skupper is now installed in namespace 'hybrid'. Use 'skupper status' to get more information.
skupper status
Skupper is installed in namespace 'hybrid'. Status pending...
----
<1> This lets anyone access the Skupper UI to visualize the clouds. Fine for demos, not to be used in production.

See the status of the Skupper pods.
It takes a bit of time (usually around 2 minutes) until the pods are running:

[source, shell-session]
----
kubectl get pods

NAME                                        READY   STATUS    RESTARTS   AGE
hybrid-cloud-backend-5cbd67d789-mfvbz       1/1     Running   0          3m55s
hybrid-cloud-frontend-55bdf64c95-gk2tf      1/1     Running   0          3m27s
skupper-router-dd7dfff55-tklgg              2/2     Running   0          59s
skupper-service-controller-dc779b7c-5prhc   1/1     Running   0          56s
----

Finally, create a token:

[source, shell-session]
----
skupper token create token.yaml -t cert
Connection token written to token.yaml
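# Copy token.yaml to wherever you run the CLI for the other clusters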
----

In *all the other clusters*, use the connection token created in the previous step:
[source, shell-session]
----
skupper init
skupper link create token.yaml
----

Check the service status on all clusters:
[source, shell-session]
----
skupper service status
Services exposed through Skupper:
╰─ hybrid-cloud-backend (http port 8080)
   ╰─ Targets:
      ╰─ app.kubernetes.io/name=hybrid-cloud-backend,app.kubernetes.io/version=1.0.0 name=hybrid-cloud-backend
----

Check the link status on the 2nd/3rd cluster:
[source, shell-session]
----
skupper link status
Link link1 is active
----

This has been the short version to get you started; continue reading if you want to learn how to build the Docker images, deploy them, etc.
=== Skupper UI
If you run:
[source, shell-session]
----
kubectl get services

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)               AGE
hybrid-cloud-backend    ClusterIP      172.30.157.62   <none>                                                                    8080/TCP              10m
hybrid-cloud-frontend   LoadBalancer   172.30.70.80    acf3bee14b0274403a6f02dc062a3784-405180745.eu-west-1.elb.amazonaws.com    8080:32156/TCP        10m
skupper                 ClusterIP      172.30.128.55   <none>                                                                    8080/TCP,8081/TCP     7m50s
skupper-router          ClusterIP      172.30.7.7      <none>                                                                    55671/TCP,45671/TCP   7m53s
skupper-router-local    ClusterIP      172.30.8.239    <none>                                                                    5671/TCP              7m53s
----
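
The `skupper` service above serves the console that `--console-auth unsecured` opened up. One way to reach it locally (a sketch; on OpenShift a Route may already expose it):

[source, shell-session]
----
kubectl -n hybrid port-forward service/skupper 8080:8080
# then browse to http://localhost:8080
----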

== Services

=== Backend
If you want to build, push and deploy the service:
[source, shell-session]
----
cd backend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr
----

If the service is already pushed to quay.io, you can skip the build and push parts:
[source, shell-session]
----
cd backend
./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
----

=== Frontend
If you want to build, push and deploy the service:
[source, shell-session]
----
cd frontend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr -Dquarkus.kubernetes.host=
----

If the service is already pushed to quay.io, you can skip the build and push parts:
[source, shell-session]
----
cd frontend
./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
----

=== Cloud Providers
The following profiles are provided: `-Pazr`, `-Paws`, `-Pgcp`, and `-Plocal`; each simply sets an environment variable to identify the cluster.
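
For example, to build and deploy against a GCP cluster (same flags as above; only the profile changes):

[source, shell-session]
----
./mvnw clean package -DskipTests -Pgcp -Dquarkus.kubernetes.deploy=true
----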
=== Setting up Skupper
Make sure you have at least the `backend` project deployed on 2 different clusters. The `frontend` project can be deployed to just one cluster.
Here, we will make the assumption that we have it deployed in a local cluster *local* and a public cluster *public*.
Make sure to have 2 terminals with separate sessions, each logged into one of your clusters with the correct namespace context (but within the same folder).
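
A hypothetical way to keep the two sessions separate, assuming per-cluster kubeconfig files named `config-public` and `config-local` (adjust paths to your setup):

[source, shell-session]
----
# Terminal 1: the public cluster
export KUBECONFIG=$HOME/.kube/config-public

# Terminal 2: the local cluster
export KUBECONFIG=$HOME/.kube/config-local
----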
==== Install the Skupper CLI
Follow the instructions provided https://skupper.io/start/index.html#step-1-install-the-skupper-command-line-tool-in-your-environment[here].
==== Skupper setup
. In your *public* terminal session:
```
skupper init --site-name public
skupper token create private-to-public.yaml
```

. In your *local* terminal session:
```
skupper init --site-name private
skupper link create private-to-public.yaml
```

==== Annotate the services to join the Virtual Application Network
. In the terminal for the *local* cluster, annotate the hybrid-cloud-backend service:
```
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
```

. In the terminal for the *public* cluster, annotate the hybrid-cloud-backend service:
```
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
```

Both services are now connected; if you scale one to zero or it gets overloaded, traffic is transparently load-balanced to the other cluster.
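
A quick way to see the failover (a sketch; run in the *local* terminal, then scale back up):

```
kubectl scale deployment hybrid-cloud-backend --replicas=0
# requests are now served by the public cluster's backend
kubectl scale deployment hybrid-cloud-backend --replicas=1
```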