
https://github.com/kwonghung-YIP/setup-istio-multi-primary-diff-network

This post shows how to run the Istio “Install Multi-Primary on different networks” example on an old, low-end PC. The instructions cover everything from creating the Ubuntu VMs for the cluster nodes to verifying the Istio mesh; with the minimal configuration, the whole exercise takes around 2 to 3 hours to complete.
# Introduction

The [Istio “Install Multi-Primary on different networks” example](https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/) forms a single Istio mesh on top of two Kubernetes clusters. In this guide, we go through how to set up these two clusters from scratch, and finally implement the example on them.

## 1. Configuration

#### 1.1 Component Version
Component | Version
-- | --
VMware Workstation | 16.1.2 build
Linux distribution | Ubuntu 20.04.2 LTS (focal)
Container runtime | Docker Engine 20.10.7
Kubernetes | v1.21.2
CNI | Weave Net v2.8.1
Load Balancer Implementation | MetalLB v0.10.2
Istio | v1.10.1

#### 1.2 VMware Network config (NAT - VMnet8):
Config | Value
-- | --
Network Address | 194.89.64.0/24
Default Gateway | 194.89.64.2/24
Broadcast Address | 194.89.64.255
DNS Server | 1.1.1.1, 8.8.8.8
DHCP Range | 194.89.64.128 - 194.89.64.254
Cluster1 MetalLB Ext IP Range | 194.89.64.81 - 194.89.64.100
Cluster2 MetalLB Ext IP Range | 194.89.64.101 - 194.89.64.120
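
The two MetalLB external IP ranges above are what later get handed to MetalLB as address pools. As a sketch, cluster1's range could be declared like this (MetalLB v0.10.x is configured through a ConfigMap named `config` in the `metallb-system` namespace; the pool name `default` and `layer2` mode are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 194.89.64.81-194.89.64.100
```

Cluster2 would use the same layout with `194.89.64.101-194.89.64.120`.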

#### 1.3 Worker Nodes VM Settings:
Hostname | static IP | Core | Ram | Disk
-- | -- | -- | -- | --
ubuntu-20042-base | 194.89.64.10/24 | - | - | -
cluster1-ctrl-plane | 194.89.64.11/24 | 2 | 4G | 20G
cluster1-worker-node01 | 194.89.64.12/24 | 2 | 4G | 20G
cluster2-ctrl-plane | 194.89.64.13/24 | 2 | 4G | 20G
cluster2-worker-node01 | 194.89.64.14/24 | 2 | 4G | 20G

## 2. Prepare the base image

#### 2.1 Create an Ubuntu 20.04.2 LTS Virtual Machine
- Enable DHCP to get IP address
- Create an admin account, in my case **hung**
- Install ssh server

#### [take a VM snapshot as checkpoint]

#### 2.2 Apply the ssh public key for passwordless login

1. Generate an SSH key with PuTTY Key Generator
1. Save the private key, with or without passphrase protection
1. Copy the public key into the file `~/.ssh/authorized_keys`
1. Launch Pageant and add the private key just saved
1. Save a new session and prepend the login name to the hostname (e.g. `hung@<vm-hostname>`)
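
The steps above assume a Windows host with PuTTY. From a Linux or macOS host, the same passwordless login can be set up with OpenSSH (a sketch; the key path `~/.ssh/istio-lab` is arbitrary, and `hung@194.89.64.10` assumes the admin account from 2.1 and the base VM's static IP from 1.3):

```shell
# Generate an ed25519 key pair (drop -N "" to be prompted for a passphrase)
ssh-keygen -t ed25519 -f ~/.ssh/istio-lab -N ""

# Append the public key to the VM's ~/.ssh/authorized_keys
# (ssh-copy-id handles the mkdir/chmod/append steps for you)
ssh-copy-id -i ~/.ssh/istio-lab.pub hung@194.89.64.10

# Log in without a password
ssh -i ~/.ssh/istio-lab hung@194.89.64.10
```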

#### 2.3 Stop sudo from prompting for a password
_*References:*_
[Ask Ubuntu - Execute sudo without password](https://askubuntu.com/questions/147241/execute-sudo-without-password)

1. Run `sudo visudo`
1. Append `hung ALL=(ALL) NOPASSWD: ALL` at the end of the file

#### 2.4 Disable the swap
_*References:*_
[Server Fault - Best way to disable swap in linux](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)

1. This step is necessary to initialize a Kubernetes cluster
1. Run `sudo swapoff -a`
1. Comment out swap setting in `/etc/fstab` to make the permanent change
1. Run `free -h` to check the swap size
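
Steps 2–4 above can be combined into one short snippet (a sketch; the `sed` pattern assumes the swap entry in `/etc/fstab` is a regular line containing the word `swap`, such as a `/swap.img` entry, so review the file afterwards):

```shell
sudo swapoff -a                                 # turn swap off for the running system
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab  # comment out swap entries, keeping a .bak copy
free -h                                         # the Swap row should now show 0B
```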

#### 2.5 Switch the netplan config from dhcp client to static IP
_*References:*_
[How to Assign Static IP Address on Ubuntu 20.04 LTS](https://www.linuxtechi.com/assign-static-ip-address-ubuntu-20-04-lts/)

1. Update the netplan config `/etc/netplan/00-installer-config.yaml`:
```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses: [194.89.64.10/24]   # <= the static IP assigned to this node
      gateway4: 194.89.64.2          # <= the default gateway
      nameservers:
        addresses: [1.1.1.1,8.8.8.8] # <= added as DNS servers in systemd-resolved
  version: 2
```

2. Run the following to apply the change without reboot
```bash
sudo netplan apply
```
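
After applying the change, it is worth confirming that the interface, default route, and DNS servers picked up the static settings (`resolvectl` is the front end for systemd-resolved on Ubuntu 20.04):

```shell
ip -br addr show ens33    # expect 194.89.64.10/24 on the interface
ip route | grep default   # expect "default via 194.89.64.2"
resolvectl status ens33   # expect 1.1.1.1 and 8.8.8.8 listed as DNS servers
```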

#### [take a VM snapshot as checkpoint]

## 3. Install container runtime - Docker Engine
_*References:*_
[Kubernetes - Container runtimes: Docker](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker)
[Docker Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/)

#### 3.1 Install packages to allow apt download packages from HTTPS channel
```bash
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
```

#### 3.2 Add Docker’s official GPG key
```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```

#### 3.3 Add apt repository for Docker's stable release
```bash
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

#### 3.4 Install docker engine
```bash
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```

#### 3.5 Verify the docker engine by running the hello-world image
```bash
sudo docker run hello-world
```

#### 3.6 Update the docker daemon config, in particular to use systemd as the cgroup driver
```bash
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```

## 13. Install Istio Multi-Primary on different networks
_*References:*_
[Istio - Install Multi-Primary on different networks](https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/)

#### 13.1 Config cluster1 as primary
```bash
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF

istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
```

#### 13.2 Install the east-west gateway in cluster1
```bash
samples/multicluster/gen-eastwest-gateway.sh \
--mesh mesh1 --cluster cluster1 --network network1 | \
istioctl --context="${CTX_CLUSTER1}" install -y -f -
```

#### 13.3 Expose services in cluster1
```bash
kubectl --context="${CTX_CLUSTER1}" apply -n istio-system -f \
samples/multicluster/expose-services.yaml
```

#### 13.4 Config cluster2 as primary
```bash
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF

istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
```

#### 13.5 Install the east-west gateway in cluster2
```bash
samples/multicluster/gen-eastwest-gateway.sh \
--mesh mesh1 --cluster cluster2 --network network2 | \
istioctl --context="${CTX_CLUSTER2}" install -y -f -
```

#### 13.6 Expose services in cluster2
```bash
kubectl --context="${CTX_CLUSTER2}" apply -n istio-system -f \
samples/multicluster/expose-services.yaml
```

#### 13.7 Enable Endpoint Discovery
```bash
istioctl x create-remote-secret \
--context="${CTX_CLUSTER1}" \
--name=cluster1 | \
kubectl apply -f - --context="${CTX_CLUSTER2}"

istioctl x create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
```

## 14. Verify the mesh service discovery and cross-cluster traffic
_*References:*_
[Istio - verify installation](https://istio.io/latest/docs/setup/install/multicluster/verify/)
[Istio - Troubleshooting Multicluster](https://istio.io/latest/docs/ops/diagnostic-tools/multicluster/)

#### 14.1 Create the *sample* namespace, *helloworld* service and *sleep* deployment in both clusters
```bash
kubectl create --context="${CTX_CLUSTER1}" namespace sample
kubectl create --context="${CTX_CLUSTER2}" namespace sample

kubectl label --context="${CTX_CLUSTER1}" namespace sample \
istio-injection=enabled
kubectl label --context="${CTX_CLUSTER2}" namespace sample \
istio-injection=enabled

kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample

kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/sleep/sleep.yaml -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/sleep/sleep.yaml -n sample
```

#### 14.2 Deploy Helloworld v1 into cluster1
```bash
kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l version=v1 -n sample
```

#### 14.3 Deploy Helloworld v2 into cluster2
```bash
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l version=v2 -n sample
```

#### 14.4 Test the *helloworld* service in cluster1

When you run it repeatedly, you should get responses from both v1 (running in cluster1) and v2 (running in cluster2)
```bash
kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
```

#### 14.5 And do the same for cluster2
```bash
kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
```

## 15. Check the istio-proxy sidecar config
```bash
# List the available contexts for both clusters
kubectl config get-contexts

# Cluster1: check the proxy status, the sample pod IPs, and the remote east-west gateway service
istioctl --context admin@cluster1 ps
kubectl --context admin@cluster1 get pod --namespace sample --output wide
kubectl --context admin@cluster2 get service --namespace istio-system
# Endpoints for helloworld as seen by cluster1's sleep sidecar (replace the pod name with your own)
istioctl --context admin@cluster1 pc ep sleep-557747455f-4c7vz.sample --cluster="outbound|5000||helloworld.sample.svc.cluster.local"

# Cluster2: the same checks in the other direction
istioctl --context admin@cluster2 ps
kubectl --context admin@cluster2 get pod --namespace sample --output wide
kubectl --context admin@cluster1 get service --namespace istio-system
istioctl --context admin@cluster2 pc ep sleep-557747455f-jznfb.sample --cluster="outbound|5000||helloworld.sample.svc.cluster.local"
```