# terraform-rke-vsphere-cloud-provider-example

Dynamically provision Persistent Volumes using the vSphere Cloud Provider in a RKE cluster.

Repository: https://github.com/rgl/terraform-rke-vsphere-cloud-provider-example
# About
This shows how to dynamically provision Persistent Volumes using the [vSphere Cloud Provider](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/), [terraform](https://www.terraform.io/), and [the rke provider](https://registry.terraform.io/providers/rancher/rke) in a single-node Kubernetes instance.
The Persistent Volume will be dynamically created inside a vSphere Datastore from a Persistent Volume Claim.
The Persistent Volume will be dynamically attached to (and detached from) the Virtual Machine as a SCSI disk.
**NB** There's a big caveat with terraform and vSphere Persistent Volumes: PVs are attached as Virtual Machine Disks, and when you run `terraform plan` again, they will appear as having been modified outside of terraform's control; `terraform` is therefore [configured to ignore disk changes](https://github.com/hashicorp/terraform-provider-vsphere/issues/1028) to prevent it from modifying the VM configuration.
**NB** This uses the deprecated vSphere Cloud Provider driver (that's what RKE uses out-of-the-box). Newer installations should probably use the [vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/).
**NB** This uses a VMFS Datastore. It does not use vSAN.
## Usage (Ubuntu 20.04 host)
Install the [Ubuntu 20.04 VM template](https://github.com/rgl/ubuntu-vagrant).
Install `terraform`, `govc` and `kubectl`:
```bash
# install terraform.
wget https://releases.hashicorp.com/terraform/1.0.4/terraform_1.0.4_linux_amd64.zip
unzip terraform_1.0.4_linux_amd64.zip
sudo install terraform /usr/local/bin
rm terraform terraform_*_linux_amd64.zip
# install govc.
wget https://github.com/vmware/govmomi/releases/download/v0.26.0/govc_Linux_x86_64.tar.gz
tar xf govc_Linux_x86_64.tar.gz govc
sudo install govc /usr/local/bin/govc
rm govc govc_Linux_x86_64.tar.gz
# install kubectl.
kubectl_version='1.20.8'
wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo tee /usr/share/keyrings/kubernetes-archive-keyring.gpg >/dev/null
echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list >/dev/null
sudo apt-get update
kubectl_package_version="$(apt-cache madison kubectl | awk "/$kubectl_version-/{print \$3}")"
sudo apt-get install -y "kubectl=$kubectl_package_version"
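# verify the installed tools.
terraform version
govc version
kubectl version --client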
```

Save your environment details as a script that sets the terraform variables from environment variables, e.g.:
```bash
cat >secrets.sh <<'EOF'
export TF_VAR_vsphere_user='administrator@vsphere.local'
export TF_VAR_vsphere_password='password'
export TF_VAR_vsphere_server='vsphere.local'
export TF_VAR_vsphere_datacenter='Datacenter'
export TF_VAR_vsphere_compute_cluster='Cluster'
export TF_VAR_vsphere_datastore='Datastore'
export TF_VAR_vsphere_network='VM Network'
export TF_VAR_prefix='rke_example'
export TF_VAR_vsphere_folder="examples/$TF_VAR_prefix"
export TF_VAR_vsphere_ubuntu_template='vagrant-templates/ubuntu-20.04-amd64-vsphere'
export TF_VAR_controller_count='1'
export TF_VAR_worker_count='1'
export GOVC_INSECURE='1'
export GOVC_URL="https://$TF_VAR_vsphere_server/sdk"
export GOVC_USERNAME="$TF_VAR_vsphere_user"
export GOVC_PASSWORD="$TF_VAR_vsphere_password"
EOF
```

**NB** You could also add these variable definitions to the `terraform.tfvars` file, but I find the environment variables more versatile, as they can also be used by other tools, like govc.
Launch this example:
```bash
source secrets.sh
# see https://github.com/vmware/govmomi/blob/master/govc/USAGE.md
govc version
govc about
govc datacenter.info # list datacenters
govc find # find all managed objects
rm -f *.log kubeconfig.yaml
terraform init
terraform plan -out=tfplan
time terraform apply tfplan
# do another plan and verify that it reports no changes.
terraform plan -out=tfplan
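# alternatively, make that check scriptable: with -detailed-exitcode, terraform
# plan exits with 0 when there are no pending changes (and 2 when there are).
terraform plan -detailed-exitcode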
```

Test accessing the cluster:
```bash
terraform output --raw rke_state >rke_state.json # might be useful for troubleshooting.
terraform output --raw kubeconfig >kubeconfig.yaml
export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get nodes -o wide
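# sanity-check that the vSphere Cloud Provider is active: each node should
# have a vsphere:// providerID set in its spec.
kubectl get nodes -o json | jq -r '.items[].spec.providerID'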
```

Test creating a persistent workload:
```bash
# create a test StorageClass backed by the (in-tree) vsphere-volume provisioner.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
EOF
# create a test PersistentVolumeClaim.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: test
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# create the test pod manifest.
cat >test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: web
  volumes:
    - name: web
      persistentVolumeClaim:
        claimName: test
EOF
kubectl apply -f test-pod.yaml
# see the pod status and wait for it to be Running.
kubectl get pods -o wide
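# show the PVC and the dynamically provisioned PV; the PV should reference a
# vmdk inside the datastore kubevols folder.
kubectl get pvc test
kubectl describe pv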
# show the current VM disks. you should see the sdb disk device and the
# corresponding disk UUID. sdb is the backing device of the pod volume.
ssh "vagrant@$(kubectl get pods test -o json | jq -r .status.hostIP)" \
-- lsblk -o KNAME,SIZE,TRAN,FSTYPE,UUID,LABEL,MODEL,SERIAL
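# the same extra disk is also visible from the vSphere side. NB the VM name
# pattern is an assumption; locate your VM with govc find.
govc device.ls -vm "$(govc find / -type m -name "${TF_VAR_prefix}*" | head -1)"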
# enter the pod and check the mount volume.
kubectl exec -it test -- /bin/bash
# show the mounts. you should see sdb mounted at /usr/share/nginx/html.
mount | grep nginx
# create the index.html file.
cat >/usr/share/nginx/html/index.html <<'EOF'
This is served from a Persistent Volume!
EOF
# check whether nginx is returning the expected html.
curl localhost
# exit the pod.
exit
# delete the test pod.
# NB this will trigger the removal of the PV volume from the VM.
kubectl delete pod/test
# list the PVs and check that the PV was not deleted.
kubectl get pv
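# the PV survives the pod deletion because pvc/test still holds it; deleting
# the PVC is what would trigger the actual volume removal.
kubectl get pvc test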
# create a new pod instance.
kubectl apply -f test-pod.yaml
# see the pod status and wait for it to be Running.
kubectl get pods -o wide
# check whether nginx is returning the expected html.
kubectl exec -it test -- curl localhost
```

Destroy everything:
```bash
time terraform destroy --auto-approve
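# NB any leftover PV vmdks live in the datastore kubevols folder (see below);
# you can list them with govc and then delete them manually, e.g.:
govc datastore.ls -ds "$TF_VAR_vsphere_datastore" kubevols
#govc datastore.rm -ds "$TF_VAR_vsphere_datastore" kubevols/example.vmdk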
```

**NB** This will not delete the VMDKs that were used to store the Persistent Volumes. You have to manually delete them from the datastore `kubevols` folder (which is a PITA), or, before `terraform destroy`, manually delete the PVC with `kubectl delete pvc/test`.