# Red Hat OpenShift Day-2 Operations
⚠️ WIP
- [Red Hat OpenShift Day-2 Operations](#red-hat-openshift-day-2-operations)
- [OpenShift Identity Providers](#openshift-identity-providers)
- [Configure `htpasswd`](#configure-htpasswd)
- [Updating User](#updating-user)
- [Configure RBAC Permissions](#configure-rbac-permissions)
- [Node Configurations](#node-configurations)
  - [Make a Control-Plane Node `schedulable`](#make-a-control-plane-node-schedulable)
- [Troubleshooting](#troubleshooting)
- [Gathering Logs](#gathering-logs)
- [Nested Virtualization](#nested-virtualization)
- [Replacing the default Ingress Certificate](#replacing-the-default-ingress-certificate)
- [OpenShift Web Console Customizations](#openshift-web-console-customizations)
- [Customizing the Web Console in OpenShift Container Platform](#customizing-the-web-console-in-openshift-container-platform)
- [Customizing the Login/Provider Page](#customizing-the-loginprovider-page)
- [Registry Authentication](#registry-authentication)
- [Quick NFS Storage](#quick-nfs-storage)
- [Install the NFS Server](#install-the-nfs-server)
- [OpenShift NFS Provisioner Template](#openshift-nfs-provisioner-template)
- [Deploying a Test-workload](#deploying-a-test-workload)
- [USB Client Passthrough](#usb-client-passthrough)
- [Adjust the VM Configuration (specs)](#adjust-the-vm-configuration-specs)
- [Identify USB Vendor and Product ID](#identify-usb-vendor-and-product-id)
- [Connect to your VM using `virtctl`](#connect-to-your-vm-using-virtctl)
  - [Start redirecting the USB Device](#start-redirecting-the-usb-device)

## OpenShift Identity Providers
### Configure `htpasswd`
[Docs: Configuring an htpasswd identity provider](https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/configuring-identity-providers#configuring-htpasswd-identity-provider)
Step 1: Create an htpasswd file to store the user and password information:
`htpasswd -c -B -b users.htpasswd rguske '<password>'`
Add a new user to the file:
`htpasswd -bB users.htpasswd rbohne 'r3dh4t1!'`
`htpasswd -bB users.htpasswd devuser 'r3dh4t1!'`
Remove an existing user:
`htpasswd -D users.htpasswd <username>`
Step 2: Create a Kubernetes secret:
`oc create secret generic htpass-secret-rguske --from-file=htpasswd=users.htpasswd -n openshift-config`
`oc create secret generic htpass-secret-devuser --from-file=htpasswd=users.htpasswd -n openshift-config`
Replacing an updated `users.htpasswd` file:
`oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -`
This can also be done using the OpenShift User Interface:

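For the cluster to actually use the secret, reference it from the `OAuth` cluster resource as an identity provider (per the linked docs; the provider name is illustrative):
```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
```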
### Updating User
`oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd`
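After editing the extracted file, write it back to the secret; a sketch combining the commands above:
```code
# extract the current htpasswd data
oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd
# update a password (example user from above)
htpasswd -bB users.htpasswd devuser 'n3wP4ssw0rd!'
# replace the secret with the updated file
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -
```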
### Configure RBAC Permissions
[Docs: Using RBAC to define and apply permissions](https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/using-rbac#authorization-overview_using-rbac)

Add cluster-wide admin privileges to, e.g., user rguske:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rguske-cluster-admin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: rguske
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```

Alternatively, via the Web UI:

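The same binding can also be created with a single `oc adm` command:
```code
oc adm policy add-cluster-role-to-user cluster-admin rguske
```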
## Node Configurations
### Make a Control-Plane Node `schedulable`
Red Hat KB6148012 - [How to schedule pod on master node where scheduling is disabled?](https://access.redhat.com/solutions/6148012)
```code
oc get scheduler cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: "2025-01-28T15:20:20Z"
  generation: 1
  name: cluster
  resourceVersion: "542"
  uid: 59f6fef1-e88a-484a-8e3c-fa38e6e300b3
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
```

Edit the `scheduler` CR and configure the spec: `mastersSchedulable: true`
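Instead of editing the CR interactively, the same change can be applied with a patch:
```code
oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":true}}'
```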
```code
oc get nodes
NAME                  STATUS   ROLES                         AGE     VERSION
ocp1-h5ggj-master-0   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-master-1   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-master-2   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-worker-0   Ready    worker                        2d18h   v1.30.6
ocp1-h5ggj-worker-1   Ready    worker                        2d18h   v1.30.6
```

## Troubleshooting
### Gathering Logs
[Creating must-gather with more details for specific components in OCP 4](https://access.redhat.com/solutions/5459251)

Collect the audit logs:
`oc adm must-gather -- /usr/bin/gather_audit_logs`
Default must-gather including the audit logs:
`oc adm must-gather -- '/usr/bin/gather && /usr/bin/gather_audit_logs'`
OpenShift Virtualization (OCP-V):
`oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel[8,9]:[operator_version]`
Replace `[8,9]` based on the OCP version: OCP 4.12 uses rhel8; OCP 4.13 and later use rhel9.
The `[operator_version]` tag should be in the format v4.y.z. Example for 4.17: `oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.4`
```code
oc adm must-gather \
--image-stream=openshift/must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.4 \
--image=registry.redhat.io/workload-availability/node-healthcheck-must-gather-rhel9:v0.9.0
```

[How to generate a sosreport within nodes without SSH in OCP 4](https://access.redhat.com/solutions/4387261)
```code
oc get nodes
NAME                  STATUS   ROLES                         AGE     VERSION
ocp1-h5ggj-master-0   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-master-1   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-master-2   Ready    control-plane,master,worker   2d19h   v1.30.6
ocp1-h5ggj-worker-0   Ready    worker                        2d18h   v1.30.6
ocp1-h5ggj-worker-1   Ready    worker                        2d18h   v1.30.6
```

Then, create a debug session with `oc debug node/<node_name>` (in this example, `oc debug node/ocp1-h5ggj-master-0`). The debug session will spawn a pod using the tools image from the release (which doesn't contain `sos`):
```code
oc debug node/ocp1-h5ggj-master-0
```

```code
chroot /host bash
[root@ocp1-h5ggj-master-0 /]# cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.17
```

```code
$ toolbox
Trying to pull registry.redhat.io/rhel9/support-tools:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob facf1e7dd3e0 done |
Copying blob a0e56de801f5 done |
Copying blob ec465ce79861 done |
Copying blob cbea42b25984 done |
Copying config a627accb68 done |
Writing manifest to image destination
Storing signatures
a627accb682adb407580be0d7d707afbcb90abf2f407a0b0519bacafa15dd409
Spawning a container 'toolbox-root' with image 'registry.redhat.io/rhel9/support-tools'
Detected RUN label in the container image. Using that as the default...
ebf4dd2b82bf8ebeab55291c8ca195b61e13c9fc5d8dfb095f5fdcbcdabae2df
toolbox-root
Container started successfully. To exit, type 'exit'.
```

Inside the toolbox container, generate the sosreport:
`sosreport -e openshift -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on --all-logs`
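The archive is written to the host (under `/var/tmp` when generated via toolbox). A quick, hedged way to locate it afterwards, assuming the node name from above:
```code
# run from your workstation; lists the generated sosreport archives on the node
oc debug node/ocp1-h5ggj-master-0 -- chroot /host ls -lh /var/tmp/
```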
## Nested Virtualization
[How to set the CPU model to Passthrough in OpenShift Virtualization?](https://access.redhat.com/solutions/7069612)
Set the VM's CPU model to `host-passthrough` so the guest sees the host CPU's virtualization extensions. A minimal sketch (the VM name and sizing are illustrative):
```yaml
oc create -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: nested-virt-vm
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          model: host-passthrough
        devices: {}
        memory:
          guest: 2Gi
EOF
```
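Once the guest is up, verify that the virtualization extensions are exposed:
```code
# inside the guest: vmx (Intel) or svm (AMD) indicates nested virtualization support
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```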
## Replacing the default Ingress Certificate

Prerequisites for the certificate files:
- The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed by any intermediate certificates, and the file should end with the root CA certificate.
- Copy the root CA certificate into an additional PEM format file.
- Verify that all certificates which include `-----END CERTIFICATE-----` also end with one carriage return after that line.

Create a config map that includes only the root CA certificate used to sign the wildcard certificate:
```yaml
oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    ...root CA certificate...
    -----END CERTIFICATE-----
EOF

# Hedged follow-up, per the product docs (secret and file names are illustrative):
# trust the CA cluster-wide, then replace the default ingress certificate.
oc patch proxy/cluster --type=merge --patch '{"spec":{"trustedCA":{"name":"custom-ca"}}}'
oc create secret tls custom-ingress-cert --cert=wildcard.crt --key=wildcard.key -n openshift-ingress
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'
```

## OpenShift Web Console Customizations

### Customizing the Login/Provider Page

Create a new login page template: `oc adm create-login-template > login.html`. Alternatively, adjust the existing `login.html` and/or `providers.html`.
Export the existing `login.html` and `providers.html`:
`POD=$(oc get pods -n openshift-authentication -o name | head -n 1)`
`oc exec -n openshift-authentication "$POD" -- cat /var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html > login.html`
`oc exec -n openshift-authentication "$POD" -- cat /var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html > providers.html`
Choose an image which you'd like to use for the replacement and encode the image into `base64`. [Base64 Guru](https://base64.guru/converter/encode/image) helps.
Replace the base64 value in the `login.html`. Search for `background-image:url(data:image/`, pay attention to the file format (png, svg, jpg), adjust it if necessary and replace the base64 value of the image.
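To find the spot to replace, something like the following works (a convenience sketch, not part of the official procedure):
```code
# prints the embedded image type(s) referenced in the login page
grep -o 'background-image:url(data:image/[a-z]*' login.html
```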
Create the secrets:
```code
oc -n openshift-config get secret
NAME                                      TYPE                             DATA   AGE
etcd-client                               kubernetes.io/tls                2      8d
htpasswd-dm9mt                            Opaque                           1      6d1h
initial-service-account-private-key       Opaque                           1      8d
pull-secret                               kubernetes.io/dockerconfigjson   1      8d
webhook-authentication-integrated-oauth   Opaque                           1      8d
```

`oc create secret generic login-template --from-file=login.html -n openshift-config`
`oc create secret generic providers-template --from-file=providers.html -n openshift-config`
Edit the `oauth` CR:
`oc edit oauths cluster`
```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
# ...
spec:
  templates:
    error:
      name: error-template
    login:
      name: login-template
    providerSelection:
      name: providers-template
```

After editing the CR, the pods within the `openshift-authentication` namespace will be redeployed.
```code
oc -n openshift-authentication get pods -w
NAME                               READY   STATUS              RESTARTS   AGE
oauth-openshift-8c7859b9f-fwsnl    1/1     Running             0          6m55s
oauth-openshift-8c7859b9f-kp8rw    1/1     Running             0          7m53s
oauth-openshift-8c7859b9f-qw7wl    1/1     Running             0          7m25s
oauth-openshift-8c7859b9f-kp8rw    1/1     Terminating         0          8m42s
oauth-openshift-664fbb9d49-r5bzk   0/1     Pending             0          0s
oauth-openshift-664fbb9d49-r5bzk   0/1     Pending             0          0s
oauth-openshift-8c7859b9f-kp8rw    0/1     Terminating         0          9m8s
oauth-openshift-664fbb9d49-r5bzk   0/1     Pending             0          26s
oauth-openshift-664fbb9d49-r5bzk   0/1     Pending             0          26s
oauth-openshift-664fbb9d49-r5bzk   0/1     ContainerCreating   0          26s
oauth-openshift-8c7859b9f-kp8rw    0/1     Terminating         0          9m8s
oauth-openshift-8c7859b9f-kp8rw    0/1     Terminating         0          9m8s
oauth-openshift-664fbb9d49-r5bzk   0/1     ContainerCreating   0          27s
oauth-openshift-664fbb9d49-r5bzk   0/1     Running             0          27s
oauth-openshift-664fbb9d49-r5bzk   1/1     Running             0          28s
```
## Registry Authentication
```code
oc create secret docker-registry docker-hub \
  --docker-server=docker.io \
  --docker-username=<username> \
  --docker-password='<password>' \
  --docker-email='<email>'
```

Link the secret to the `default` service account for image pulls:

`oc secrets link default docker-hub --for=pull`
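Instead of linking the secret to the service account, individual workloads can reference it directly (standard Kubernetes, shown here with a hypothetical pod):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  imagePullSecrets:
  - name: docker-hub
  containers:
  - name: app
    image: docker.io/<namespace>/<image>:<tag>
```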
## Quick NFS Storage
### Install the NFS Server
It can be handy to have an NFS backend storage for an OpenShift cluster available quickly. The following instructions guide you through the installation of an NFS server on a RHEL bastion host.
Install the NFS package and activate the service:
```code
dnf install nfs-utils -y
systemctl enable nfs-server.service
systemctl start nfs-server.service
systemctl status nfs-server.service
```

Create the directory in which the Persistent Volumes will be stored:
```code
mkdir /srv/nfs-storage-pv-user-pvs
chmod g+w /srv/nfs-storage-pv-user-pvs
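# Hedged: many guides also relax ownership so the provisioner can write
# (adjust to your security requirements)
chown -R nobody:nobody /srv/nfs-storage-pv-user-pvs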
```

Configure the folder as well as the network CIDR for the systems which will access the NFS server:
```code
vi /etc/exports
/srv/nfs-storage-pv-user-pvs 10.198.15.0/24(rw,sync,no_root_squash)
systemctl restart nfs-server
exportfs -arv
exportfs -s
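# Hedged check: confirm the export is visible (showmount ships with nfs-utils)
showmount -e localhost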
```

Configure the firewall on the RHEL host accordingly:
```
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload
```

### OpenShift NFS Provisioner Template
We need an NFS provisioner in order to consume the NFS service. Create the following OpenShift template and make sure to adjust the IP address as well as the path to the NFS folder at the end of the file:
Example:
```yaml
- name: NFS_SERVER
  required: true
  value: xxx.xxx.xxx.xxx ## IP of the host which runs the NFS server
- name: NFS_PATH
  required: true
  value: /srv/nfs-storage-pv-user-pvs ## folder which was configured on the NFS server
```

Create the template:
```yaml
tee nfs-provisioner-template.yaml > /dev/null <<'EOF'
apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: nfs-client-provisioner
message: 'NFS storage class ${STORAGE_CLASS} created.'
metadata:
  annotations:
    description: nfs-client-provisioner
    openshift.io/display-name: nfs-client-provisioner
    openshift.io/provider-display-name: Tiger Team
    tags: infra,nfs
    template.openshift.io/documentation-url: nfs-client-provisioner
    template.openshift.io/long-description: nfs-client-provisioner
    version: 0.0.1
  name: nfs-client-provisioner
objects:
- kind: Namespace
  apiVersion: v1
  metadata:
    name: ${TARGET_NAMESPACE}
- kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: nfs-client-provisioner
    namespace: ${TARGET_NAMESPACE}
- kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
- kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: ${TARGET_NAMESPACE}
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
- kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner
    namespace: ${TARGET_NAMESPACE}
  rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["security.openshift.io"]
    resourceNames: ["hostmount-anyuid"]
    resources: ["securitycontextconstraints"]
    verbs: ["use"]
- kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner
    namespace: ${TARGET_NAMESPACE}
  subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
  roleRef:
    kind: Role
    name: nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io
- kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: nfs-client-provisioner
    namespace: ${TARGET_NAMESPACE}
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nfs-client-provisioner
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
        - name: nfs-client-provisioner
          image: ${PROVISIONER_IMAGE}
          volumeMounts:
          - name: nfs-client-root
            mountPath: /persistentvolumes
          env:
          - name: PROVISIONER_NAME
            value: ${PROVISIONER_NAME}
          - name: NFS_SERVER
            value: ${NFS_SERVER}
          - name: NFS_PATH
            value: ${NFS_PATH}
        volumes:
        - name: nfs-client-root
          nfs:
            server: ${NFS_SERVER}
            path: ${NFS_PATH}
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: managed-nfs-storage
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: ${PROVISIONER_NAME}
  parameters:
    archiveOnDelete: "false"
parameters:
- description: Target namespace where nfs-client-provisioner will run.
  displayName: Target namespace
  name: TARGET_NAMESPACE
  required: true
  value: openshift-nfs-provisioner
- name: NFS_SERVER
  required: true
  value: xxx.xxx.xxx.xxx ## IP of the host which runs the NFS server
- name: NFS_PATH
  required: true
  value: /srv/nfs-storage-pv-user-pvs ## folder which was configured on the NFS server
- name: PROVISIONER_IMAGE
  value: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
- name: PROVISIONER_NAME
  value: "nfs-client-provisioner"
EOF
```

Deploy the template: `oc process -f nfs-provisioner-template.yaml | oc apply -f -`
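Afterwards, the provisioner pod should be running and the new storage class should be the cluster default (assuming the default `TARGET_NAMESPACE`):
```code
oc -n openshift-nfs-provisioner get pods
oc get storageclass
```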
### Deploying a Test-workload
A hedged sketch of a test claim against the new default storage class (the namespace and claim name are illustrative):
```yaml
oc -n test1 create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```

## USB Client Passthrough

### Adjust the VM Configuration (specs)
To allow USB redirection, the VirtualMachine spec must include `clientPassthrough: {}` under `spec.template.spec.domain.devices` (per the upstream KubeVirt feature).

> There are two ways of redirecting the same USB devices: Either using its device's vendor and product information or the actual bus and device address information. In Linux, you can gather this info with lsusb, a redacted example below:

### Identify USB Vendor and Product ID
Connect a USB device, e.g. an external CD-ROM drive. I've connected it to my MacBook, installed `lsusb` via `brew`, and checked for the Vendor ID and Product ID.
```shell
lsusb
[...]
Bus 002 Device 001: ID 0e8d:1806 MediaTek Inc. MT1806 Serial: R8RY6GAC60008Y
[...]
```

### Connect to your VM using `virtctl`
Connect to your VM running on OpenShift Virtualization.
```shell
virtctl console rguske-rhel9
Successfully connected to rguske-rhel9 console. The escape sequence is ^]

rguske-rhel9 login:
[cloud-user@rguske-rhel9 ~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
```

### Start redirecting the USB Device
On your local machine, install `virtctl` and `usbredir`. I've installed both using `brew`.
```shell
sudo virtctl usbredir 0e8d:1806 rguske-rhel9
{"component":"portforward","level":"info","msg":"port_arg: '127.0.0.1:49275'","pos":"client.go:166","timestamp":"2025-03-26T10:19:43.292294Z"}
{"component":"portforward","level":"info","msg":"args: '[--device 0e8d:1806 --to 127.0.0.1:49275]'","pos":"client.go:167","timestamp":"2025-03-26T10:19:43.293541Z"}
{"component":"portforward","level":"info","msg":"Executing commandline: 'usbredirect [--device 0e8d:1806 --to 127.0.0.1:49275]'","pos":"client.go:168","timestamp":"2025-03-26T10:19:43.293591Z"}
{"component":"portforward","level":"info","msg":"Connected to usbredirect at 610.549083ms","pos":"client.go:132","timestamp":"2025-03-26T10:19:43.903058Z"}
```

The output will show the redirection to your Virtual Machine.
On your target VM, you'll notice:
```shell
[151999.488527] usb 1-1: new high-speed USB device number 9 using xhci_hcd
[152000.279607] usb 1-1: New USB device found, idVendor=0e8d, idProduct=1806, bcdDevice= 0.00
[152000.280126] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[152000.280490] usb 1-1: Product: MT1806
[152000.280786] usb 1-1: Manufacturer: MediaTek Inc
[152000.281075] usb 1-1: SerialNumber: R8RY6GAC60008Y
[152000.548218] usb-storage 1-1:1.0: USB Mass Storage device detected
[152000.551594] scsi host7: usb-storage 1-1:1.0
[152001.907628] scsi 7:0:0:0: CD-ROM ASUS SDRW-08D3S-U F201 PQ: 0 ANSI: 0
[152002.595801] sr 7:0:0:0: [sr0] scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
[152003.026401] sr 7:0:0:0: Attached scsi generic sg0 type 5
```

Using `lsusb` will show the connected device:
```shell
[cloud-user@rguske-rhel9 ~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 009: ID 0e8d:1806 MediaTek Inc. Samsung SE-208 Slim Portable DVD Writer
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
```