{"id":18637293,"url":"https://github.com/openshift/lvm-operator","last_synced_at":"2025-04-11T09:32:43.219Z","repository":{"id":36970137,"uuid":"433787085","full_name":"openshift/lvm-operator","owner":"openshift","description":"The LVM Operator deploys and manages LVM storage on OpenShift clusters","archived":false,"fork":false,"pushed_at":"2024-05-21T20:16:37.000Z","size":25454,"stargazers_count":41,"open_issues_count":4,"forks_count":35,"subscribers_count":18,"default_branch":"main","last_synced_at":"2024-05-22T12:55:30.019Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openshift.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"docs/security.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-12-01T10:47:13.000Z","updated_at":"2024-05-29T23:20:13.404Z","dependencies_parsed_at":"2022-06-28T23:34:51.114Z","dependency_job_id":"c0d1e70d-5a08-40de-a832-7f1b488b9941","html_url":"https://github.com/openshift/lvm-operator","commit_stats":null,"previous_names":["red-hat-storage/lvm-operator"],"tags_count":21,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Flvm-operator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Flvm-operator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Flvm-operator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Flvm-operator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openshift","download_url":"https://codeload.github.com/openshift/lvm-operator/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248368323,"owners_count":21092335,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T05:34:58.089Z","updated_at":"2025-04-11T09:32:38.201Z","avatar_url":"https://github.com/openshift.png","language":"Go","readme":"# The LVM Operator - part of LVMS\n\n## [Official LVMS Product Documentation](https://docs.openshift.com/container-platform/latest/storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.html)\n\nFor the latest information about usage and installation of LVMS (Logical Volume Manager Storage) in OpenShift, please use the official product documentation linked above.\n\n## Overview\n\nUse the LVM Operator with `LVMCluster` custom resources to deploy and manage LVM storage on OpenShift clusters.\n\nThe LVM Operator leverages the [TopoLVM CSI Driver](https://github.com/topolvm/topolvm) on the backend to dynamically create LVM physical volumes, 
volume groups, and logical volumes, and binds them to `PersistentVolumeClaim` resources.
This allows applications running on the cluster to consume storage from LVM logical volumes backed by the TopoLVM CSI Driver.

The LVM Operator, together with the TopoLVM CSI Driver, the Volume Group Manager, and other related components, makes up the Logical Volume Manager Storage (LVMS) solution.

Here is a brief overview of how the Operator works. See [here](docs/design/architecture.md) for the architecture diagram.

```mermaid
graph LR
LVMOperator((LVMOperator))-->|Manages| LVMCluster
LVMOperator-->|Manages| StorageClass
StorageClass-->|Creates| PersistentVolumeA
StorageClass-->|Creates| PersistentVolumeB
PersistentVolumeA-->LV1
PersistentVolumeB-->LV2
LVMCluster-->|Comprised of|Disk1((Disk1))
LVMCluster-->|Comprised of|Disk2((Disk2))
LVMCluster-->|Comprised of|Disk3((Disk3))

subgraph Logical Volume Manager
  Disk1-->|Abstracted|PV1
  Disk2-->|Abstracted|PV2
  Disk3-->|Abstracted|PV3
  PV1-->VG
  PV2-->VG
  PV3-->VG
  LV1-->VG
  LV2-->VG
end
```

- [Deploying the LVM Operator](#deploying-the-lvm-operator)
    * [Using the pre-built images](#using-the-pre-built-images)
    * [Building the Operator yourself](#building-the-operator-yourself)
    * [Deploying the Operator](#deploying-the-operator)
    * [Inspecting the storage objects on the node](#inspecting-the-storage-objects-on-the-node)
    * [Testing the Operator](#testing-the-operator)
- [Cleanup](#cleanup)
- [Metrics](#metrics)
- [Known Limitations](#known-limitations)
    * [Dynamic Device Discovery](#dynamic-device-discovery)
    * [Unsupported Device Types](#unsupported-device-types)
    * [Single LVMCluster support](#single-lvmcluster-support)
    * [Upgrades from v4.10 and v4.11](#upgrades-from-v410-and-v411)
    * [Missing native LVM RAID Configuration support](#missing-native-lvm-raid-configuration-support)
    * [Missing LV-level encryption support](#missing-lv-level-encryption-support)
    * [Snapshotting and Cloning in Multi-Node Topologies](#snapshotting-and-cloning-in-multi-node-topologies)
    * [Validation of `LVMCluster` CRs outside the `openshift-storage` namespace](#validation-of-lvmcluster-crs-outside-the-openshift-storage-namespace)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)

## Deploying the LVM Operator

There is no CI pipeline that publishes builds of this repository, so you will need to either build the Operator yourself or use a pre-built image. Note that the pre-built image may not be in sync with the current state of the repository.

### Using the pre-built images

If you are comfortable using the pre-built images, simply proceed with the [deployment steps](#deploying-the-operator).

### Building the Operator yourself

To build the Operator, install Docker or Podman and log into your registry.

1. Set the following environment variables to the repository where you want to host your image:

    ```bash
    $ export IMAGE_REGISTRY=<quay/docker etc>
    $ export REGISTRY_NAMESPACE=<registry-username>
    $ export IMAGE_TAG=<some-tag>
    ```

2. Build and push the container image:

    ```bash
    $ make docker-build docker-push
    ```
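For example, with a personal quay.io namespace the variables could be set as follows before running the build. The values are purely illustrative; the Makefile derives the image reference to build and push from them:

```bash
# Hypothetical example values -- substitute your own registry, namespace, and tag
$ export IMAGE_REGISTRY=quay.io
$ export REGISTRY_NAMESPACE=my-namespace
$ export IMAGE_TAG=dev
$ make docker-build docker-push
```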
<details><summary><strong>Building the Operator for OLM deployment</strong></summary>
<p>

If you intend to deploy the Operator using the Operator Lifecycle Manager (OLM), there are some additional steps you should follow.

1. Build and push the bundle image:

    ```bash
    $ make bundle-build bundle-push
    ```

2. Build and push the catalog image:

    ```bash
    $ make catalog-build catalog-push
    ```

</p>
</details>

Ensure that the OpenShift cluster has read access to that repository. Once this is complete, you are ready to proceed with the next steps.

### Deploying the Operator

You can begin the deployment by running the following command:

```bash
$ make deploy
```

<details><summary><strong>Deploying the Operator with OLM</strong></summary>
<p>

You can begin the deployment using the Operator Lifecycle Manager (OLM) by running the following command:

```bash
$ make deploy-with-olm
```

The process involves the creation of several resources to deploy the Operator using OLM. These include a custom `CatalogSource` to define the Operator source, the `openshift-storage` namespace to contain the Operator components, an `OperatorGroup` to manage the lifecycle of the Operator, a `Subscription` to subscribe to the Operator catalog in the `openshift-storage` namespace, and finally, the creation of a `ClusterServiceVersion` to describe the Operator's capabilities and requirements.

Wait until the `ClusterServiceVersion` (CSV) reaches the `Succeeded` status:

```bash
$ kubectl get csv -n openshift-storage

NAME                   DISPLAY       VERSION   REPLACES   PHASE
lvms-operator.v0.0.1   LVM Storage   0.0.1                Succeeded
```

</p>
</details>

After the previous command has completed successfully, switch over to the `openshift-storage` namespace:

```bash
$ oc project openshift-storage
```

Wait until all pods have started running:

```bash
$ oc get pods -w
```

Once all pods are running, create a sample `LVMCluster` custom resource (CR):

```bash
$ oc create -n openshift-storage -f https://github.com/openshift/lvm-operator/raw/main/config/samples/lvm_v1alpha1_lvmcluster.yaml
```

After the CR is deployed, the following actions are executed:

- A Logical Volume Manager (LVM) volume group named `vg1` is created, utilizing all available disks on the cluster.
- A thin pool named `thin-pool-1` is created within `vg1`, with a size equivalent to 90% of `vg1`.
- The TopoLVM Container Storage Interface (CSI) plugin is deployed.
- A storage class and a volume snapshot class are created, both named `lvms-vg1`. This facilitates storage provisioning for OpenShift workloads. The storage class is configured with the `WaitForFirstConsumer` volume binding mode, which is used in multi-node configurations to optimize pod scheduling: pods are preferentially allocated to nodes with the greatest amount of available storage capacity.
- The LVMS system also creates two additional internal CRs to support its functionality:
  * `LVMVolumeGroup` is generated and managed by LVMS to monitor the individual volume groups across multiple nodes in the cluster.
  * `LVMVolumeGroupNodeStatus` is created by the [Volume Group Manager](docs/design/vg-manager.md). This CR is used to monitor the status of volume groups on individual nodes in the cluster.
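For reference, the sample CR referenced above defines a single device class backed by a thin pool. A minimal sketch of such an `LVMCluster` is shown below; field values are illustrative, and the manifest in `config/samples/` remains the authoritative version:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      # One device class that claims unused, supported disks on each node
      - name: vg1
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```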
Wait until the `LVMCluster` reaches the `Ready` status:

```bash
$ oc get lvmclusters.lvm.topolvm.io my-lvmcluster

NAME            STATUS
my-lvmcluster   Ready
```

Wait until all pods are active:

```bash
$ oc get pods -w
```

Once all the pods are running, LVMS is ready to manage your logical volumes and make them available for use in your applications.

### Inspecting the storage objects on the node

Prior to the deployment of the Logical Volume Manager Storage (LVMS), there are no pre-existing LVM physical volumes, volume groups, or logical volumes associated with the disks.

```bash
sh-4.4# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb       8:16   0 893.8G  0 disk
|-sdb1    8:17   0     1M  0 part
|-sdb2    8:18   0   127M  0 part
|-sdb3    8:19   0   384M  0 part /boot
`-sdb4    8:20   0 893.3G  0 part /sysroot
sr0      11:0    1   987M  0 rom
nvme0n1 259:0    0   1.5T  0 disk
nvme1n1 259:1    0   1.5T  0 disk
nvme2n1 259:2    0   1.5T  0 disk
sh-4.4# pvs
sh-4.4# vgs
sh-4.4# lvs
```

After successful deployment, the necessary LVM physical volumes, volume groups, and thin pools are created on the host.

```bash
sh-4.4# pvs
  PV           VG  Fmt  Attr PSize  PFree
  /dev/nvme0n1 vg1 lvm2 a--  <1.46t <1.46t
  /dev/nvme1n1 vg1 lvm2 a--  <1.46t <1.46t
  /dev/nvme2n1 vg1 lvm2 a--  <1.46t <1.46t
sh-4.4# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg1   3   0   0 wz--n- <4.37t <4.37t
sh-4.4# lvs
  LV          VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thin-pool-1 vg1 twi-a-tz-- <3.93t             0.00   1.19
```

### Testing the Operator

Once you have completed [the deployment steps](#deploying-the-operator), you can proceed to create a basic test application that will consume storage.

To initiate the process, create a Persistent Volume Claim (PVC):

```bash
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvms-test
  labels:
    type: local
spec:
  storageClassName: lvms-vg1
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
EOF
```

Upon creation, you may observe that the PVC remains in a `Pending` state.

```bash
$ oc get pvc

NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvms-test   Pending                                      lvms-vg1       7s
```

This behavior is expected, as the storage class awaits the creation of a pod that requires the PVC.
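If you want to confirm why the claim has not bound yet, the PVC events show the binding mode at work. The sketch below indicates what to look for; the exact event wording may differ between versions:

```bash
$ oc describe pvc lvms-test
# Expect an event similar to:
#   Normal  WaitForFirstConsumer  waiting for first consumer to be created before binding
```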
        name: \"http-server\"\n      volumeMounts:\n        - mountPath: \"/usr/share/nginx/html\"\n          name: storage\nEOF\n```\n\nOnce the pod has been created and associated with the corresponding PVC, the PVC is bound, and the pod transitions to the `Running` state.\n\n```bash\n$ oc get pvc,pods\n\nNAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE\npersistentvolumeclaim/lvms-test   Bound    pvc-a37ef71c-a9b9-45d8-96e8-3b5ad30a84f6   5Gi        RWO            lvms-vg1       3m2s\n\nNAME            READY   STATUS    RESTARTS   AGE\npod/lvms-test   1/1     Running   0          28s\n```\n\n## Cleanup\n\nTo perform a full cleanup, follow these steps:\n\n1. Remove all the application pods which are using PVCs created with LVMS, and then remove all these PVCs.\n\n2. Ensure that there are no remaining `LogicalVolume` custom resources that were created by LVMS.\n\n    ```bash\n    $ oc get logicalvolumes.topolvm.io\n    No resources found\n    ```\n\n3. Remove the `LVMCluster` CR.\n\n    ```bash\n    $ oc delete lvmclusters.lvm.topolvm.io my-lvmcluster\n    lvmcluster.lvm.topolvm.io \"my-lvmcluster\" deleted\n    ```\n\n    If the previous command is stuck, it may be necessary to perform a [forced cleanup procedure](./docs/troubleshooting.md#forced-cleanup).\n\n4. Verify that the only remaining resource in the `openshift-storage` namespace is the Operator.\n\n    ```bash\n    oc get pods -n openshift-storage\n    NAME                                 READY   STATUS    RESTARTS   AGE\n    lvms-operator-8bf864c85-8zjlp        3/3     Running   0          125m\n    ```\n\n5. To begin the undeployment process of LVMS, use the following command:\n\n    ```bash\n    make undeploy\n    ```\n\n## E2E Tests\n\nThere are a few steps required to run the end-to-end tests for LVMS.\n\nYou will need the following environment variables set:\n```bash\nIMAGE_REGISTRY={{REGISTRY_URL}} # Ex: quay.io\nREGISTRY_NAMESPACE={{REGISTRY_NAMESPACE}} # Ex: lvms-dev, this should be your own personal namespace\n```\n\nOnce the environment variables are set, you can run\n```bash\n# build and deploy your local code to the cluster\n$ make deploy-local\n\n# Wait for the lvms-operator to have status=Running\n$ oc -n openshift-storage get pods\n# NAME                             READY   STATUS    RESTARTS   AGE\n# lvms-operator-579fbf46d5-vjwhp   3/3     Running   0          3m27s\n\n# run the e2e tests\n$ make e2e\n\n# undeploy the operator from the cluster\n$ make undeploy\n```\n\n## Metrics\n\nTo enable monitoring on OpenShift clusters, assign the `openshift.io/cluster-monitoring` label to the same namespace that you deployed LVMS to.\n\n```bash\n$ oc patch namespace/openshift-storage -p '{\"metadata\": {\"labels\": {\"openshift.io/cluster-monitoring\": \"true\"}}}'\n```\n\nLVMS provides [TopoLVM metrics](https://github.com/topolvm/topolvm/blob/v0.21.0/docs/topolvm-node.md#prometheus-metrics) and `controller-runtime` metrics, which can be accessed via OpenShift Console.\n\n## Known Limitations\n\n### Dynamic Device Discovery\n\nWhen a `DeviceSelector` isn't configured for a device class, LVMS operates dynamically, continuously monitoring attached devices on the node and adding them to the volume group if they're unused and supported. 
## Known Limitations

### Dynamic Device Discovery

When a `DeviceSelector` is not configured for a device class, LVMS operates dynamically: it continuously monitors the devices attached to the node and adds them to the volume group if they are unused and supported. However, this approach presents several potential issues:

- LVMS may inadvertently add a device to the volume group that was not intended for LVMS.
- Removing devices could disrupt the volume group.
- LVMS lacks awareness of volume group changes that could lead to data loss, potentially necessitating manual node remediation.

Given these considerations, using LVMS in dynamic discovery mode is not advised for production environments. Instead, list the intended devices explicitly in a `deviceSelector`, as sketched below.
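A minimal sketch of the relevant fragment of an `LVMCluster` device class with an explicit `deviceSelector`; the device paths are illustrative placeholders:

```yaml
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        deviceSelector:
          paths:             # only the listed disks are claimed for vg1
            - /dev/nvme1n1   # illustrative device paths
            - /dev/nvme2n1
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```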
### Unsupported Device Types

Here is a list of the types of devices that are excluded by LVMS. To get more information about the devices on your machine and to check whether they fall under any of these filters, run:

```bash
$ lsblk --paths --json -o NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE
```

1. **Read-Only Devices:**
    - *Condition:* Devices marked as `read-only` are unsupported.
    - *Why:* LVMS requires the ability to write and modify data dynamically, which is not possible with devices set to read-only mode.
    - *Filter:* `ro` is set to `true`.

2. **Suspended Devices:**
    - *Condition:* Devices in a `suspended` state are unsupported.
    - *Why:* A suspended state implies that a device is temporarily inactive or halted, and attempting to incorporate such devices into LVMS can introduce complexities and potential issues.
    - *Filter:* `state` is `suspended`.

3. **Devices with Invalid Partition Labels:**
    - *Condition:* Devices with partition labels such as `bios`, `boot`, or `reserved` are unsupported.
    - *Why:* These labels indicate reserved or specialized functionality associated with specific system components. Attempting to use such devices within LVMS may lead to unintended consequences, as these labels may be reserved for system-related activities.
    - *Filter:* `partlabel` is `bios`, `boot`, or `reserved`.

4. **Devices with Invalid Filesystem Signatures:**
    - *Condition:* Devices with invalid filesystem signatures are unsupported. This includes:
        - Devices with a filesystem type set to `LVM2_member` (only valid if no children).
        - Devices with no free capacity as a physical volume.
        - Devices already part of another volume group.
    - *Why:* These conditions indicate that the device is either already used by another volume group or has no free capacity available for LVMS.
    - *Filter:* `fstype` is not `null`, or `fstype` is set to `LVM2_member` and the device has child block devices, or `pvs --units g -v --reportformat json` reports `pv_free` for the block device as `0G`.

5. **Devices with Children:**
    - *Condition:* Devices with child block devices are unsupported.
    - *Why:* LVMS operates optimally with standalone block devices that are not part of a hierarchical structure. Devices with children can complicate volume management, potentially causing conflicts, errors, or difficulties in tracking and managing logical volumes.
    - *Filter:* `children` is non-empty.

6. **Devices with Bind Mounts:**
    - *Condition:* Devices with bind mounts are unsupported.
    - *Why:* Managing logical volumes becomes more complex when dealing with devices that have bind mounts, potentially causing conflicts or difficulties in maintaining the integrity of the logical volume setup.
    - *Filter:* `cat /proc/1/mountinfo | grep <device-name>` returns mount points for the device in the 4th or 10th field.

7. **ROM Devices:**
    - *Condition:* Devices of type `rom` are unsupported.
    - *Why:* Such devices are designed for static data storage and lack the necessary read-write capabilities essential for dynamic operations performed by LVMS.
    - *Filter:* `type` is set to `rom`.

8. **LVM Partitions:**
    - *Condition:* Devices of type `LVM` partition are unsupported.
    - *Why:* These partitions are already dedicated to LVM and are managed as part of an existing volume group.
    - *Filter:* `type` is set to `lvm`.

9. **Loop Devices:**
    - *Condition:* Loop devices must not be used if they are already in use by Kubernetes.
    - *Why:* When loop devices are utilized by Kubernetes, they are likely configured for specific tasks or processes managed by the Kubernetes environment. Integrating loop devices that are already in use by Kubernetes into LVMS can lead to potential conflicts and interference with the Kubernetes system.
    - *Filter:* `type` is set to `loop`, and `losetup <loop-device> -O BACK-FILE --json` returns a `back-file` that contains `plugins/kubernetes.io`.

Devices meeting any of these conditions are filtered out for LVMS operations.

_NOTE: It is strongly recommended to perform a thorough wipe of a device before using it within LVMS to proactively prevent unintended behaviors or potential issues._
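One common way to perform such a wipe is `wipefs`, which lists and removes filesystem, RAID, and partition-table signatures. The device path below is a placeholder, and the second command is destructive, so double-check the target device first:

```bash
# Inspect existing signatures on the device (read-only)
$ wipefs /dev/nvme1n1

# Remove all signatures from the device (destroys existing data; path is illustrative)
$ wipefs --all /dev/nvme1n1
```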
### Single LVMCluster support

LVMS does not support the reconciliation of multiple `LVMCluster` custom resources simultaneously.

### Upgrades from v4.10 and v4.11

It is not possible to upgrade from release-4.10 or release-4.11 to a newer version due to a breaking change. For further information on this matter, consult [the relevant documentation](https://github.com/topolvm/topolvm/blob/main/docs/proposals/rename-group.md).

### Missing native LVM RAID Configuration support

Currently, the LVM Operator forces all LVMClusters to work with a thinly provisioned volume in order to support snapshotting and cloning of PVCs.
This is backed by an LVM logical volume of type `thin`, which is reflected as an attribute in the LVM flags.
Using LVM's built-in RAID capabilities conflicts with this `thin` attribute, because the same attribute field also indicates whether a volume is part of an LVM RAID configuration (`r` or `R` flag).
This means that the only way to support a RAID configuration from within LVM would be to convert two RAID arrays into a thin pool with `lvconvert`, after which the RAID is no longer recognized by LVM (due to said conflict in the volume attributes).
While this would enable initial synchronization and redundancy, all repair and extend operations would no longer respect the RAID topology in the volume group, and operations like `lvconvert --repair` are no longer supported at all.
Recovering from a failure in such a setup would therefore be quite complex.

Instead of LVM-based RAID, we recommend using the Linux [`mdraid`](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#linux-raid-subsystems_managing-raid) subsystem.
Simply create a RAID array with `mdadm` and then use it in your `deviceSelector` within `LVMCluster`:

1. For a simple RAID1, you could use `mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1`
2. Then you can reference `/dev/md0` in the `deviceSelector` as normal
3. Any recovery and syncing will then happen with `mdraid`: [Replacing Disks](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#replacing-a-failed-disk-in-raid_managing-raid) and [Repairing](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#repairing-raid-disks_managing-raid) work independently of LVMS and can be handled by a sysadmin of the node.

_NOTE: Currently, RAID arrays created with `mdraid` are not automatically recognized when no `deviceSelector` is used, so they MUST be specified explicitly._

### Missing LV-level encryption support

Currently, the LVM Operator does not have native LV-level encryption support. Instead, you can encrypt the entire disk or a partition, and use it within `LVMCluster`. This way, all LVs created by LVMS on this disk will be encrypted out of the box.

Here is an example `MachineConfig` that can be used to configure encrypted partitions during an OpenShift installation:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 98-encrypted-disk-partition-master
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      disks:
        - device: /dev/nvme0n1
          wipeTable: false
          partitions:
            - sizeMiB: 204800
              startMiB: 600000
              label: application
              number: 5
      luks:
        - clevis:
            tpm2: true
          device: /dev/disk/by-partlabel/application
          name: application
          options:
          - --cipher
          - aes-cbc-essiv:sha256
          wipeVolume: true
```

Then, the path to the encrypted partition, `/dev/mapper/application`, can be specified in the `deviceSelector`.

For non-OpenShift clusters, you can encrypt a disk using LUKS with `cryptsetup`, and then use it in your `deviceSelector` within `LVMCluster`:

1. Set up the `/dev/sdb` device for encryption. This will also remove all the data on the device:

   ```bash
   cryptsetup -y -v luksFormat /dev/sdb
   ```

    You'll be prompted to set a passphrase to unlock the volume.

2. Create a logical device-mapper device named `encrypted`, mapped to the LUKS-encrypted device:

   ```bash
   cryptsetup luksOpen /dev/sdb encrypted
   ```

    You'll be prompted to enter the passphrase you set when creating the volume.

3. You can now reference `/dev/mapper/encrypted` in the `deviceSelector`.

### Snapshotting and Cloning in Multi-Node Topologies

In general, since LVMCluster does not ensure data replication, `VolumeSnapshots` and consumption of them are always limited to the original dataSource.
Thus, snapshots must be created on the same node as the original data, and all pods relying on a PVC that uses the snapshot data have to be scheduled on the node that contained the original `LogicalVolume` in TopoLVM.

Note that snapshotting is based on thin-pool snapshots from upstream TopoLVM and is still considered [experimental upstream](https://github.com/topolvm/topolvm/discussions/737).
This is because in multi-node Kubernetes clusters the native Kubernetes scheduler decides the node where pods are deployed, while snapshot provisioning follows the CSI topology known to TopoLVM; if the `PersistentVolumeClaim` is not created upfront, it cannot always be guaranteed that snapshots are provisioned on the same node as the original data.

If you are unsure what to make of this, always make sure that the original `PersistentVolumeClaim` that you want to snapshot is already created and `Bound`.
With these prerequisites, it can be guaranteed that all follow-up `VolumeSnapshot` objects, as well as `PersistentVolumeClaim` objects depending on the original one, are scheduled correctly.
The easiest way to achieve this is to use pre-created `PersistentVolumeClaims` and a non-ephemeral `StatefulSet` for your workload.

_NOTE: All of the above also applies to cloning `PersistentVolumeClaims` directly by using the original `PersistentVolumeClaim` as the data source instead of a Snapshot._
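To make the prerequisite concrete, here is a minimal sketch of a snapshot request against the `lvms-test` PVC from the testing section, using the `lvms-vg1` volume snapshot class that LVMS creates. Names are illustrative, and the source PVC must already be `Bound`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvms-test-snapshot   # illustrative name
spec:
  volumeSnapshotClassName: lvms-vg1
  source:
    persistentVolumeClaimName: lvms-test
```

A PVC restored from this snapshot, or cloned directly from `lvms-test`, is then scheduled to the node that holds the original logical volume, as described above.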
### Validation of `LVMCluster` CRs outside the `openshift-storage` namespace

When the Operator is installed via `ClusterServiceVersion` and an `LVMCluster` CR is created outside the `openshift-storage` namespace, the Operator is not able to validate that CR.
This is because the `ValidatingWebhookConfiguration` is restricted to the `openshift-storage` namespace and does not have access to `LVMCluster` CRs in other namespaces.
Thus, the Operator cannot prevent the creation of invalid `LVMCluster` CRs outside the `openshift-storage` namespace; however, it also does not pick such CRs up and simply ignores them.

This is because the Operator Lifecycle Manager (OLM) does not allow creating a `ClusterServiceVersion` with installMode `OwnNamespace` without also restricting the webhook configuration to that namespace.
Validation in the `openshift-storage` namespace is processed normally.

## Troubleshooting

See the [troubleshooting guide](docs/troubleshooting.md).

## Contributing

See the [contribution guide](CONTRIBUTING.md).