Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/openebs/lvm-localpv
Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that is integrated with a backend LVM2 data storage stack.
Last synced: 7 days ago
- Host: GitHub
- URL: https://github.com/openebs/lvm-localpv
- Owner: openebs
- License: apache-2.0
- Created: 2020-12-23T13:27:56.000Z (about 4 years ago)
- Default Branch: develop
- Last Pushed: 2024-12-16T18:39:25.000Z (27 days ago)
- Last Synced: 2024-12-24T20:36:52.548Z (19 days ago)
- Topics: csi, csi-driver, data-visualization, hacktoberfest, kubernetes, lvm, lvm-snapshot, lvm-volumes, lvm2, storage, storage-api, storage-engine, storage-manager
- Language: Go
- Homepage:
- Size: 9.03 MB
- Stars: 262
- Watchers: 18
- Forks: 100
- Open Issues: 34
- Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Governance: GOVERNANCE.md
Awesome Lists containing this project
- stars - openebs/lvm-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
README
## OpenEBS - LocalPV-LVM CSI Driver
[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Flvm-localpv.svg?type=shield&issueType=license)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Flvm-localpv?ref=badge_shield&issueType=license)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/4548/badge)](https://www.bestpractices.dev/projects/4548)
[![Slack](https://img.shields.io/badge/chat-slack-ff1493.svg?style=flat-square)](https://kubernetes.slack.com/messages/openebs)
[![Community Meetings](https://img.shields.io/badge/Community-Meetings-blue)](https://us05web.zoom.us/j/87535654586?pwd=CigbXigJPn38USc6Vuzt7qSVFoO79X.1)
[![Go Report](https://goreportcard.com/badge/github.com/openebs/lvm-localpv)](https://goreportcard.com/report/github.com/openebs/lvm-localpv)

| [![Linux LVM2](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png "Linux LVM2")](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png) | The OpenEBS LocalPV-LVM Data-Engine is a mature, widely deployed, production-grade CSI driver for dynamically provisioning node-local volumes in a Kubernetes cluster, using the Linux LVM2 data/storage management stack as the storage backend. It integrates LVM2 into the OpenEBS platform and exposes many LVM2 services and capabilities. |
| :--- | :--- |

## Overview
The LocalPV-LVM CSI Driver became GA in August 2021 (with release v0.8.0). It is now a very mature product and a core component of the OpenEBS storage platform.

Due to the major adoption of LocalPV-LVM (50,000+ users), this Data-Engine is now being unified and integrated into the core OpenEBS storage platform, instead of being maintained as an external Data-Engine within our project.

Our [2024 Roadmap is here](https://github.com/openebs/openebs/blob/main/ROADMAP.md). It defines a rich set of new features, covering the integration of LocalPV-LVM into the core OpenEBS platform. Please review this roadmap, pass back any feedback on it, and recommend or suggest new ideas regarding LocalPV-LVM. We welcome all your feedback.
> **LocalPV-LVM is very popular**: Live OpenEBS systems actively report back product metrics every day to our Global Analytics metrics engine (unless disabled by the user).
> Here are our key project popularity metrics as of: 01 Mar 2024
>
> :rocket: OpenEBS is the #1 deployed Storage Platform for Kubernetes
> :zap: LocalPV-LVM is the 3rd most deployed Data-Engine within the platform
> :sunglasses: LocalPV-LVM has +50,000 Daily Active Users
> :sunglasses: LocalPV-LVM has +120,000 Global installations
> :floppy_disk: +49 Million OpenEBS Volumes have been deployed globally
> :tv: We have +8 Million Global OpenEBS installations
> :star: We are the [#1 GitHub Star ranked](https://github.com/openebs/website/blob/main/website/public/images/png/github_star-history-2024_Feb_1.png) K8s Data Storage platform
## Dev Activity dashboard
![Repobeats analytics](https://repobeats.axiom.co/api/embed/baab8c2a9d1606494ab32714cbf91b65845a6001.svg "Repobeats analytics image")

## Project info
The original v1.0 dev roadmap [is here](https://github.com/orgs/openebs/projects/30). This tracks our base historical engineering development work and is now somewhat out of date. We will publish an updated 2024 unified roadmap soon, as LocalPV-LVM is now being integrated and unified into the core OpenEBS storage platform.
## Usage and Deployment
## Prerequisites
> [!IMPORTANT]
> Before installing the LocalPV-LVM driver please make sure your Kubernetes Cluster meets the following prerequisites:
> 1. All the nodes must have LVM2 utils package installed
> 2. All the nodes must have dm-snapshot Kernel Module loaded - (Device Mapper Snapshot)
> 3. You have access to install RBAC components into `` namespace.

### Supported System
> | Name | Version |
> | :--- | :--- |
> | K8S | 1.23+ |
> | Distro | Alpine, Arch, CentOS, Debian, Fedora, NixOS, SUSE, Talos, RHEL, Ubuntu |
> | Kernel | oldest supported kernel is 2.6 |
> | LVM2 | 2.03.21 |
> | Min RAM | LVM2 is a kernel-native module. It is very efficient and fast. It has no strict memory requirements |
> | Stability | LVM2 is extremely stable and very mature. The kernel module was released ~2005. It exists in most Linux distros |
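
These prerequisites can be sanity-checked on each node before installing. A rough sketch (command availability and module names can vary slightly by distro):

```bash
# Check that the LVM2 userspace tools are installed
sudo lvm version

# Check that the dm-snapshot kernel module is loaded; try loading it if not
lsmod | grep dm_snapshot || sudo modprobe dm-snapshot
```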
## Setup
Find the disk that you want to use for LocalPV-LVM. Note: for testing, you can use a loopback device.
```
truncate -s 1024G /tmp/disk.img
sudo losetup -f /tmp/disk.img --show
```

> [!NOTE]
> - LocalPV-LVM will not provision the VG for the user
> - The required Physical Volume (PV) and Volume Group (VG) names must be created and present beforehand.

Create the volume group on all the nodes; it will be used by the LVM2 driver for provisioning the volumes:
```
sudo pvcreate /dev/loop0
sudo vgcreate lvmvg /dev/loop0 ## here lvmvg is the volume group name to be created
```
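
As a quick sanity check, you can verify that the physical volume and volume group were created (exact output will vary):

```bash
sudo pvs          # should list /dev/loop0 as a physical volume
sudo vgs lvmvg    # should show the lvmvg volume group and its free capacity
```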
## Installation
**NOTE:** Installation using operator YAMLs is no longer supported.

Install the latest release of the OpenEBS LocalPV-LVM driver by running the following command. Note: all nodes must run the same versions of LocalPV-LVM, LVM2, device-mapper, and dm-snapshot.
```bash
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs --create-namespace
```

**NOTE:** If you are running a custom kubelet location, or a Kubernetes distribution that uses a custom kubelet location, the `kubelet` directory must be changed in the Helm values at install time using the flag `--set lvm-localpv.lvmNode.kubeletDir=<path>` in the `helm install` command.
- For `microk8s`, the default directory (`/var/lib/kubelet/`) should be changed to `/var/snap/microk8s/common/var/lib/kubelet/`.
- For `k0s`, the default directory (`/var/lib/kubelet`) should be changed to `/var/lib/k0s/kubelet`.
- For `RancherOS`, the default directory (`/var/lib/kubelet`) should be changed to `/opt/rke/var/lib/kubelet`.

Verify that the LVM driver components are installed and running using the command below. Depending on the number of nodes, you will see one lvm-controller pod and one lvm-node daemonset pod per node:
```bash
$ kubectl get pods -n openebs -l role=openebs-lvm
NAME READY STATUS RESTARTS AGE
openebs-lvm-localpv-controller-7b6d6b4665-fk78q 5/5 Running 0 11m
openebs-lvm-localpv-node-mcch4 2/2 Running 0 11m
openebs-lvm-localpv-node-pdt88 2/2 Running 0 11m
openebs-lvm-localpv-node-r9jn2 2/2 Running 0 11m
```

Once the LVM driver is installed and running, we can provision a volume.
### Deployment
#### 1. Create a Storage class
```
$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-lvmpv
parameters:
storage: "lvm"
volgroup: "lvmvg"
provisioner: local.csi.openebs.io
```

Check the doc on [storageclasses](docs/storageclasses.md) to see all the supported parameters for LocalPV-LVM.
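
For example, a storage class can also select the filesystem via the `fsType` parameter (shown here with `xfs`; this is a sketch based on the Features list below, so consult the storageclasses doc for the authoritative parameter set):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-xfs    # hypothetical name for illustration
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  fsType: "xfs"              # ext4, btrfs, and xfs are listed as supported
provisioner: local.csi.openebs.io
```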
##### VolumeGroup Availability
If the LVM volume group is available only on certain nodes, use topology to specify the list of nodes where the volume group exists. As shown in the storage class below, `allowedTopologies` describes volume group availability on nodes.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-lvmpv
allowVolumeExpansion: true
parameters:
storage: "lvm"
volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
- key: kubernetes.io/hostname
values:
- lvmpv-node1
- lvmpv-node2
```

The above storage class specifies that the volume group "lvmvg" is available only on nodes lvmpv-node1 and lvmpv-node2, so the LVM driver will create volumes on those nodes only.
Please note that the provisioner name for the LVM driver is "local.csi.openebs.io"; it must be used when creating the storage class so that volume provisioning/deprovisioning requests are routed to the LVM driver.
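
Assuming the driver registers a CSIDriver object (as CSI drivers typically do), you can confirm the provisioner name is present in the cluster:

```bash
kubectl get csidrivers
# local.csi.openebs.io should appear in the NAME column
```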
#### 2. Create the PVC
```
$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: csi-lvmpv
spec:
storageClassName: openebs-lvmpv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
```

Create a PVC using the storage class created for the LVM driver.
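
Applying the two manifests and checking that the claim binds might look like this (file names follow the examples above):

```bash
kubectl apply -f sc.yaml
kubectl apply -f pvc.yaml
kubectl get pvc csi-lvmpv   # STATUS should eventually show Bound
```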
#### 3. Deploy the application
Create the deployment yaml using the pvc backed by LVM storage.
```
$ cat fio.yaml
apiVersion: v1
kind: Pod
metadata:
name: fio
spec:
restartPolicy: Never
containers:
- name: perfrunner
image: openebs/tests-fio
command: ["/bin/bash"]
args: ["-c", "while true ;do sleep 50; done"]
volumeMounts:
- mountPath: /datadir
name: fio-vol
tty: true
volumes:
- name: fio-vol
persistentVolumeClaim:
claimName: csi-lvmpv
```

After the application is deployed, we can go to the node and see that the LVM volume is being used by the application for reading/writing data, and that space is consumed from the LVM volume group. Please note that to check the provisioned volumes on the node, we need to run `pvscan --cache` to update the LVM cache; then we can use `lvdisplay` and all other LVM commands on the node.

#### 4. Deprovisioning
To deprovision the volume, delete the application that is using it, then delete the PVC. As part of the PVC deletion, the volume is also removed from the volume group and its space is freed.
```
$ kubectl delete -f fio.yaml
pod "fio" deleted
$ kubectl delete -f pvc.yaml
persistentvolumeclaim "csi-lvmpv" deleted
```

## Features

- [x] Access Modes
- [x] ReadWriteOnce
- ~~ReadOnlyMany~~
- ~~ReadWriteMany~~
- [x] Volume modes
- [x] `Filesystem` mode
- [x] [`Block`](docs/raw-block-volume.md) mode
- [x] Supports fsTypes: `ext4`, `btrfs`, `xfs`
- [x] Volume metrics
- [x] Topology
- [x] [Snapshot](docs/snapshot.md)
- [ ] Clone
- [x] [Volume Resize](docs/resize.md)
- [x] [Thin Provision](docs/thin_provision.md)
- [ ] Backup/Restore
- [ ] Ephemeral inline volume

### Limitation
- Resize of volumes with snapshots is not supported

## License Compliance
[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Flvm-localpv.svg?type=large&issueType=license)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fopenebs%2Flvm-localpv?ref=badge_large&issueType=license)