diff --git a/.github/ISSUE_TEMPLATE/bug-report.md b/.github/ISSUE_TEMPLATE/bug-report.md
index bf7b4709..9a1db2b2 100644
--- a/.github/ISSUE_TEMPLATE/bug-report.md
+++ b/.github/ISSUE_TEMPLATE/bug-report.md
@@ -15,9 +15,9 @@ labels: Bug
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other [Pastebin](https://pastebin.com/) is fine.)
-* `kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin`
-* `kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin`
-* `kubectl get pods -n kube-system`
+* `kubectl logs -f openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin`
+* `kubectl logs -f openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin`
+* `kubectl get pods -n openebs`
* `kubectl get lvmvol -A -o yaml`
**Anything else you would like to add:**
diff --git a/Adopters.md b/Adopters.md
index c33b4d1f..9c29c6ba 100644
--- a/Adopters.md
+++ b/Adopters.md
@@ -1,5 +1,5 @@
-# OpenEBS LVM-LocalPV Adopters
+# OpenEBS LocalPV-LVM Adopters
-This is the list of organizations and users that publicly shared details of how they are using OpenEBS LVM-LocalPV CSI driver for running their Stateful workloads.
+This is the list of organizations and users that have publicly shared details of how they are using the OpenEBS LocalPV-LVM CSI driver for running their Stateful workloads.
[click here](https://github.com/openebs/openebs/issues/2719) to see the list of organizations/users who have publicly shared the usage of OpenEBS.
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 70b9a980..1dcc5ba9 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,4 +1,4 @@
-# Contributing to LVM-LocalPV
+# Contributing to LocalPV-LVM
LVM LocalPV uses the standard GitHub pull requests process to review and accept contributions. There are several areas that could use your help. For starters, you could help in improving the sections in this document by either creating a new issue describing the improvement or submitting a pull request to this repository. The issues are maintained at [lvm-localpv/issues](https://github.com/openebs/lvm-localpv/issues) repository.
@@ -9,7 +9,7 @@ LVM LocalPV uses the standard GitHub pull requests process to review and accept
## Steps to Contribute
-LVM-LocalPV is an Apache 2.0 Licensed project and all your commits should be signed with Developer Certificate of Origin. See [Sign your work](#sign-your-work).
+LocalPV-LVM is an Apache 2.0 Licensed project and all your commits should be signed with Developer Certificate of Origin. See [Sign your work](#sign-your-work).
* Find an issue to work on or create a new issue. The issues are maintained at [lvm-localpv/issues](https://github.com/openebs/lvm-localpv/issues). You can pick up from a list of [good-first-issues](https://github.com/openebs/lvm-localpv/labels/good%20first%20issue).
* Claim your issue by commenting your intent to work on it to avoid duplication of efforts.
diff --git a/README.md b/README.md
index d224a39d..ef3e2190 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-## OpenEBS - LVM-LocalPV CSI Driver
+## OpenEBS - LocalPV-LVM CSI Driver
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fopenebs%2Flvm-localpv.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Fopenebs%2Flvm-localpv?ref=badge_shield)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3523/badge)](https://bestpractices.coreinfrastructure.org/en/projects/4548)
[![Slack](https://img.shields.io/badge/chat!!!-slack-ff1493.svg?style=flat-square)](https://kubernetes.slack.com/messages/openebs)
@@ -7,26 +7,26 @@
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fopenebs%2Flvm-localpv.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fopenebs%2Flvm-localpv?ref=badge_shield)
-| [![Linux LVM2](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png "Linux LVM2")](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png) | The OpenEBS LVM-LocalPV Data-Engine is a mature and well deployed production grade CSI driver for dynamically provisioning Node Local Volumes into a K8s cluster utilizing the LINUX LVM2 Data / storage Mgmt stack as the storage backend. It integrates LVM2 into the OpenEBS platform and exposes many LVM2 services and capabilities. |
+| [![Linux LVM2](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png "Linux LVM2")](https://github.com/openebs/website/blob/main/website/public/images/png/LVM_logo_1.png) | The OpenEBS LocalPV-LVM Data-Engine is a mature, widely deployed, production-grade CSI driver for dynamically provisioning Node Local Volumes into a K8s cluster, utilizing the Linux LVM2 data/storage management stack as the storage backend. It integrates LVM2 into the OpenEBS platform and exposes many LVM2 services and capabilities. |
| :--- | :--- |
## Overview
-LVM-LocalPV CSI Driver becasme GA in August 2021 (with the release v0.8.0). It is now a a very mature product and a core component of the OpenEBS storage platform.
-Due to the major adoption of LVM-LocalPV (+50,000 users), this Data-Engine is now being unified and integrated into the core OpenEBS Storage platform; instead of being maintained as an external Data-Engine within our project.
+The LocalPV-LVM CSI Driver became GA in August 2021 (with release v0.8.0). It is now a very mature product and a core component of the OpenEBS storage platform.
+Due to the major adoption of LocalPV-LVM (+50,000 users), this Data-Engine is now being unified and integrated into the core OpenEBS Storage platform, instead of being maintained as an external Data-Engine within our project.
-Our [2024 Roadmap is here](https://github.com/openebs/openebs/blob/main/ROADMAP.md). It defines a rich set of new featrues, which covers the integration of LVM-LocalPV into the core OpenEBS platform.
-Please review this roadmp and feel free to pass back any feedback on it, as well as recommend and suggest new ideas regarding LVM-LocalPV. We welcome all your feedback.
+Our [2024 Roadmap is here](https://github.com/openebs/openebs/blob/main/ROADMAP.md). It defines a rich set of new features, including the integration of LocalPV-LVM into the core OpenEBS platform.
+Please review this roadmap and feel free to share any feedback on it, as well as suggest new ideas regarding LocalPV-LVM. We welcome all your feedback.
-> **LVM-LocalPV is very popular** : Live OpenEBS systems actively report back product metrics every day, to our Global Anaytics metrics engine (unless disabled by the user).
+> **LocalPV-LVM is very popular**: Live OpenEBS systems actively report product metrics every day to our Global Analytics metrics engine (unless disabled by the user).
> Here are our key project popularity metrics as of: 01 Mar 2024
>
> :rocket: OpenEBS is the #1 deployed Storage Platform for Kubernetes
-> :zap: LVM-LocalPV is the 3rd most deployed Data-Engine within the platform
-> :sunglasses: LVM-LocalPV has +50,000 Daily Acive Users
-> :sunglasses: LVM-LocalPV has +120,000 Global instllations
+> :zap: LocalPV-LVM is the 3rd most deployed Data-Engine within the platform
+> :sunglasses: LocalPV-LVM has +50,000 Daily Active Users
+> :sunglasses: LocalPV-LVM has +120,000 Global installations
> :floppy_disk: +49 Million OpenEBS Volumes have been deployed globally
> :tv: We have +8 Million Global OpenEBS installations
> :star: We are the [#1 GitHub Star ranked](https://github.com/openebs/website/blob/main/website/public/images/png/github_star-history-2024_Feb_1.png) K8s Data Storage platform
@@ -38,7 +38,7 @@ Please review this roadmp and feel free to pass back any feedback on it, as well
## Project info
-The orignal v1.0 dev roadmap [is here ](https://github.com/orgs/openebs/projects/30). This tracks our base historical engineering development work and is now somewhat out of date. We will be publish an updated 2024 Unified Roadmp soon, as ZFS-LoalPV is now being integrated and unified into the core OpenEBS storage platform.
+The original v1.0 dev roadmap [is here](https://github.com/orgs/openebs/projects/30). This tracks our base historical engineering development work and is now somewhat out of date. We will publish an updated 2024 Unified Roadmap soon, as LocalPV-LVM is now being integrated and unified into the core OpenEBS storage platform.
@@ -48,23 +48,23 @@ The orignal v1.0 dev roadmap [is here ](https://github.com/orgs/openebs/projects
> [!IMPORTANT]
-> Before installing the LVM-LocalPV driver please make sure your Kubernetes Cluster meets the following prerequisites:
+> Before installing the LocalPV-LVM driver, please make sure your Kubernetes Cluster meets the following prerequisites:
> 1. All the nodes must have LVM2 utils package installed
> 2. All the nodes must have dm-snapshot Kernel Module loaded - (Device Mapper Snapshot)
-> 4. You have access to install RBAC components into kube-system namespace. The OpenEBS LVM driver components are installed in kube-system namespace to allow them to be flagged as system critical components.
+> 3. You have access to install RBAC components into the `openebs` namespace.
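+>
+> A quick way to verify items 1 and 2 on a node (commands are distro-specific; Debian/Ubuntu shown here as an assumption):
+> ```
+> sudo apt-get install -y lvm2     # LVM2 utils package
+> sudo modprobe dm-snapshot        # load the Device Mapper snapshot kernel module
+> lsmod | grep dm_snapshot         # confirm the module is loaded
+> ```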
-> [!NOTE]
+
### Supported System
> | Name | Version |
> | :--- | :--- |
-> | K8S | 1.20+ |
+> | K8S | 1.23+ |
> | Distro | Alpine, Arch, CentOS, Debian, Fedora, NixOS, SUSE, Talos, RHEL, Ubuntu |
> | Kernel | oldest supported kernel is 2.6 |
> | LVM2 | 2.03.21 |
@@ -75,7 +75,7 @@ The orignal v1.0 dev roadmap [is here ](https://github.com/orgs/openebs/projects
## Setup
-Find the disk which you want to use for the LVM-LocalPV. Note: For testing you can use the loopback device.
+Find the disk which you want to use for LocalPV-LVM. Note: for testing, you can use a loopback device.
```
truncate -s 1024G /tmp/disk.img
@@ -84,8 +84,8 @@ sudo losetup -f /tmp/disk.img --show
> [!NOTE]
> - This is the old maual config process
-> - LVM-LocalPV will num dynamically provision the VG fro you
-> - The PV, VG and LV names will be dynamically provisioned by OpenEBS LVM-LocalPV as K8s unique entities (for safety, you cannot provide your own PV, VG or LV names)
+> - LocalPV-LVM will now dynamically provision the VG for you
+> - The PV, VG and LV names will be dynamically provisioned by OpenEBS LocalPV-LVM as K8s unique entities (for safety, you cannot provide your own PV, VG or LV names)
Create the Volume group on all the nodes, which will be used by the LVM2 Driver for provisioning the volumes
@@ -98,45 +98,33 @@ sudo vgcreate lvmvg /dev/loop0 ## here lvmvg is the volume group name to b
## Installation
-Install the latest release of OpenEBS LVM2 LVM-LocalPV driver by running the following command. Note: All nodes must be running the same verison of LVM-LocalPV, LMV2, device-mapper & dm-snapshot.
+Install the latest release of the OpenEBS LocalPV-LVM driver. Note: all nodes must be running the same version of LocalPV-LVM, LVM2, device-mapper & dm-snapshot.
+**NOTE:** Installation using operator YAMLs is no longer supported.
+We can install the latest release of the OpenEBS LVM driver by running the following command:
+```bash
+helm repo add openebs https://openebs.github.io/openebs
+helm repo update
+helm install openebs --namespace openebs openebs/openebs --create-namespace
```
-$ kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
-```
-
-If you want to fetch a versioned manifest, you can use the manifests for a
-specific OpenEBS release version, for example:
-
-```
-$ kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/versioned/3.0.0/lvm-operator.yaml
-```
-
-**NOTE:** For some Kubernetes distributions, the `kubelet` directory must be changed at all relevant places in the YAML powering the operator (both the `openebs-lvm-controller` and `openebs-lvm-node`).
-- For `microk8s`, we need to change the kubelet directory to `/var/snap/microk8s/common/var/lib/kubelet/`, we need to replace `/var/lib/kubelet/` with `/var/snap/microk8s/common/var/lib/kubelet/` at all the places in the operator yaml and then we can apply it on microk8s.
+**NOTE:** If you are running a custom Kubelet location, or a Kubernetes distribution that uses a custom Kubelet location, the `kubelet` directory must be changed in the Helm values at install time using the flag `--set lvm-localpv.lvmNode.kubeletDir=` in the `helm install` command (see the example after this list).
+- For `microk8s`, the kubelet directory must be changed to `/var/snap/microk8s/common/var/lib/kubelet/`, i.e. replace `/var/lib/kubelet/` with `/var/snap/microk8s/common/var/lib/kubelet/`.
- For `k0s`, the default directory (`/var/lib/kubelet`) should be changed to `/var/lib/k0s/kubelet`.
-
- For `RancherOS`, the default directory (`/var/lib/kubelet`) should be changed to `/opt/rke/var/lib/kubelet`.
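+
+For example, on `microk8s` the install command might look like this (a sketch using the chart value shown above; adjust the path to your distribution):
+```bash
+helm install openebs --namespace openebs openebs/openebs --create-namespace \
+  --set lvm-localpv.lvmNode.kubeletDir="/var/snap/microk8s/common/var/lib/kubelet/"
+```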
-Verify that the LVM driver Components are installed and running using below command :
-
+Verify that the LVM driver components are installed and running using the below command. Depending on the number of nodes, you will see one lvm-controller pod and one lvm-node daemonset pod per node:
+```bash
+$ kubectl get pods -n openebs -l role=openebs-lvm
+NAME READY STATUS RESTARTS AGE
+openebs-lvm-localpv-controller-7b6d6b4665-fk78q 5/5 Running 0 11m
+openebs-lvm-localpv-node-mcch4 2/2 Running 0 11m
+openebs-lvm-localpv-node-pdt88 2/2 Running 0 11m
+openebs-lvm-localpv-node-r9jn2 2/2 Running 0 11m
```
-$ kubectl get pods -n kube-system -l role=openebs-lvm
-```
-
-Depending on number of nodes, you will see one lvm-controller pod and lvm-node daemonset running
-on the nodes.
-```
-NAME READY STATUS RESTARTS AGE
-openebs-lvm-controller-0 5/5 Running 0 35s
-openebs-lvm-node-54slv 2/2 Running 0 35s
-openebs-lvm-node-9vg28 2/2 Running 0 35s
-openebs-lvm-node-qbv57 2/2 Running 0 35s
-
-```
-Once LVM driver is successfully installed, we can provision volumes.
+Once the LVM driver is installed and running, we can provision a volume.
### Deployment
@@ -156,7 +144,7 @@ parameters:
provisioner: local.csi.openebs.io
```
-Check the doc on [storageclasses](docs/storageclasses.md) to know all the supported parameters for LVM-LocalPV
+Check the doc on [storageclasses](docs/storageclasses.md) to know all the supported parameters for LocalPV-LVM
##### VolumeGroup Availability
diff --git a/ci/ci-test.sh b/ci/ci-test.sh
index 497a0bc0..5e9c8b21 100755
--- a/ci/ci-test.sh
+++ b/ci/ci-test.sh
@@ -197,4 +197,4 @@ fi
printf "\n\n######### All test cases passed #########\n\n"
# last statement formatted to always return true
-[ -z "${CLEANUP}" ] || cleanup 2>/dev/null
+[ -z "${CLEANUP}" ] || cleanup 2>/dev/null
\ No newline at end of file
diff --git a/deploy/yamls/lvm-driver.yaml b/deploy/yamls/lvm-driver.yaml
index 0385df6c..95979a42 100644
--- a/deploy/yamls/lvm-driver.yaml
+++ b/deploy/yamls/lvm-driver.yaml
@@ -1191,7 +1191,7 @@ spec:
labels:
app: openebs-lvm-node
role: openebs-lvm
- openebs.io/component-name: openebs-lvm-node
+ openebs.io/component-name: openebs-lvm-localpv-node
openebs.io/version: ci
spec:
priorityClassName: openebs-lvm-localpv-csi-node-critical
diff --git a/design/lvm/persistent-volume-claim/access_mode.md b/design/lvm/persistent-volume-claim/access_mode.md
index 05eda16b..28eafc03 100644
--- a/design/lvm/persistent-volume-claim/access_mode.md
+++ b/design/lvm/persistent-volume-claim/access_mode.md
@@ -67,7 +67,7 @@ spec:
### Test Plan
-- Provision an application with LVM-LocalPV supported access mode and verify accessibility of volume from application.
+- Provision an application with LocalPV-LVM supported access mode and verify accessibility of volume from application.
- Provision an application with unsupported access modes and verify that volume should not get provisioned.
- Provision multiple applications on the same volume and verify that only one application instance should be in running state.
diff --git a/design/lvm/resize_workflow.md b/design/lvm/resize_workflow.md
index 9a66cc27..d8396c57 100644
--- a/design/lvm/resize_workflow.md
+++ b/design/lvm/resize_workflow.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV Volume Expansion
+title: LocalPV-LVM Volume Expansion
authors:
- "@pawanpraka1"
owners:
@@ -9,11 +9,11 @@ last-updated: 2021-05-28
status: Implemented
---
-# LVM-LocalPV Volume Expansion
+# LocalPV-LVM Volume Expansion
## Table of Contents
-- [LVM-LocalPV Volume Expansion](#lvm-localpv-volume-expansion)
+- [LocalPV-LVM Volume Expansion](#localpv-lvm-volume-expansion)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
@@ -33,13 +33,13 @@ status: Implemented
## Summary
-This proposal charts out the design details to implement expansion workflow on LVM-LocalPV Volumes.
+This proposal charts out the design details to implement expansion workflow on LocalPV-LVM Volumes.
## Motivation
### Goals
-- As a user, I should be able to resize the volumes provisioned via LVM-LocalPV by updating the size
+- As a user, I should be able to resize the volumes provisioned via LocalPV-LVM by updating the size
on PersistentVolumeClaim(PVC).
### Non-Goals
diff --git a/design/lvm/snapshot.md b/design/lvm/snapshot.md
index 3bd204a1..acaf3462 100644
--- a/design/lvm/snapshot.md
+++ b/design/lvm/snapshot.md
@@ -1,5 +1,5 @@
---
-title: Snapshot Support for LVM-LocalPV
+title: Snapshot Support for LocalPV-LVM
authors:
- "@pawanpraka1"
owners:
@@ -9,7 +9,7 @@ last-updated: 2021-06-21
status: In Progress
---
-# Snapshot Support for LVM-LocalPV
+# Snapshot Support for LocalPV-LVM
## Table of Contents
diff --git a/design/lvm/storageclass-parameters/allowed_topologies.md b/design/lvm/storageclass-parameters/allowed_topologies.md
index 5adb705f..08a79cf1 100644
--- a/design/lvm/storageclass-parameters/allowed_topologies.md
+++ b/design/lvm/storageclass-parameters/allowed_topologies.md
@@ -35,7 +35,7 @@ This proposal points out workflow details to support allowed topologies.
### Implementation Details
-This feature is natively driven by Kubernetes(for more information about workflow click [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md#volume-topology-aware-scheduling)) and LVM-LocalPV CSI-Driver is a consumer of topology
+This feature is natively driven by Kubernetes (for more information about the workflow, click [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md#volume-topology-aware-scheduling)) and the LocalPV-LVM CSI-Driver is a consumer of the topology
feature.
- During volume provisioning time external-provisioner will read topology information specified
diff --git a/design/lvm/storageclass-parameters/fs_type.md b/design/lvm/storageclass-parameters/fs_type.md
index 78aee1a3..c6e4a877 100644
--- a/design/lvm/storageclass-parameters/fs_type.md
+++ b/design/lvm/storageclass-parameters/fs_type.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV fsType
+title: LocalPV-LVM fsType
authors:
- "@pawanpraka1"
owners:
@@ -9,10 +9,10 @@ last-updated: 2021-06-17
status: Implemented
---
-# LVM-LocalPV fsType Parameter
+# LocalPV-LVM fsType Parameter
## Table of Contents
-- [LVM-LocalPV fsType Parameter](#lvm-localpv-fstype-parameter)
+- [LocalPV-LVM fsType Parameter](#localpv-lvm-fstype-parameter)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
@@ -50,7 +50,7 @@ Kubernetes provides a placeholder in StorageClasss to specify driver & Storage P
supported key-value pairs under the parameters section. K8s registered a key called `fsType`
to specify filesystem.
-- Filesystem information is propagated to LVM-LocalPV CSI Driver during as payload during
+- Filesystem information is propagated to the LocalPV-LVM CSI Driver as a payload during
`NodePublishVolume` gRPC request.
- During `NodePublishVolume` gRPC request CSI driver reads required information(fsType,
volume mode, mount options, and so on) if volume mode is filesystem then driver will
diff --git a/design/lvm/storageclass-parameters/mount_options.md b/design/lvm/storageclass-parameters/mount_options.md
index 5e80e44b..a7817f07 100644
--- a/design/lvm/storageclass-parameters/mount_options.md
+++ b/design/lvm/storageclass-parameters/mount_options.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV Mount Options
+title: LocalPV-LVM Mount Options
authors:
- "@pawanpraka1"
owners:
@@ -9,10 +9,10 @@ last-updated: 2021-06-16
status: Implemented
---
-# LVM-LocalPV Mount Options
+# LocalPV-LVM Mount Options
## Table of Contents
-- [LVM-LocalPV Mount Options](#lvm-localpv-mount-options)
+- [LocalPV-LVM Mount Options](#localpv-lvm-mount-options)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
diff --git a/design/lvm/storageclass-parameters/reclaim_policy.md b/design/lvm/storageclass-parameters/reclaim_policy.md
index bbe0f852..02975ca3 100644
--- a/design/lvm/storageclass-parameters/reclaim_policy.md
+++ b/design/lvm/storageclass-parameters/reclaim_policy.md
@@ -40,7 +40,7 @@ This proposal points out workflow details of volume reclaim policies.
### Implementation Details
-LVM-LocalPV doesn't have any direct dependency over volume reclaim policies moreover
+LocalPV-LVM doesn't have any direct dependency on volume reclaim policies; moreover
these are is a standard Kubernetes storageclass option. Kubernetes supports two kind
of volume policies that are `Retain` & `Delete`. By the name `Retain` states underlying
volume should exist even after deleting PVC whereas `Delete` states underlying volume
diff --git a/design/lvm/storageclass-parameters/shared.md b/design/lvm/storageclass-parameters/shared.md
index 232467b1..1a96d05a 100644
--- a/design/lvm/storageclass-parameters/shared.md
+++ b/design/lvm/storageclass-parameters/shared.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV Shared Volume
+title: LocalPV-LVM Shared Volume
authors:
- "@pawanpraka1"
owners:
@@ -9,10 +9,10 @@ last-updated: 2021-06-16
status: Implemented
---
-# LVM-LocalPV Shared Volume
+# LocalPV-LVM Shared Volume
## Table of Contents
-- [LVM-LocalPV Shared Volume](#lvm-localpv-shared-volume)
+- [LocalPV-LVM Shared Volume](#localpv-lvm-shared-volume)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
diff --git a/design/lvm/storageclass-parameters/thin_provision.md b/design/lvm/storageclass-parameters/thin_provision.md
index e182c1f6..f3074b10 100644
--- a/design/lvm/storageclass-parameters/thin_provision.md
+++ b/design/lvm/storageclass-parameters/thin_provision.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV Thin Provision
+title: LocalPV-LVM Thin Provision
authors:
- "@pawanpraka1"
owners:
@@ -9,10 +9,10 @@ last-updated: 2021-06-16
status: Implemented
---
-# LVM-LocalPV Thin Provisioning
+# LocalPV-LVM Thin Provisioning
## Table of Contents
-- [LVM-LocalPV Thin Provisioning](#lvm-localpv-thin-provisioning)
+- [LocalPV-LVM Thin Provisioning](#localpv-lvm-thin-provisioning)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
diff --git a/design/lvm/storageclass-parameters/vg_pattern.md b/design/lvm/storageclass-parameters/vg_pattern.md
index c3609c64..edb63ec1 100644
--- a/design/lvm/storageclass-parameters/vg_pattern.md
+++ b/design/lvm/storageclass-parameters/vg_pattern.md
@@ -1,5 +1,5 @@
---
-title: LVM-LocalPV VG Pattern
+title: LocalPV-LVM VG Pattern
authors:
- "@pawanpraka1"
owners:
@@ -9,10 +9,10 @@ last-updated: 2021-06-16
status: Implemented
---
-# LVM-LocalPV Support of VG Pattern
+# LocalPV-LVM Support of VG Pattern
## Table of Contents
-- [LVM-LocalPV Support of VG Pattern](#lvm-localpv-support-of-vg-pattern)
+- [LocalPV-LVM Support of VG Pattern](#localpv-lvm-support-of-vg-pattern)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
diff --git a/design/lvm/storageclass-parameters/volume_binding_mode.md b/design/lvm/storageclass-parameters/volume_binding_mode.md
index c9053c83..31d2fe6d 100644
--- a/design/lvm/storageclass-parameters/volume_binding_mode.md
+++ b/design/lvm/storageclass-parameters/volume_binding_mode.md
@@ -41,11 +41,11 @@ This proposal points out workflow details to support volume binding modes.
### Implementation Details
-LVM-LocalPV doesn't have any direct dependency over volumebinding modes moreover these are
+LocalPV-LVM doesn't have any direct dependency on volume binding modes; moreover these are
standard Kubernetes storageclass option. For more information about workflow is
available [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md#delayed-volume-binding).
-LVM-LocalPV honours both types of VolumeBindingModes `Immediate` & `WaitForFirstConsumer`.
+LocalPV-LVM honours both types of VolumeBindingModes `Immediate` & `WaitForFirstConsumer`.
- Configuring `Immediate` informs Kubernetes volume provisioning should be instantiated
right after creation of PersistentVolumeClaim(PVC).
- Configuring `WaitForFirstConsumer` inform Kubernetes volume provisioning should be
diff --git a/design/monitoring/capacity_monitor.md b/design/monitoring/capacity_monitor.md
index 1f78082d..3b555a80 100644
--- a/design/monitoring/capacity_monitor.md
+++ b/design/monitoring/capacity_monitor.md
@@ -43,7 +43,7 @@ status: In-progress
- [Alternatives](#alternatives)
## Summary
-This proposal charts out the design details to implement monitoring for doing effective capacity management on nodes having LVM-LocalPV Volumes.
+This proposal charts out the design details to implement monitoring for doing effective capacity management on nodes having LocalPV-LVM Volumes.
## Motivation
Platform SREs must be able to easily query the capacity details at per node level for checking the utilization and planning purposes.
@@ -138,7 +138,7 @@ This [document](https://docs.google.com/document/d/1Nm84UJsRKlOFtxY9eSGZGDwUSJWt
##### Custom Exporter
Node-exporter is able to fetch all metrics related to Logical Volumes. However, there is currently no in-built support for collecting metrics related to Volume Groups. We need a custom-exporter to scrape VG metrics like vg_size, vg_used and vg_free.
This [document](https://docs.google.com/document/d/1Lk__5J4MDa1fEgYFWFPCx1_Guo3Ai2EnnY1e39N7_gA/edit) describes the approach for custom-exporter deployment.
-![LVM-LocalPV-CSI-Plugin](https://user-images.githubusercontent.com/7765078/122904191-bcf4fc00-d36d-11eb-8219-1e0a475728da.png)
+![LocalPV-LVM-CSI-Plugin](https://user-images.githubusercontent.com/7765078/122904191-bcf4fc00-d36d-11eb-8219-1e0a475728da.png)
### Sample Dashboards
Below are sample Grafana dashboards:
diff --git a/docs/developer-setup.md b/docs/developer-setup.md
index 8471107d..15271a1d 100644
--- a/docs/developer-setup.md
+++ b/docs/developer-setup.md
@@ -2,7 +2,7 @@
## Prerequisites
-* You have Go 1.14.7 installed on your local host/development machine.
+* You have Go 1.19 installed on your local host/development machine.
* You have Docker installed on your local host/development machine. Docker is required for building lvm-driver container images and to push them into a Kubernetes cluster for testing.
* You have `kubectl` installed. For running integration tests, you can create a Minikube cluster on local host/development machine. Don't worry if you don't have access to the Kubernetes cluster, raising a PR with the lvm-localpv repository will run integration tests for your changes against a Minikube cluster.
diff --git a/docs/faq.md b/docs/faq.md
index 2cecdba5..d4cc4d74 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -2,7 +2,7 @@
To add custom topology key:
* Label the nodes with the required key and value.
-* Set env variables in the LVM driver daemonset yaml(openebs-lvm-node), if already deployed, you can edit the daemonSet directly.
+* Set env variables in the LVM driver daemonset yaml (openebs-lvm-localpv-node); if it is already deployed, you can edit the daemonSet directly.
* "openebs.io/nodename" has been added as default topology key.
* Create storageclass with above specific labels keys.
@@ -16,7 +16,7 @@ NAME STATUS ROLES AGE VERSION LABELS
k8s-node-1 Ready worker 16d v1.17.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true,openebs.io/rack=rack1
-$ kubectl get ds -n kube-system openebs-lvm-node -o yaml
+$ kubectl get ds -n openebs openebs-lvm-localpv-node -o yaml
...
env:
- name: OPENEBS_NODE_ID
@@ -35,17 +35,17 @@ env:
```
It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node.
-Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LVM-LocalPV CSI driver daemon sets (openebs-lvm-node).
+Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from the env "ALLOWED_TOPOLOGIES" and add them as the supported topology keys. If the driver is already installed and you want to add new topology information, you can edit the LocalPV-LVM CSI driver daemon sets (openebs-lvm-localpv-node).
```sh
-$ kubectl get pods -n kube-system -l role=openebs-lvm
+$ kubectl get pods -n openebs -l role=openebs-lvm
NAME READY STATUS RESTARTS AGE
openebs-lvm-controller-0 4/4 Running 0 5h28m
-openebs-lvm-node-4d94n 2/2 Running 0 5h28m
-openebs-lvm-node-gssh8 2/2 Running 0 5h28m
-openebs-lvm-node-twmx8 2/2 Running 0 5h28m
+openebs-lvm-localpv-node-4d94n 2/2 Running 0 5h28m
+openebs-lvm-localpv-node-gssh8 2/2 Running 0 5h28m
+openebs-lvm-localpv-node-twmx8 2/2 Running 0 5h28m
```
We can verify that key has been registered successfully with the LVM LocalPV CSI Driver by checking the CSI node object yaml :-
diff --git a/docs/persistentvolumeclaim.md b/docs/persistentvolumeclaim.md
index 082723c5..c5043b7f 100644
--- a/docs/persistentvolumeclaim.md
+++ b/docs/persistentvolumeclaim.md
@@ -74,7 +74,7 @@ Following matrix shows supported PersistentVolumeClaim parameters for lvm-localp
### AccessMode
-LVM-LocalPV supports only `ReadWriteOnce` access mode i.e volume can be mounted as read-write by a single node. AccessMode is a required field, if the field is unspecified then it will lead to a creation error. For more information about access modes workflow click [here](../design/lvm/persistent-volume-claim/access_mode.md).
+LocalPV-LVM supports only the `ReadWriteOnce` access mode, i.e. the volume can be mounted as read-write by a single node. AccessMode is a required field; if the field is unspecified, it will lead to a creation error. For more information about the access modes workflow click [here](../design/lvm/persistent-volume-claim/access_mode.md).
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
@@ -127,7 +127,7 @@ spec:
### VolumeMode (Optional)
-LVM-LocalPV supports two kind of volume modes(Defaults to Filesystem mode):
+LocalPV-LVM supports two kinds of volume modes (defaults to Filesystem mode):
- Block (Block mode can be used in a case where application itself maintains filesystem)
- Filesystem (Application which requires filesystem as a prerequisite)
Note: If unspecified defaults to **Filesystem** mode. More information about workflow
diff --git a/docs/raw-block-volume.md b/docs/raw-block-volume.md
index 0936c498..a08a14df 100644
--- a/docs/raw-block-volume.md
+++ b/docs/raw-block-volume.md
@@ -32,7 +32,7 @@ spec:
storage: 5Gi
```
-Now we can deploy the application using the above PVC, the LVM-LocalPV driver will attach a Raw block device at the given mount path. We can provide the device path using volumeDevices in the application yaml :-
+Now we can deploy the application using the above PVC; the LocalPV-LVM driver will attach a raw block device at the given path. We can provide the device path using volumeDevices in the application yaml :-
```yaml
apiVersion: apps/v1
diff --git a/docs/resize.md b/docs/resize.md
index c448f97a..423f1983 100644
--- a/docs/resize.md
+++ b/docs/resize.md
@@ -1,4 +1,4 @@
-## LVM-LocalPV Volume Resize
+## LocalPV-LVM Volume Resize
We can resize the volume by updating the PVC yaml to the desired size and apply it. The LVM Driver will take care of expanding the volume via lvextend command using "-r" option which will resize the filesystem.
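+
+For example (a sketch; the PVC name is illustrative), increase `spec.resources.requests.storage` on the PVC and re-apply it, or patch it directly:
+```
+$ kubectl patch pvc csi-lvmpv -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'   # "csi-lvmpv" is an example PVC name
+```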
diff --git a/docs/storageclasses.md b/docs/storageclasses.md
index dd01cdb8..12c56512 100644
--- a/docs/storageclasses.md
+++ b/docs/storageclasses.md
@@ -147,7 +147,7 @@ parameters:
### MountOptions (Optional)
-Volumes that are provisioned via LVM-LocalPV will use the mount options specified in storageclass during volume mounting time inside an application. If a field is unspecified/specified, `-o default` option will be added to mount the volume. For more information about mount options workflow click [here](../design/lvm/storageclass-parameters/mount_options.md).
+Volumes that are provisioned via LocalPV-LVM will use the mount options specified in the storageclass when the volume is mounted inside an application. If the field is unspecified, the `-o default` option will be added to mount the volume. For more information about the mount options workflow click [here](../design/lvm/storageclass-parameters/mount_options.md).
**Note**: Mount options are not validated. If mount options are invalid, then volume mount fails.
```yaml
@@ -165,11 +165,11 @@ parameters:
### Parameters
-LVM-LocalPV storageclass supports various parameters for different use cases. Following are the supported parameters
+LocalPV-LVM storageclass supports various parameters for different use cases. The following are the supported parameters:
- #### FsType (Optional)
- Admin can specify filesystem in storageclass. LVM-LocalPV CSI-Driver will format block device with specified filesystem and mount in application pod. If fsType is not specified defaults to `ext4` filesystem. For more information about filesystem type workflow click [here](../design/lvm/storageclass-parameters/fs_type.md).
+ Admin can specify a filesystem in the storageclass. The LocalPV-LVM CSI-Driver will format the block device with the specified filesystem and mount it in the application pod. If fsType is not specified, it defaults to the `ext4` filesystem. For more information about the filesystem type workflow click [here](../design/lvm/storageclass-parameters/fs_type.md).
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -321,7 +321,7 @@ allowedTopologies:
- node-2
```
-At the same time, you must set env variables in the LVM-LocalPV CSI driver daemon sets (openebs-lvm-node) so that it can pick the node label as the supported topology. It add "openebs.io/nodename" as default topology key. If the key doesn't exist in the node labels when the CSI LVM driver register, the key will not add to the topologyKeys. Set more than one keys separated by commas.
+At the same time, you must set env variables in the LocalPV-LVM CSI driver daemon sets (openebs-lvm-localpv-node) so that it can pick the node label as the supported topology. It adds "openebs.io/nodename" as the default topology key. If the key doesn't exist in the node labels when the CSI LVM driver registers, the key will not be added to the topologyKeys. More than one key can be set, separated by commas.
```yaml
env:
@@ -369,12 +369,12 @@ spec:
If you want to change topology keys, just set new env(ALLOWED_TOPOLOGIES) .Check [faq](./faq.md#1-how-to-add-custom-topology-key) for more details.
```
-$ kubectl edit ds -n kube-system openebs-lvm-node
+$ kubectl edit ds -n openebs openebs-lvm-localpv-node
```
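+
+Alternatively (equivalent effect; the topology keys shown are illustrative), the env can be set in a single command:
+```
+$ kubectl set env ds/openebs-lvm-localpv-node -n openebs ALLOWED_TOPOLOGIES=kubernetes.io/hostname,openebs.io/rack
+```
+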
Here we can have volume group of name “lvmvg” created on the nvme disks and want to use this high performing LVM volume group for the applications that need higher IOPS. We can use the above SorageClass to create the PVC and deploy the application using that.
-The LVM-LocalPV driver will create the Volume in the volume group “lvmvg” present on the node with fewer of volumes provisioned among the given node list. In the above StorageClass, if there provisioned volumes on node-1 are less, it will create the volume on node-1 only. Alternatively, we can use `volumeBindingMode: WaitForFirstConsumer` to let the k8s select the node where the volume should be provisioned.
+The LocalPV-LVM driver will create the volume in the volume group “lvmvg” present on the node with the fewest volumes provisioned among the given node list. In the above StorageClass, if node-1 has fewer provisioned volumes, it will create the volume on node-1 only. Alternatively, we can use `volumeBindingMode: WaitForFirstConsumer` to let the k8s select the node where the volume should be provisioned.
The problem with the above StorageClass is that it works fine if the number of nodes is less, but if the number of nodes is huge, it is cumbersome to list all the nodes like this. In that case, what we can do is, we can label all the similar nodes using the same key value and use that label to create the StorageClass.
@@ -385,7 +385,7 @@ user@k8s-master:~ $ kubectl label node k8s-node-1 openebs.io/lvmvg=nvme
node/k8s-node-1 labeled
```
-Add "openebs.io/lvmvg" to the LVM-LocalPV CSI driver daemon sets env(ALLOWED_TOPOLOGIES). Now, we can create the StorageClass like this:
+Add "openebs.io/lvmvg" to the LocalPV-LVM CSI driver daemon sets env(ALLOWED_TOPOLOGIES). Now, we can create the StorageClass like this:
```yaml
apiVersion: storage.k8s.io/v1
diff --git a/e2e-tests/experiments/functional/lvmpv-custom-topology/test.yml b/e2e-tests/experiments/functional/lvmpv-custom-topology/test.yml
index 2b4b7dba..7b900494 100644
--- a/e2e-tests/experiments/functional/lvmpv-custom-topology/test.yml
+++ b/e2e-tests/experiments/functional/lvmpv-custom-topology/test.yml
@@ -175,7 +175,7 @@
failed_when: "pvc_status.stdout != 'Pending'"
- name: Set the ALLOWED_TOPOLOGIES env in lvm node-daemonset with test-specific topology key
- shell: kubectl set env daemonset/openebs-lvm-node -n kube-system ALLOWED_TOPOLOGIES=kubernetes.io/hostname,{{ lkey }}
+ shell: kubectl set env daemonset/openebs-lvm-localpv-node -n kube-system ALLOWED_TOPOLOGIES=kubernetes.io/hostname,{{ lkey }}
args:
executable: /bin/bash
register: topology_status
diff --git a/e2e-tests/experiments/upgrade-lvm-localpv/test.yml b/e2e-tests/experiments/upgrade-lvm-localpv/test.yml
index 760ba226..55204047 100644
--- a/e2e-tests/experiments/upgrade-lvm-localpv/test.yml
+++ b/e2e-tests/experiments/upgrade-lvm-localpv/test.yml
@@ -141,7 +141,7 @@
- name: Verify that lvm-node agent daemonset image is upgraded
shell: >
- kubectl get ds openebs-lvm-node -n kube-system
+ kubectl get ds openebs-lvm-localpv-node -n kube-system
-o jsonpath='{.spec.template.spec.containers[?(@.name=="openebs-lvm-plugin")].image}'
args:
executable: /bin/bash
diff --git a/legacy-upgrade/README.md b/legacy-upgrade/README.md
new file mode 100644
index 00000000..99575e7d
--- /dev/null
+++ b/legacy-upgrade/README.md
@@ -0,0 +1,89 @@
+### *Prerequisite*
+
+Please do not provision/deprovision any volumes during the upgrade. If that cannot be controlled, scale down the openebs-lvm-controller stateful set to zero replicas, which will pause all provisioning/deprovisioning requests. Once the upgrade is done, scale the controller pod back up and volume provisioning/deprovisioning will resume on the upgraded system.
+
+```
+$ kubectl edit deploy openebs-lvm-controller -n kube-system
+
+```
+And set replicas to zero:
+
+```
+spec:
+ podManagementPolicy: OrderedReady
+ *replicas: 0*
+ revisionHistoryLimit: 10
+```
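+
+Alternatively (a one-step shortcut, assuming the controller runs in the `kube-system` namespace as shown above), the replicas can be set directly:
+```
+$ kubectl scale deploy openebs-lvm-controller -n kube-system --replicas=0
+```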
+
+### *upgrade the driver*
+
+We can upgrade the lvm driver to the latest stable release version by running the following command:
+
+```
+$ kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
+```
+
+Please note that if you were using an LVM_NAMESPACE env value other than `openebs` (the default value for the namespace in which lvm-localpv CRs are created), don't forget to update that value under the LVM_NAMESPACE env in the lvm-operator yaml file.
+
+For upgrading the driver to any particular release, download the lvm-operator yaml from the desired branch and update the lvm-driver image tag to the corresponding tag. For example, to upgrade the lvm-driver to version 0.7.0, follow these steps:
+
+1. Download operator yaml from specific branch
+```
+wget https://raw.githubusercontent.com/openebs/lvm-localpv/v0.7.x/deploy/lvm-operator.yaml
+```
+
+2. Update the lvm-driver image tag. We have to update this in two places:
+
+one at the `openebs-lvm-plugin` container image in the lvm-controller deployment:
+```
+ - name: openebs-lvm-plugin
+ image: openebs/lvm-driver:ci // update it to openebs/lvm-driver:0.7.0
+ imagePullPolicy: IfNotPresent
+ env:
+ - name: OPENEBS_CONTROLLER_DRIVER
+ value: controller
+ - name: OPENEBS_CSI_ENDPOINT
+ value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
+ - name: LVM_NAMESPACE
+ value: openebs
+```
+and the other at the `openebs-lvm-plugin` container in the lvm-node daemonset:
+```
+ - name: openebs-lvm-plugin
+ securityContext:
+ privileged: true
+ allowPrivilegeEscalation: true
+ image: openebs/lvm-driver:ci // Update it to openebs/lvm-driver:0.7.0
+ imagePullPolicy: IfNotPresent
+ args:
+ - "--nodeid=$(OPENEBS_NODE_ID)"
+ - "--endpoint=$(OPENEBS_CSI_ENDPOINT)"
+ - "--plugin=$(OPENEBS_NODE_DRIVER)"
+ - "--listen-address=$(METRICS_LISTEN_ADDRESS)"
+```
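+
+As a convenience (a sketch; adjust the filename and target tag as needed), both occurrences can be updated in one step:
+```
+sed -i 's|openebs/lvm-driver:ci|openebs/lvm-driver:0.7.0|g' lvm-operator.yaml
+```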
+
+3. If you were using lvm-controller in high-availability (HA) mode, make sure to update deployment replicas. By default it is set to one (1).
+
+```
+spec:
+ selector:
+ matchLabels:
+ app: openebs-lvm-controller
+ role: openebs-lvm
+ serviceName: "openebs-lvm"
+ replicas: 1 // update it to desired lvm-controller replicas.
+```
+
+4. Now we can apply the lvm-operator.yaml file to upgrade the lvm-driver to version 0.7.0.
+
+### *Note*
+
+While upgrading the lvm-driver from v0.8.0 to a later version by applying the lvm-operator file, we may get this error:
+```
+The CSIDriver "local.csi.openebs.io" is invalid: spec.storageCapacity: Invalid value: true: field is immutable
+```
+It occurs due to the newly added field `storageCapacity: true` in the CSI driver spec. In that case, first delete the CSIDriver by running this command:
+```
+$ kubectl delete csidriver local.csi.openebs.io
+```
+Now we can apply the operator yaml file again.
diff --git a/upgrade/README.md b/upgrade/README.md
index 99575e7d..5b615484 100644
--- a/upgrade/README.md
+++ b/upgrade/README.md
@@ -1,89 +1 @@
-### *Prerequisite*
-
-Please do not provision/deprovision any volumes during the upgrade, if we can not control it, then we can scale down the openebs-lvm-controller stateful set to zero replica which will pause all the provisioning/deprovisioning request. And once upgrade is done, we can scale up the controller pod and then volume provisioning/deprovisioning will resume on the upgraded system.
-
-```
-$ kubectl edit deploy openebs-lvm-controller -n kube-system
-
-```
-And set replicas to zero :
-
-```
-spec:
- podManagementPolicy: OrderedReady
- *replicas: 0*
- revisionHistoryLimit: 10
-```
-
-### *upgrade the driver*
-
-We can upgrade the lvm driver to the latest stable release version by apply the following command:
-
-```
-$ kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
-```
-
-Please note that if you were using the LVM_NAMESPACE env value other than `openebs` (default value) in which lvm-localpv CR's are created, don't forget to update that value in lvm-operator yaml file under LVM_NAMESPACE env.
-
-For upgrading the driver to any particular release, download the lvm-operator yaml from the desired branch and update the lvm-driver image tag to the corresponding tag. For e.g, to upgrade the lvm-driver to 0.7.0 version, follow these steps:
-
-1. Download operator yaml from specific branch
-```
-wget https://raw.githubusercontent.com/openebs/lvm-localpv/v0.7.x/deploy/lvm-operator.yaml
-```
-
-2. Update the lvm-driver image tag. We have to update this at two places,
-
-one at `openebs-lvm-plugin` container image in lvm-controller deployment
-```
- - name: openebs-lvm-plugin
- image: openebs/lvm-driver:ci // update it to openebs/lvm-driver:0.7.0
- imagePullPolicy: IfNotPresent
- env:
- - name: OPENEBS_CONTROLLER_DRIVER
- value: controller
- - name: OPENEBS_CSI_ENDPOINT
- value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
- - name: LVM_NAMESPACE
- value: openebs
-```
-and other one at `openebs-lvm-plugin` container in lvm-node daemonset.
-```
- - name: openebs-lvm-plugin
- securityContext:
- privileged: true
- allowPrivilegeEscalation: true
- image: openebs/lvm-driver:ci // Update it to openebs/lvm-driver:0.7.0
- imagePullPolicy: IfNotPresent
- args:
- - "--nodeid=$(OPENEBS_NODE_ID)"
- - "--endpoint=$(OPENEBS_CSI_ENDPOINT)"
- - "--plugin=$(OPENEBS_NODE_DRIVER)"
- - "--listen-address=$(METRICS_LISTEN_ADDRESS)"
-```
-
-3. If you were using lvm-controller in high-availability (HA) mode, make sure to update deployment replicas. By default it is set to one (1).
-
-```
-spec:
- selector:
- matchLabels:
- app: openebs-lvm-controller
- role: openebs-lvm
- serviceName: "openebs-lvm"
- replicas: 1 // update it to desired lvm-controller replicas.
-```
-
-4. Now we can apply the lvm-operator.yaml file to upgrade lvm-driver to 0.7.0 version.
-
-### *Note*
-
-While upgrading lvm-driver from v0.8.0 to later version by applying lvm-operator file, we may get this error.
-```
-The CSIDriver "local.csi.openebs.io" is invalid: spec.storageCapacity: Invalid value: true: field is immutable
-```
-It occurs due to newly added field `storageCapacity: true` in csi driver spec. In that case, first delete the csi-driver by running this command:
-```
-$ kubectl delete csidriver local.csi.openebs.io
-```
-Now we can again apply the operator yaml file.
+Please refer to the documentation at https://openebs.io/docs/user-guides/upgrade.
\ No newline at end of file