OpenEBS Local PV is a CAS engine that creates persistent volumes using either local disks or host paths on the worker nodes. With this CAS engine, performance is equivalent to that of the local disk or the file system (host path) on which the volumes are created. Many cloud native applications do not require advanced storage features like replication, snapshots, and clones, as they handle those themselves. Such applications require access to managed disks as persistent volumes.
Benefits of OpenEBS Local PVs
OpenEBS Local PVs are analogous to Kubernetes LocalPV. In addition, OpenEBS LocalPVs have the following benefits.
- Local PVs are provisioned dynamically by the OpenEBS Local PV provisioner. When a Local PV is provisioned with the default StorageClass and the storage type is:
  - `hostpath`, the default `BasePath` is created dynamically on the node and mapped to the Local PV.
  - `device`, one of the matching BlockDevices on the node is claimed and mapped to the Local PV.
- BlockDevices for Local PVs are managed by OpenEBS NDM. Disk I/O metrics of managed devices can also be obtained with the help of NDM.
- Provisioning of Local PVs follows Kubernetes standards. Admin users create StorageClasses to enforce the storage type (`device` or `hostpath`) and apply additional control through RBAC policies.
- By specifying a node selector in the application spec YAML, the application pods can be scheduled on specific nodes. After the application pod is scheduled, the OpenEBS Local PV is deployed on the same node. This guarantees that the pod is always rescheduled on the same node, retaining access to its data at all times.
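As a sketch, a pod pinned to a specific node via a node selector while mounting a Local PV might look like the following (the pod name, node label value, and PVC name `demo-pvc` are illustrative assumptions, not values from this document):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  # Illustrative: pin the pod to one node by its hostname label so the
  # Local PV is provisioned on, and stays on, that node.
  nodeSelector:
    kubernetes.io/hostname: worker-node-1
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: local-storage
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: demo-pvc   # hypothetical PVC bound to an OpenEBS Local PV
```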
How to use OpenEBS Local PVs
OpenEBS creates two StorageClasses for Local PVs by default: `openebs-hostpath` and `openebs-device`. For simple provisioning of OpenEBS Local PVs, these default StorageClasses can be used. More details can be found here.
End users or developers provision OpenEBS Local PVs like any other PV, by creating a PVC using a StorageClass provided by the admin user. The StorageClass has `volumeBindingMode: WaitForFirstConsumer`, which delays volume binding until the application pod is scheduled on a node.
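For example, a minimal PVC against the default `openebs-hostpath` StorageClass could look like this (the PVC name and requested size are illustrative assumptions):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc            # illustrative name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # illustrative size
```

Because of `WaitForFirstConsumer`, this PVC stays Pending until a pod that uses it is scheduled.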
OpenEBS Local PV based on device
An admin user can create a customized StorageClass using the following sample configuration.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
      - name: FSType
        value: ext4
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
When a PVC is created using the above StorageClass, the OpenEBS Local PV provisioner uses the NDM operator to claim a matching BlockDevice from the worker node where the application pod is scheduled.
The Local PV volume is provisioned with a filesystem by default. While provisioning the Local PV, Kubelet formats the block device with the filesystem specified as `FSType` under `cas.openebs.io/config` in `metadata.annotations` of the StorageClass. Currently supported filesystems are `ext4` and `xfs`. If no `FSType` is specified, Kubelet formats the BlockDevice as `ext4` by default.
From OpenEBS 1.5, Local PV volumes have Raw Block Volume support. A Raw Block Volume can be requested by setting `volumeMode` to `Block` in the PVC spec. The sample YAML spec of a PVC to provision a Local PV as a Raw Block Volume can be found here.
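A hedged sketch of such a PVC (the name and size are illustrative assumptions) could look like:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-block-pvc      # illustrative name
spec:
  storageClassName: openebs-device
  volumeMode: Block         # request a raw block device instead of a filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # illustrative size
```

With `volumeMode: Block`, the consuming pod references the volume via `volumeDevices` (a device path) rather than `volumeMounts`.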
For provisioning a Local PV using a BlockDevice attached to a node, the BlockDevice should be in one of the following states:
- The user has attached the BlockDevice, formatted it, and mounted it. That is, the BlockDevice is already formatted and mounted on the worker node.
  - For example: Local SSD in GKE.
- The user has attached the BlockDevice, leaving it unformatted and unmounted. That is, the BlockDevice is attached to the worker node without any file system.
  - For example: GPD in GKE.
- The user has attached the block device, but the device has only a device path and no dev links.
  - For example: VM with VMDK disks or an AWS node with EBS.
OpenEBS Local PV based on hostpath
The admin user creates a customized StorageClass using the following sample configuration.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: BasePath
        value: "/var/openebs/local"
      - name: StorageType
        value: "hostpath"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
When a PVC is created using the above StorageClass, the OpenEBS Local PV provisioner creates a new subdirectory inside the `BasePath` and maps it to the PV.
Note: If the default `BasePath` needs to be changed to a different hostpath, the specified hostpath (directory) must already be present on the node.
When to use OpenEBS Local PVs
- When high performance is needed by applications that manage their own replication, data protection, and other features such as snapshots and clones.
- When local disks need to be managed dynamically and monitored for early warnings of failure.
When not to use OpenEBS Local PVs
- When applications expect replication from storage.
- When the volume size may need to be changed dynamically but the underlying disk is not resizable.
Limitations (or roadmap items) of OpenEBS Local PVs
- The size of a Local PV cannot be increased dynamically. LVM-like functionality inside Local PVs is a potential roadmap feature.
- Disk quotas are not enforced by Local PV. An underlying device or hostpath can hold more data than requested by a PVC or StorageClass. Enforcing capacity and PVC resource quotas on the local disks and host paths is a roadmap feature.
- SMART statistics for the managed disks are also a potential roadmap feature.