OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard PVC interface.
Local volumes can be used without manually scheduling Pods to nodes, because the system is aware of the volume node’s constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.
Local volumes can only be used as a statically created persistent volume.
The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.
Access to the OpenShift Container Platform web console or command-line interface (CLI).
Create the local-storage project:
$ oc new-project local-storage
Optional: Allow local storage creation on master and infrastructure nodes.
You might want to use the Local Storage Operator to create volumes on master and infrastructure nodes, and not just worker nodes, to support components such as logging and monitoring.
To allow local storage creation on master and infrastructure nodes, add tolerations to the DaemonSets by entering the following commands:
$ oc patch ds local-storage-local-diskmaker -n local-storage -p '{"spec": {"template": {"spec": {"tolerations":[{"operator": "Exists"}]}}}}'
$ oc patch ds local-storage-local-provisioner -n local-storage -p '{"spec": {"template": {"spec": {"tolerations":[{"operator": "Exists"}]}}}}'
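Optionally, you can confirm that the tolerations were applied by reading them back from the patched DaemonSets. A minimal check, assuming the DaemonSets already exist in the local-storage project, might look like:
$ oc get ds local-storage-local-diskmaker -n local-storage -o jsonpath='{.spec.template.spec.tolerations}'
$ oc get ds local-storage-local-provisioner -n local-storage -o jsonpath='{.spec.template.spec.tolerations}'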
To install the Local Storage Operator from the web console, follow these steps:
Log in to the OpenShift Container Platform web console.
Navigate to Operators → OperatorHub.
Type Local Storage into the filter box to locate the Local Storage Operator.
Click Install.
On the Create Operator Subscription page, select A specific namespace on the cluster. Select local-storage from the drop-down menu.
Adjust the values for Update Channel and Approval Strategy to the values that you want.
Click Subscribe.
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.
Install the Local Storage Operator from the CLI.
Create an object YAML file that defines a Namespace, OperatorGroup, and Subscription for the Local Storage Operator, such as local-storage.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: local-storage
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: local-storage
spec:
  targetNamespaces:
    - local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: local-storage
spec:
  channel: "{product-version}" (1)
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
1 | Edit this field to match your OpenShift Container Platform release. |
Create the Local Storage Operator object by entering the following command:
$ oc apply -f local-storage.yaml
At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
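As a quick sanity check, you can list the APIs that the Operator provides; the LocalVolume resource used later in this procedure should appear under the local.storage.openshift.io group. One way to check, shown here as an example rather than the only option, is:
$ oc api-resources --api-group=local.storage.openshift.io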
Verify local storage installation by checking that all Pods and the Local Storage Operator have been created:
Check that all the required Pods have been created:
$ oc -n local-storage get pods

NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m
Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the local-storage project:
$ oc get csvs -n local-storage

NAME                                         DISPLAY         VERSION               REPLACES   PHASE
local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded
After all checks have passed, the Local Storage Operator is installed successfully.
Local volumes cannot be created by dynamic provisioning. Instead, PersistentVolumes must be created by the Local Storage Operator. The provisioner looks for any devices, both file system and block volumes, at the paths specified in the defined resource.
The Local Storage Operator is installed.
Local disks are attached to the OpenShift Container Platform nodes.
Create the local volume resource. This must define the nodes and paths to the local volumes.
Do not use different StorageClass names for the same device. Doing so will create multiple persistent volumes (PVs).
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage" (1)
spec:
  nodeSelector: (2)
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-140-183
          - ip-10-0-158-139
          - ip-10-0-164-33
  storageClassDevices:
    - storageClassName: "local-sc"
      volumeMode: Filesystem (3)
      fsType: xfs (4)
      devicePaths: (5)
        - /path/to/device (6)
1 | The namespace where the Local Storage Operator is installed. |
2 | Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node host names, obtained from oc get node. If a value is not defined, the Local Storage Operator attempts to find matching disks on all available nodes. |
3 | The volume mode, either Filesystem or Block, that defines the type of the local volumes. |
4 | The file system that is created when the local volume is mounted for the first time. |
5 | The path containing a list of local storage devices to choose from. |
6 | Replace this value with the actual filepath to your local disks, such as /dev/xvdg. PVs are created for these local disks when the provisioner is deployed successfully. |
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage" (1)
spec:
  nodeSelector: (2)
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-136-143
          - ip-10-0-140-255
          - ip-10-0-144-180
  storageClassDevices:
    - storageClassName: "localblock-sc"
      volumeMode: Block (3)
      devicePaths: (4)
        - /path/to/device (5)
1 | The namespace where the Local Storage Operator is installed. |
2 | Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node host names, obtained from oc get node. If a value is not defined, the Local Storage Operator attempts to find matching disks on all available nodes. |
3 | The volume mode, either Filesystem or Block, that defines the type of the local volumes. |
4 | The path containing a list of local storage devices to choose from. |
5 | Replace this value with the actual filepath to your local disks, such as /dev/xvdg. PVs are created for these local disks when the provisioner is deployed successfully. |
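To find the host name values to use in the nodeSelector, you can list the kubernetes.io/hostname label on your nodes. One way to do this, shown here as an example rather than the only option, is:
$ oc get nodes -L kubernetes.io/hostname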
Create the local volume resource in your OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-volume>.yaml
Verify that the provisioner and the corresponding DaemonSets were created:
$ oc get all -n local-storage

NAME                                          READY   STATUS    RESTARTS   AGE
pod/local-disks-local-provisioner-h97hj       1/1     Running   0          46m
pod/local-disks-local-provisioner-j4mnn       1/1     Running   0          46m
pod/local-disks-local-provisioner-kbdnx       1/1     Running   0          46m
pod/local-disks-local-diskmaker-ldldw         1/1     Running   0          46m
pod/local-disks-local-diskmaker-lvrv4         1/1     Running   0          46m
pod/local-disks-local-diskmaker-phxdq         1/1     Running   0          46m
pod/local-storage-operator-54564d9988-vxvhx   1/1     Running   0          47m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/local-storage-operator   ClusterIP   172.30.49.90   <none>        60000/TCP   47m

NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/local-disks-local-provisioner    3         3         3       3            3           <none>          46m
daemonset.apps/local-disks-local-diskmaker      3         3         3       3            3           <none>          46m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-storage-operator   1/1     1            1           47m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-storage-operator-54564d9988   1         1         1       47m
Note the desired and current number of DaemonSet processes. If the desired count is 0, it indicates that the label selectors were invalid.
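If the desired count is 0, you can check whether the nodeSelector actually matches any nodes by querying the same label that the LocalVolume resource uses, for example with one of the host names from the earlier example:
$ oc get nodes -l kubernetes.io/hostname=ip-10-0-140-183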
Verify that the PersistentVolumes were created:
$ oc get pv

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
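To see which node and device a given PV maps to, you can describe one of the PVs from the previous output; the node affinity and local path should be visible in the output:
$ oc describe pv local-pv-1cec77cf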
Local volumes must be statically created as a PersistentVolumeClaim (PVC) to be accessed by the Pod.
PersistentVolumes have been created using the local volume provisioner.
Create the PVC using the corresponding StorageClass:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc-name (1)
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem (2)
  resources:
    requests:
      storage: 100Gi (3)
  storageClassName: local-sc (4)
1 | Name of the PVC. |
2 | The type of the PVC. Defaults to Filesystem. |
3 | The amount of storage available to the PVC. |
4 | Name of the StorageClass required by the claim. |
Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pvc>.yaml
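The example above requests a file system volume. If you created a LocalVolume with volumeMode: Block, the claim must also request a block volume. The following is a minimal sketch, assuming the localblock-sc StorageClass from the earlier example and a hypothetical claim name of local-block-pvc:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-block-pvc          # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block              # must match the volumeMode of the LocalVolume
  resources:
    requests:
      storage: 100Gi
  storageClassName: localblock-sc   # StorageClass from the Block example above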
After a local volume has been mapped to a PersistentVolumeClaim (PVC), it can be specified inside of a resource.
A PVC exists in the same namespace.
Include the defined claim in the resource’s Spec. The following example declares the PVC inside a Pod:
apiVersion: v1
kind: Pod
spec:
  ...
  containers:
    volumeMounts:
    - name: localpvc (1)
      mountPath: "/data" (2)
  volumes:
  - name: localpvc
    persistentVolumeClaim:
      claimName: localpvc (3)
1 | Name of the volume to mount. |
2 | Path inside the Pod where the volume is mounted. |
3 | Name of the existing PVC to use. |
Create the resource in the OpenShift Container Platform cluster, specifying the file you just created:
$ oc create -f <local-pod>.yaml
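A Pod consumes a raw block claim through volumeDevices rather than volumeMounts. The following is a minimal sketch, assuming the hypothetical local-block-pvc claim from the earlier sketch; the Pod name, container name, image, and device path are illustrative only:
apiVersion: v1
kind: Pod
metadata:
  name: local-block-pod              # hypothetical Pod name
spec:
  containers:
  - name: app                        # illustrative container name
    image: registry.example.com/app:latest   # illustrative image
    volumeDevices:                   # raw block devices are attached with volumeDevices, not volumeMounts
    - name: localblockpvc
      devicePath: /dev/xvda          # illustrative device path exposed inside the container
  volumes:
  - name: localblockpvc
    persistentVolumeClaim:
      claimName: local-block-pvc     # the Block-mode PVC sketched earlier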
Occasionally, local volumes must be deleted. While removing the entry in the LocalVolume resource and deleting the PersistentVolume is typically enough, if you want to re-use the same device path or have it managed by a different StorageClass, then additional steps are needed.
The following procedure involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.
The PersistentVolume must be in a Released or Available state.
Deleting a PersistentVolume that is still in use can result in data loss or corruption.
Edit the previously created LocalVolume to remove any unwanted disks.
Edit the cluster resource:
$ oc edit localvolume <name> -n local-storage
Navigate to the lines under devicePaths, and delete any that represent unwanted disks.
Delete any PersistentVolumes created.
$ oc delete pv <pv-name>
Delete any symlinks on the node.
Create a debug pod on the node:
$ oc debug node/<node-name>
Change your root directory to the host:
$ chroot /host
Navigate to the directory containing the local volume symlinks.
$ cd /mnt/local-storage/<sc-name> (1)
1 | The name of the StorageClass used to create the local volumes. |
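Optionally, list the directory contents first to identify the symlink that corresponds to the removed device:
$ ls -l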
Delete the symlink belonging to the removed device.
$ rm <symlink>
To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the local-storage project.
Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. |
Access to the OpenShift Container Platform web console.
Delete any local volume resources in the project:
$ oc delete localvolume --all --all-namespaces
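You can optionally confirm that no LocalVolume resources remain before removing the Operator:
$ oc get localvolume --all-namespaces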
Uninstall the Local Storage Operator from the web console.
Log in to the OpenShift Container Platform web console.
Navigate to Operators → Installed Operators.
Type Local Storage into the filter box to locate the Local Storage Operator.
Click the Options menu at the end of the Local Storage Operator.
Click Uninstall Operator.
Click Remove in the window that appears.
The PVs created by the Local Storage Operator will remain in the cluster until deleted. Once these volumes are no longer in use, delete them by running the following command:
$ oc delete pv <pv-name>
Delete the local-storage project:
$ oc delete project local-storage