
Overview

OpenShift Container Platform clusters can be provisioned with persistent storage by using local volumes. A local persistent volume allows you to access local storage devices, such as a disk, partition, or directory, by using the standard PVC interface.

Local volumes can be used without manually scheduling pods to nodes, because the system is aware of the volume’s node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Local volumes are an alpha feature and may change in a future release of OpenShift Container Platform. See the Feature Status section for details on known issues and workarounds.

Local volumes can only be used as statically created persistent volumes.

Provisioning

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Ensure that OpenShift Container Platform is configured for local volumes before using the PersistentVolume API.
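
The examples that follow reference a storage class named local-storage. If that class does not already exist in the cluster, a minimal definition for statically provisioned local volumes might look like the following sketch; the kubernetes.io/no-provisioner provisioner indicates that no dynamic provisioning is associated with the class.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner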

Creating a Local Persistent Volume

Define the persistent volume in an object definition.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
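
The path field points to the local disk, partition, or directory on the node, and the nodeAffinity section pins the volume to the node that hosts that storage. Assuming the definition is saved in a file named example-local-pv.yaml (the file name is illustrative), create the volume with the CLI:

$ oc create -f example-local-pv.yaml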

Creating a Local Persistent Volume Claim

Define the persistent volume claim in an object definition.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi (1)
  storageClassName: local-storage (2)
1 The required size of the storage volume.
2 The name of the storage class, which is used for local PVs.
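
Assuming the claim is saved in a file named example-local-claim.yaml (again, the file name is illustrative), create it with oc create -f example-local-claim.yaml. Once the claim is bound, a pod can consume it like any other PVC. The following is a minimal sketch; the pod name, container image, command, and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-local-pod
spec:
  containers:
  - name: app
    image: docker.io/library/busybox # placeholder image
    command: ["sleep", "3600"]       # keep the pod running for demonstration
    volumeMounts:
    - name: local-volume
      mountPath: /data               # the local volume is available at this path
  volumes:
  - name: local-volume
    persistentVolumeClaim:
      claimName: example-local-claim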

Feature Status

What Works:

  • Creating a PV by specifying a directory with node affinity.

  • A pod using a PVC that is bound to this PV is always scheduled to that node.

  • An external static provisioner daemonset that discovers local directories and creates, cleans up, and deletes PVs.

What Does Not Work:

  • Multiple local PVCs in a single pod.

  • PVC binding does not consider pod scheduling requirements and may make sub-optimal or incorrect decisions.

    • Workarounds:

      • Run the pods that require local volumes first.

      • Give the pods high priority.

      • Run a workaround controller that unbinds PVCs for pods that are stuck pending.

  • If mounts are added after the external provisioner is started, the external provisioner cannot detect the correct capacity of the mounts.

    • Workarounds:

      • Stop the daemonset, add the new mount points, and then start the daemonset.

  • An fsGroup conflict occurs if multiple pods that use the same PVC specify different fsGroup values; see the snippet after this list.
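
For reference, fsGroup is set in the pod-level securityContext, and every pod that shares a local PVC should use the same value. A minimal, hypothetical snippet:

apiVersion: v1
kind: Pod
metadata:
  name: example-fsgroup-pod
spec:
  securityContext:
    fsGroup: 1000 # pods sharing the PVC should all specify this same group ID
  containers:
  - name: app
    image: docker.io/library/busybox # placeholder image
    volumeMounts:
    - name: local-volume
      mountPath: /data
  volumes:
  - name: local-volume
    persistentVolumeClaim:
      claimName: example-local-claim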