Overview

You can provision your OpenShift Enterprise cluster with storage dynamically when running in a cloud environment. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Many storage types are available for use as persistent volumes in OpenShift Enterprise. While all of them can be statically provisioned by an administrator, some types of storage can be created dynamically using an API. These types of storage can be provisioned in an OpenShift Enterprise cluster using the new and experimental dynamic storage feature.

Dynamic provisioning of persistent volumes is currently a Technology Preview feature, introduced in OpenShift Enterprise 3.1.1. The feature is experimental and expected to change as it matures and user feedback is received. New ways to provision the cluster are planned, and the means by which this feature is accessed will change. Backward compatibility is not guaranteed.

Enabling Provisioner Plug-ins

OpenShift Enterprise provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured cloud provider’s API to create new storage resources:

Storage Type                    Provisioner Plug-in Name   Required Cloud Configuration
OpenStack Cinder                kubernetes.io/cinder       Configuring for OpenStack
AWS Elastic Block Store (EBS)   kubernetes.io/aws-ebs      Configuring for AWS
GCE Persistent Disk (gcePD)     kubernetes.io/gce-pd       Configuring for GCE

Notes:

  • AWS EBS: For dynamic provisioning when using multiple clusters in different zones, each node must be tagged with Key=KubernetesCluster,Value=clusterid.

  • GCE PD: In multi-zone configurations, PVs must be created in the same region/zone as the master node. Do this by setting the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone PV labels to match the master node.
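To illustrate the GCE note, the following is a minimal PersistentVolume sketch with the failure-domain labels set, whether the PV is written manually or labeled after provisioning. The volume name, disk name, and region/zone values are placeholders; substitute values that match your master node:

kind: "PersistentVolume"
apiVersion: "v1"
metadata:
  name: "gce-pv-01" # hypothetical PV name
  labels:
    failure-domain.beta.kubernetes.io/region: "us-central1" # must match the master node's region
    failure-domain.beta.kubernetes.io/zone: "us-central1-a" # must match the master node's zone
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  gcePersistentDisk:
    pdName: "pd-disk-1" # hypothetical pre-existing GCE disk
    fsType: "ext4"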

For any chosen provisioner plug-ins, the relevant cloud configuration must also be set up, per Required Cloud Configuration in the above table.

When your OpenShift Enterprise cluster is configured for EBS, GCE, or Cinder, the associated provisioner plug-in is implied and automatically enabled. No additional OpenShift Enterprise configuration by the cluster administrator is required for dynamic provisioning.

For example, if your OpenShift Enterprise cluster is configured to run in AWS, the EBS provisioner plug-in is automatically available for creating dynamically provisioned storage requested by a user.
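There is no provisioner-specific switch; the plug-in follows from the cluster's cloud provider configuration. As a minimal sketch, an AWS-backed cluster's master-config.yaml might carry settings like the following (see Configuring for AWS for the authoritative steps; this fragment is illustrative only):

kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider: # tells the API server which cloud API to use
      - "aws"
  controllerArguments:
    cloud-provider: # the controllers create and attach the volumes
      - "aws"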

Future provisioner plug-ins will cover the many types of storage a single provider offers. AWS, for example, offers several types of EBS volumes, each with its own performance characteristics, as well as an NFS-like storage option. More provisioner plug-ins will be implemented for the storage types supported in OpenShift Enterprise.

Requesting Dynamically Provisioned Storage

Users can request dynamically provisioned storage by including a storage class annotation in their persistent volume claim:

Example 1. Persistent Volume Claim Requesting Dynamic Storage
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "claim1"
  annotations:
    volume.alpha.kubernetes.io/storage-class: "foo" (1)
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"
1 The value of the volume.alpha.kubernetes.io/storage-class annotation is not meaningful at this time. The presence of the annotation, with any arbitrary value, triggers provisioning using the single implied provisioner plug-in per cloud.
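After saving the claim to a file (for example, claim1.yaml), create it with oc create -f claim1.yaml and watch it bind with oc get pvc claim1. Once the provisioner has created the backing storage, the claim's status changes to Bound and its VOLUME column shows the name of the dynamically created persistent volume.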

Volume Owner Information

For dynamically provisioned storage, OpenShift Enterprise defines three key/value pairs, collectively known as the volume owner information, and arranges for the storage to associate this triplet with the provisioned volume. The keys are normally not visible to OpenShift Enterprise users, while the values are taken from user-visible PV and PVC objects.

Keys

kubernetes.io/created-for/pv/name
    Name of the PersistentVolume. There is no key for the PV namespace because that has value default and cannot be changed.

kubernetes.io/created-for/pvc/namespace
kubernetes.io/created-for/pvc/name
    Namespace and name, respectively, of the PersistentVolumeClaim.
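As a concrete illustration, for a claim named a-pv-01 in a project named app-server (the scenario used later in this section), the volume owner information would look like the following; the PV name is generated by the system, so the value shown is a placeholder:

kubernetes.io/created-for/pv/name: pvc-2f1b3e6a # generated PV name (placeholder value)
kubernetes.io/created-for/pvc/namespace: app-server
kubernetes.io/created-for/pvc/name: a-pv-01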

Other Terms for Volume Owner Information

Each storage type saves the volume owner information in its own way. When communicating with the storage administrator, use these specific terms to avoid confusion:

Term for Key/Value Pairs   Storage Type
tags                       AWS EBS
metadata                   OpenStack Cinder
JSON-in-description        GCE PD

Using Volume Owner Information

The main benefit of saving the volume owner information is to enable storage administrators to recognize volumes dynamically created by OpenShift Enterprise.

Example scenarios:

  • OpenShift Enterprise terminates unexpectedly and the dynamically provisioned AWS EBS volumes contain useful data that must be recovered. The OpenShift Enterprise users provide the storage administrators with a list of affected projects and their PVCs:

    Project Name    PVC Name
    app-server      a-pv-01
    app-server      a-pv-02
    notifications   n-pv-01

    The storage administrators search for the orphaned volumes, matching project names and PVC names to the kubernetes.io/created-for/pvc/namespace and kubernetes.io/created-for/pvc/name tags, respectively. They find them and arrange to make them available again for data-recovery efforts. An example search command follows this list.

  • The users do not explicitly delete the dynamically provisioned storage volumes when they are finished with a project. The storage administrators find the defunct volumes and delete them. Unlike the preceding scenario, they need only match the project names to kubernetes.io/created-for/pvc/namespace.
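In the first scenario, for example, a storage administrator on AWS could locate the orphaned EBS volumes with a tag filter. This is a sketch assuming the standard AWS CLI; for the second scenario, drop the pvc/name filter and match on the namespace tag alone:

aws ec2 describe-volumes \
    --filters "Name=tag:kubernetes.io/created-for/pvc/namespace,Values=app-server" \
              "Name=tag:kubernetes.io/created-for/pvc/name,Values=a-pv-01,a-pv-02"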

Volume Recycling

Volumes created dynamically by a provisioner have their persistentVolumeReclaimPolicy set to Delete. When a persistent volume claim is deleted, its backing persistent volume is considered released of its claim, and that resource can be reclaimed by the cluster. Dynamic provisioning utilizes the provider’s API to delete the volume from the provider and then removes the persistent volume from the cluster.
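For example, assuming the claim1 claim shown earlier was bound to a dynamically provisioned volume, running oc delete pvc claim1 releases that volume. Because its persistentVolumeReclaimPolicy is Delete, OpenShift Enterprise calls the provider's API to delete the backing storage (the EBS volume, GCE disk, or Cinder volume) and then removes the persistent volume object from the cluster.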