
Overview

In this example, a legacy data volume exists and a cluster-admin or storage-admin needs to make it available for consumption in a particular project. Using a StorageClass decreases the likelihood of other users and projects gaining access to this volume from a claim, because a claim would have to specify an exactly matching value for the StorageClass annotation. This example also disables dynamic provisioning. This example assumes:

Scenario 1: Link StorageClass to existing Persistent Volume with Legacy Data

As a cluster-admin or storage-admin, define and create the StorageClass for historical financial data.

Example 1. StorageClass finance-history Object Definitions
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: finance-history (1)
provisioner: no-provisioning (2)
parameters: (3)
1 Name of the StorageClass.
2 This is a required field, but because there is to be no dynamic provisioning, any value can be used here as long as it is not an actual provisioner plug-in type.
3 Parameters can be left blank, because they are only used by the dynamic provisioner.

Save the definitions to a YAML file (finance-history-storageclass.yaml) and create the StorageClass.

# oc create -f finance-history-storageclass.yaml
storageclass "finance-history" created


# oc get storageclass
NAME              TYPE
finance-history   no-provisioning

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

Now that the StorageClass exists, a cluster-admin or storage-admin can create the Persistent Volume (PV) for use with the StorageClass. Create a manually provisioned disk using GCE (not dynamically provisioned) and a Persistent Volume that connects to the new GCE disk (gce-pv.yaml).
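If the GCE disk does not already exist, it can be provisioned manually with the gcloud CLI. A minimal sketch, assuming a hypothetical disk name and zone:

# gcloud compute disks create finance-history-disk --size=35GB --zone=us-central1-a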

Example 2. Finance History PV Object
apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-finance-history
 annotations:
   volume.beta.kubernetes.io/storage-class: finance-history (1)
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-existing-PD-volume-name-that-contains-the-valuable-data (2)
   fsType: ext4
1 The StorageClass annotation, which must match exactly.
2 The name of the GCE disk that already exists and contains the legacy data.
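Before creating the PV, you can verify that the existing disk is present. A sketch, assuming a hypothetical zone:

# gcloud compute disks describe the-existing-PD-volume-name-that-contains-the-valuable-data --zone=us-central1-a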

As a cluster-admin or storage-admin, create and view the PV.

# oc create -f gce-pv.yaml
persistentvolume "pv-finance-history" created

# oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                        REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Available                                          2d

Notice that pv-finance-history is Available and ready for consumption.

As a user, create a Persistent Volume Claim (PVC) as a YAML file and specify the correct StorageClass annotation:

Example 3. Claim for finance-history Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-finance-history
 annotations:
   volume.beta.kubernetes.io/storage-class: finance-history (1)
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 20Gi
1 The StorageClass annotation, which must match exactly, or the claim will remain unbound until it is deleted or another StorageClass that matches the annotation is created.

Create the PVC, then view the PVC and the PV to confirm that they are bound.

# oc create -f pvc-finance-history.yaml
persistentvolumeclaim "pvc-finance-history" created

# oc get pvc
NAME                  STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
pvc-finance-history   Bound     pv-finance-history   35Gi       RWX           9m


# oc get pv  (cluster/storage-admin)
NAME                 CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                         REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Bound       default/pvc-finance-history             5m
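Once bound, the claim can be consumed by a pod in the project. A minimal sketch, assuming a hypothetical pod name and container image:

apiVersion: v1
kind: Pod
metadata:
  name: finance-history-pod
spec:
  containers:
    - name: app
      image: registry.example.com/finance/app:latest  # hypothetical image
      volumeMounts:
        - mountPath: /data/finance
          name: finance-volume
  volumes:
    - name: finance-volume
      persistentVolumeClaim:
        claimName: pvc-finance-history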

You can use StorageClasses in the same cluster both for legacy data (with no dynamic provisioning) and for dynamically provisioned storage.
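For contrast, a StorageClass that does enable dynamic provisioning on GCE could look like the following sketch (the name and parameters are illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd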