
Volume snapshot is deprecated in OpenShift Container Platform 4.2.

This document describes how to use VolumeSnapshots to protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested.

Volume snapshot is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About snapshots

A volume snapshot is a snapshot taken from a storage volume in a cluster. The external snapshot controller and provisioner enable use of the feature in the OpenShift Container Platform cluster and handle volume snapshots through the OpenShift Container Platform API.

With volume snapshots, a cluster administrator can:

  • Create a snapshot of a PersistentVolume bound to a PersistentVolumeClaim.

  • List existing VolumeSnapshots.

  • Delete an existing VolumeSnapshot.

  • Create a new PersistentVolume from an existing VolumeSnapshot.

Supported PersistentVolume types:

  • AWS Elastic Block Store (EBS)

  • Google Compute Engine (GCE) Persistent Disk (PD)

External controller and provisioner

Two external components, which run in the cluster, provide volume snapshotting:

External controller

Creates, deletes, and reports events on volume snapshots.

External provisioner

Creates new PersistentVolumes from VolumeSnapshots.

The external controller and provisioner services are distributed as container images and can be run as ordinary Pods in the OpenShift Container Platform cluster.

Running the external controller and provisioner

The cluster administrator must configure access to run the external controller and provisioner.

Procedure

To allow the containers to manage the API objects:

  1. Create a ServiceAccount and ClusterRole:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: snapshot-controller-runner
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: snapshot-controller-role
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["list", "watch", "create", "update", "patch"]
      - apiGroups: ["apiextensions.k8s.io"]
        resources: ["customresourcedefinitions"]
        verbs: ["create", "list", "watch", "delete"]
      - apiGroups: ["volumesnapshot.external-storage.k8s.io"]
        resources: ["volumesnapshots"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      - apiGroups: ["volumesnapshot.external-storage.k8s.io"]
        resources: ["volumesnapshotdatas"]
        verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  2. As the cluster administrator, provide the hostNetwork security context constraint (SCC):

    # oc adm policy add-scc-to-user hostnetwork -z snapshot-controller-runner

    This SCC controls what the Pods running under the snapshot-controller-runner service account are allowed to access.

  3. Bind the rules via ClusterRoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: snapshot-controller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: snapshot-controller-role
    subjects:
    - kind: ServiceAccount
      name: snapshot-controller-runner
      namespace: default (1)
    1 Specify the project name where the snapshot-controller resides.
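
For example, if the ServiceAccount and ClusterRole from step 1 and the ClusterRoleBinding from step 3 are saved to files named snapshot-rbac.yaml and snapshot-binding.yaml (hypothetical names), you can create and verify the objects as follows. The project given with -n must match the namespace in the ClusterRoleBinding:

$ oc create -f snapshot-rbac.yaml -n default
$ oc create -f snapshot-binding.yaml
$ oc get clusterrole snapshot-controller-role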

AWS and GCE authentication

To authenticate the external controller and provisioner, your cloud provider might require the administrator to provide a secret.

AWS authentication

If the external controller and provisioner are deployed in Amazon Web Services (AWS), they must be able to authenticate with AWS using an access key.

To provide the credentials to the Pod, the cluster administrator creates a new secret:

apiVersion: v1
kind: Secret
metadata:
  name: awskeys
type: Opaque
data:
  access-key-id: <base64 encoded AWS_ACCESS_KEY_ID>
  secret-access-key: <base64 encoded AWS_SECRET_ACCESS_KEY>

When generating the base64 values required for the awskeys secret, remove any trailing newline character as follows:

$ echo -n "<aws_access_key_id>" | base64
$ echo -n "<aws_secret_access_key>" | base64
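
Alternatively, you can let oc perform the encoding by creating the secret directly from literal values, which avoids any trailing newline issues:

$ oc create secret generic awskeys \
    --from-literal=access-key-id=<aws_access_key_id> \
    --from-literal=secret-access-key=<aws_secret_access_key>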

The following example displays the AWS deployment of the external controller and provisioner containers. Both Pod containers use the secret to access the AWS API.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: snapshot-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: snapshot-controller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snapshot-controller
    spec:
      serviceAccountName: snapshot-controller-runner
      hostNetwork: true
      containers:
        - name: snapshot-controller
          image: "registry.redhat.io/openshift3/snapshot-controller:latest"
          imagePullPolicy: "IfNotPresent"
          args: ["-cloudprovider", "aws"]
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: awskeys
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: awskeys
                  key: secret-access-key
        - name: snapshot-provisioner
          image: "registry.redhat.io/openshift3/snapshot-provisioner:latest"
          imagePullPolicy: "IfNotPresent"
          args: ["-cloudprovider", "aws"]
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: awskeys
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: awskeys
                  key: secret-access-key
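
Assuming the Deployment above is saved to a file such as snapshot-deployment.yaml (a hypothetical name), create it in the project where the snapshot-controller-runner service account exists and verify that both containers start:

$ oc create -f snapshot-deployment.yaml -n default
$ oc get pods -l app=snapshot-controller
$ oc logs deployment/snapshot-controller -c snapshot-provisioner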

GCE authentication

For Google Compute Engine (GCE), there is no need to use secrets to access the GCE API.

The administrator can proceed with the deployment as shown in the following example:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: snapshot-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: snapshot-controller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snapshot-controller
    spec:
      serviceAccountName: snapshot-controller-runner
      containers:
        - name: snapshot-controller
          image: "registry.redhat.io/openshift3/snapshot-controller:latest"
          imagePullPolicy: "IfNotPresent"
          args: ["-cloudprovider", "gce"]
        - name: snapshot-provisioner
          image: "registry.redhat.io/openshift3/snapshot-provisioner:latest"
          imagePullPolicy: "IfNotPresent"
          args: ["-cloudprovider", "gce"]

Managing snapshot users

Depending on the cluster configuration, it might be necessary to allow non-administrator users to manipulate the VolumeSnapshot objects on the API server. This can be done by creating a ClusterRole bound to a particular user or group.

For example, assume the user "alice" needs to work with snapshots in the cluster. The cluster administrator completes the following steps:

  1. Define a new ClusterRole:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: volumesnapshot-admin
    rules:
    - apiGroups:
      - "volumesnapshot.external-storage.k8s.io"
      resources:
      - volumesnapshots
      verbs:
      - create
      - delete
      - deletecollection
      - get
      - list
      - patch
      - update
      - watch
  2. Bind the cluster role to the user "alice" by creating a ClusterRoleBinding object:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: volumesnapshot-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: volumesnapshot-admin
    subjects:
    - kind: User
      name: alice
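
To confirm that the binding took effect, you can check access while impersonating the user:

$ oc auth can-i create volumesnapshots.volumesnapshot.external-storage.k8s.io --as=alice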

This is only an example of API access configuration. The VolumeSnapshot objects behave similarly to other OpenShift Container Platform API objects. See the API access control documentation for more information on managing API RBAC.

Creating and deleting snapshots

Similar to how a persistent volume claim (PVC) binds to a persistent volume (PV) to provision a volume, VolumeSnapshotData and VolumeSnapshot are used to create a volume snapshot.

Volume snapshots must use a supported PersistentVolume type.

Create snapshot

To take a snapshot of a PV, create a new VolumeSnapshot object that references the PVC bound to that PV, as shown in the following example:

apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot (1)
metadata:
  name: snapshot-demo
spec:
  persistentVolumeClaimName: ebs-pvc (2)
1 A VolumeSnapshotData object is automatically created based on the VolumeSnapshot.
2 persistentVolumeClaimName is the name of the PersistentVolumeClaim bound to a PersistentVolume. This particular PV is snapshotted.
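
Save the definition to a file, for example volumesnapshot.yaml (a hypothetical name), and create the object in the same project as the PVC:

$ oc create -f volumesnapshot.yaml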

Depending on the PV type, the create snapshot operation might go through several phases, which are reflected by the VolumeSnapshot status:

  1. Create the new VolumeSnapshot object.

  2. The controller starts the snapshot operation. The snapshotted PersistentVolume might need to be frozen and the applications paused.

  3. Create ("cut") the snapshot. The snapshotted PersistentVolume might return to normal operation, but the snapshot itself is not yet ready (status=True, type=Pending).

  4. Create the new VolumeSnapshotData object, representing the actual snapshot.

  5. The snapshot is complete and ready to use (status=True, type=Ready).

It is the user’s responsibility to ensure data consistency (stop the Pod or application, flush caches, freeze the file system, and so on).
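
For example, on a file system that supports freezing, you might quiesce writes around the snapshot with fsfreeze. This is only a sketch: the Pod name and mount path are hypothetical, and running fsfreeze inside a container requires elevated privileges.

$ oc exec my-app-pod -- fsfreeze --freeze /var/lib/data
$ oc create -f volumesnapshot.yaml
$ oc exec my-app-pod -- fsfreeze --unfreeze /var/lib/data

Wait until the snapshot has been cut before unfreezing; while the file system is frozen, writes to the volume block.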

If an error occurs, an Error condition is appended to the VolumeSnapshot status.

To display the VolumeSnapshot status:

$ oc get volumesnapshot -o yaml

The status is displayed, as shown in the following example:

apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  clusterName: ""
  creationTimestamp: 2017-09-19T13:58:28Z
  generation: 0
  labels:
    Timestamp: "1505829508178510973"
  name: snapshot-demo
  namespace: default (1)
  resourceVersion: "780"
  selfLink: /apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/snapshot-demo
  uid: 9cc5da57-9d42-11e7-9b25-90b11c132b3f
spec:
  persistentVolumeClaimName: ebs-pvc
  snapshotDataName: k8s-volume-snapshot-9cc8813e-9d42-11e7-8bed-90b11c132b3f
status:
  conditions:
  - lastTransitionTime: null
    message: Snapshot created successfully
    reason: ""
    status: "True"
    type: Ready
  creationTimestamp: null
1 The project (namespace) in which the VolumeSnapshot was created.

Restore snapshot

A PVC is used to restore a snapshot. But first, the administrator must create a StorageClass to restore a PersistentVolume from an existing VolumeSnapshot.

  1. Create a StorageClass:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: snapshot-promoter
    provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
    parameters: (1)
      encrypted: "true"
      type: gp2
    1 If you are using AWS EBS storage with gp2 encryption configured, you must set the encrypted and type parameters.
  2. Create a PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: snapshot-pv-provisioning-demo
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: snapshot-demo (1)
    spec:
      storageClassName: snapshot-promoter (2)
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi (3)
    1 The name of the VolumeSnapshot to be restored.
    2 Created by the administrator for restoring VolumeSnapshots.
    3 The requested storage size must be large enough to accommodate the original PV.

    A new PersistentVolume is created and bound to the PersistentVolumeClaim. The process might take several minutes depending on the PV type.
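
You can watch the claim until the new volume is provisioned and bound (the claim name matches the example above):

$ oc get pvc snapshot-pv-provisioning-demo -w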

Delete snapshot

To delete a VolumeSnapshot:

$ oc delete volumesnapshot/<snapshot-name>

The VolumeSnapshotData bound to the VolumeSnapshot is automatically deleted.