You back up applications by creating a Backup
custom resource (CR). See Creating a Backup CR.
The Backup
CR creates backup files for Kubernetes resources and internal images on S3 object storage, and creates snapshots for persistent volumes (PVs) if the cloud provider uses a native snapshot API or the Container Storage Interface (CSI) to create snapshots, as with OpenShift Data Foundation 4.
For more information about CSI volume snapshots, see CSI volume snapshots.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup
CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots.
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Restic. See Backing up applications with Restic.
The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software.
You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks.
You can schedule backups by creating a Schedule
CR instead of a Backup
CR. See Scheduling backups.
You back up Kubernetes resources, internal images, and persistent volumes (PVs) by creating a Backup
custom resource (CR).
You must install the OpenShift API for Data Protection (OADP) Operator.
The DataProtectionApplication
CR must be in a Ready
state.
Backup location prerequisites:
You must have S3 object storage configured for Velero.
You must have a backup location configured in the DataProtectionApplication
CR.
Snapshot location prerequisites:
Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots.
For CSI snapshots, you must create a VolumeSnapshotClass
CR to register the CSI driver.
You must have a volume location configured in the DataProtectionApplication
CR.
Retrieve the backupStorageLocations
CRs by entering the following command:
$ oc get backupStorageLocations -n openshift-adp
NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT
openshift-adp velero-sample-1 Available 11s 31m
Create a Backup
CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup>
labels:
velero.io/storage-location: default
namespace: openshift-adp
spec:
hooks: {}
includedNamespaces:
- <namespace> (1)
includedResources: [] (2)
excludedResources: [] (3)
storageLocation: <velero-sample-1> (4)
ttl: 720h0m0s
  labelSelector: (5)
    matchLabels:
      app: <label_1>
      app: <label_2>
      app: <label_3>
  orLabelSelectors: (6)
  - matchLabels:
      app: <label_1>
      app: <label_2>
      app: <label_3>
1 | Specify an array of namespaces to back up. |
2 | Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. |
3 | Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. |
4 | Specify the name of the backupStorageLocations CR. |
5 | Specify a map of {key,value} pairs to back up resources that have all of the specified labels. |
6 | Specify a map of {key,value} pairs to back up resources that have one or more of the specified labels. |
Verify that the status of the Backup
CR is Completed
:
$ oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'
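Example output, assuming the backup finished successfully:
Completed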
You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass
custom resource (CR) of the cloud storage before you create the Backup
CR.
The cloud provider must support CSI snapshots.
You must enable CSI in the DataProtectionApplication
CR.
Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true"
key-value pair to the VolumeSnapshotClass
CR:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: <volume_snapshot_class_name>
labels:
velero.io/csi-volumesnapshot-class: "true"
driver: <csi_driver>
deletionPolicy: Retain
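If the VolumeSnapshotClass
CR already exists, you can add the label with the oc CLI instead of editing the manifest. This is a minimal sketch that assumes the CR is named <volume_snapshot_class_name>:
$ oc label volumesnapshotclass <volume_snapshot_class_name> velero.io/csi-volumesnapshot-class="true" --overwrite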
You can now create a Backup
CR.
You back up Kubernetes resources, internal images, and persistent volumes with Restic by editing the Backup
custom resource (CR).
You do not need to specify a snapshot location in the DataProtectionApplication
CR.
Restic does not support backing up hostPath volumes.
You must install the OpenShift API for Data Protection (OADP) Operator.
You must not disable the default Restic installation by setting spec.configuration.restic.enable
to false
in the DataProtectionApplication
CR.
The DataProtectionApplication
CR must be in a Ready
state.
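The Restic prerequisite corresponds to the following fields in the DataProtectionApplication
CR. This is a minimal sketch that shows only the relevant fields and assumes a DPA named velero-sample:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  configuration:
    restic:
      enable: true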
Edit the Backup
CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup>
labels:
velero.io/storage-location: default
namespace: openshift-adp
spec:
defaultVolumesToRestic: true (1)
...
1 | Add defaultVolumesToRestic: true to the spec block. |
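Before you create the backup, you can check that the Restic daemon set is running in the OADP namespace. The daemon set name restic is an assumption based on the default OADP installation:
$ oc get daemonset restic -n openshift-adp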
The OADP Data Mover enables customers to back up Container Storage Interface (CSI) volume snapshots to a remote object store.
When Data Mover is enabled, you can restore stateful applications, using CSI volume snapshots pulled from the object store if a failure, accidental deletion, or corruption of the cluster occurs.
The Data Mover solution uses the Restic option of VolSync.
Data Mover supports backup and restore of CSI volume snapshots only.
In OADP 1.2 Data Mover, VolumeSnapshotBackups
(VSBs) and VolumeSnapshotRestores
(VSRs) are queued by the VolumeSnapshotMover (VSM). You can improve the performance of the VSM by specifying the number of VSBs and VSRs that can be InProgress
concurrently. After all asynchronous plugin operations are complete, the backup is marked as complete.
The OADP 1.1 Data Mover is a Technology Preview feature. The OADP 1.2 Data Mover has significantly improved features and performance, but is still a Technology Preview feature.
The OADP Data Mover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat recommends that customers who use OADP 1.2 Data Mover to back up and restore ODF CephFS volumes upgrade or install OpenShift Container Platform version 4.12 or later for improved performance. OADP Data Mover can leverage CephFS shallow volumes in OpenShift Container Platform version 4.12 and later, which, based on our testing, can improve backup times.
You have verified that the StorageClass
and VolumeSnapshotClass
custom resources (CRs) support CSI.
You have verified that only one volumeSnapshotClass
CR has the annotation snapshot.storage.kubernetes.io/is-default-class: true
.
In OpenShift Container Platform version 4.12 or later, verify that this is the only default VolumeSnapshotClass.
You have verified that deletionPolicy
of the VolumeSnapshotClass
CR is set to Retain
.
You have verified that only one storageClass
CR has the annotation storageclass.kubernetes.io/is-default-class: true
.
You have included the label velero.io/csi-volumesnapshot-class: 'true'
in your VolumeSnapshotClass
CR.
You have added the annotation volsync.backube/privileged-movers='true'
to the OADP namespace, for example by running the following command (a verification sketch follows this list):
$ oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers='true'
In OADP 1.1 the above setting is mandatory.
You have installed the VolSync Operator by using the Operator Lifecycle Manager (OLM).
The VolSync Operator is required for using OADP Data Mover.
You have installed the OADP operator by using OLM.
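You can verify the privileged-movers annotation mentioned in the prerequisites with the following sketch, which assumes the OADP namespace is openshift-adp:
$ oc get namespace openshift-adp -o yaml | grep privileged-movers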
Configure a Restic secret by creating a .yaml
file:
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
type: Opaque
stringData:
# The repository encryption key
RESTIC_PASSWORD: my-secure-restic-password
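You can create the secret with the oc CLI. This sketch assumes that the manifest is saved as restic-secret.yaml and that the secret belongs in the OADP namespace:
$ oc create -f restic-secret.yaml -n openshift-adp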
Create a DPA CR similar to the following example. The default plugins include CSI.
Add the Restic secret name from the previous step to your DPA CR as spec.features.dataMover.credentialName
. If you do not add the secret name, the CR defaults to the secret name dm-credential
.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: velero-sample
namespace: openshift-adp
spec:
features:
dataMover:
enable: true
credentialName: <secret-name>
maxConcurrentBackupVolumes: "3" (1)
maxConcurrentRestoreVolumes: "3" (2)
pruneInterval: "14" (3)
volumeOptionsForStorageClasses: (4)
gp2-csi-copy-1:
destinationVolumeOptions:
storageClassName: csi-copy-2
sourceVolumeOptions:
storageClassName: csi-copy-1
backupLocations:
- velero:
config:
profile: default
region: us-east-1
credential:
key: cloud
name: cloud-credentials
default: true
objectStorage:
bucket: <bucket_name>
prefix: <bucket-prefix>
provider: aws
configuration:
restic:
enable: false
velero:
defaultPlugins:
- openshift
- aws
- csi
- vsm (5)
1 | OADP 1.2 only. Optional: Specify the upper limit of the number of snapshots allowed to be queued for backup. The default value is 10. |
2 | OADP 1.2 only. Optional: Specify the upper limit of the number of snapshots allowed to be queued for restore. The default value is 10. |
3 | OADP 1.2 only. Optional: Specify the number of days between running Restic pruning on the repository. The prune operation repacks the data to free space, but it can also generate significant I/O traffic as a part of the process. Setting this option allows a trade-off between storage consumption, from no longer referenced data, and access costs. |
4 | OADP 1.2 only. Optional: Specify VolSync volume options for backup and restore. |
5 | OADP 1.2 only. |
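After you apply the DPA CR, you can check that it reconciles and that the backup storage location becomes available. The dpa short name and the position of the Reconciled condition are assumptions based on OADP Operator defaults:
$ oc get dpa velero-sample -n openshift-adp -o jsonpath='{.status.conditions[0].type}'
$ oc get backupStorageLocations -n openshift-adp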
The OADP Operator installs two custom resource definitions (CRDs), VolumeSnapshotBackup
and VolumeSnapshotRestore
.
VolumeSnapshotBackup
CRD
apiVersion: datamover.oadp.openshift.io/v1alpha1
kind: VolumeSnapshotBackup
metadata:
name: <vsb_name>
namespace: <namespace_name> (1)
spec:
volumeSnapshotContent:
name: <snapcontent_name>
protectedNamespace: <adp_namespace>
resticSecretRef:
name: <restic_secret_name>
1 | Specify the namespace where the volume snapshot exists. |
VolumeSnapshotRestore
CRD
apiVersion: datamover.oadp.openshift.io/v1alpha1
kind: VolumeSnapshotRestore
metadata:
name: <vsr_name>
namespace: <namespace_name> (1)
spec:
protectedNamespace: <protected_ns> (2)
resticSecretRef:
name: <restic_secret_name>
volumeSnapshotMoverBackupRef:
sourcePVCData:
name: <source_pvc_name>
size: <source_pvc_size>
resticrepository: <your_restic_repo>
volumeSnapshotClassName: <vsclass_name>
1 | Specify the namespace where the volume snapshot exists. |
2 | Specify the namespace where the Operator is installed. The default is openshift-adp . |
You can back up a volume snapshot by performing the following steps:
Create a Backup CR:
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup_name>
namespace: <protected_ns> (1)
spec:
includedNamespaces:
- <app_ns>
storageLocation: velero-sample-1
1 | Specify the namespace where the Operator is installed. The default namespace is openshift-adp . |
Wait up to 10 minutes and check whether the VolumeSnapshotBackup
CR status is Completed
by entering the following commands:
$ oc get vsb -n <app_ns>
$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
A snapshot is created in the object store that was configured in the DPA.
You can restore a volume snapshot by performing the following steps:
Delete the application namespace and the volumeSnapshotContent
that was created by the Velero CSI plugin.
Create a Restore
CR and set restorePVs
to true
.
Restore
CR
apiVersion: velero.io/v1
kind: Restore
metadata:
name: <restore_name>
namespace: <protected_ns>
spec:
backupName: <previous_backup_name>
restorePVs: true
Wait up to 10 minutes and check whether the VolumeSnapshotRestore
CR status is Completed
by entering the following command:
$ oc get vsr -n <app_ns>
$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
Check whether your application data and resources have been restored.
You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both.
OADP 1.2 Data Mover leverages Ceph features that support large-scale environments. One of these is the shallow copy method, which is available for OpenShift Container Platform 4.12 and later. This feature supports backing up and restoring StorageClass
and AccessMode
resources other than those found on the source persistent volume claim (PVC).
The CephFS shallow copy feature is a backup feature only. It is not part of restore operations.
The following prerequisites apply to all backup and restore operations of data using OpenShift API for Data Protection (OADP) 1.2 Data Mover in a cluster that uses Ceph storage:
You have installed OpenShift Container Platform 4.12 or later.
You have installed the OADP Operator.
You have created a secret cloud-credentials
in the namespace openshift-adp.
You have installed Red Hat OpenShift Data Foundation.
You have installed the latest VolSync Operator using the Operator Lifecycle Manager.
When you install Red Hat OpenShift Data Foundation, it automatically creates default CephFS and CephRBD StorageClass
and VolumeSnapshotClass
custom resources (CRs). You must define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
After you define the CRs, you must make several other changes to your environment before you can perform your backup and restore operations.
When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephFS StorageClass
custom resource (CR) and a default CephFS VolumeSnapshotClass
CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
Define the VolumeSnapshotClass
CR as in the following example:
VolumeSnapshotClass
CR
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain (1)
driver: openshift-storage.cephfs.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true" (2)
labels:
    velero.io/csi-volumesnapshot-class: "true" (3)
name: ocs-storagecluster-cephfsplugin-snapclass
parameters:
clusterID: openshift-storage
csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
1 | Must be set to Retain . |
2 | Must be set to true . |
3 | Must be set to true . |
Define the StorageClass
CR as in the following example:
StorageClass
CR
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ocs-storagecluster-cephfs
annotations:
description: Provides RWO and RWX Filesystem volumes
    storageclass.kubernetes.io/is-default-class: "true" (1)
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
clusterID: openshift-storage
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
1 | Must be set to true . |
When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephRBD StorageClass
custom resource (CR) and a default CephRBD VolumeSnapshotClass
CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
Define the VolumeSnapshotClass
CR as in the following example:
VolumeSnapshotClass
CR
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain (1)
driver: openshift-storage.rbd.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
labels:
    velero.io/csi-volumesnapshot-class: "true" (2)
name: ocs-storagecluster-rbdplugin-snapclass
parameters:
clusterID: openshift-storage
csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
1 | Must be set to Retain . |
2 | Must be set to true . |
Define the StorageClass
CR as in the following example:
StorageClass
CR
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ocs-storagecluster-ceph-rbd
annotations:
description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
csi.storage.k8s.io/fstype: ext4
csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
imageFormat: '2'
clusterID: openshift-storage
imageFeatures: layering
csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
pool: ocs-storagecluster-cephblockpool
csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
After you redefine the default StorageClass
and CephRBD VolumeSnapshotClass
custom resources (CRs), you must create the following CRs:
A CephFS StorageClass
CR defined to use the shallow copy feature
A Restic Secret
CR
Create a CephFS StorageClass
CR and set the backingSnapshot
parameter to true
as in the following example:
StorageClass
CR with backingSnapshot
set to true
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ocs-storagecluster-cephfs-shallow
annotations:
description: Provides RWO and RWX Filesystem volumes
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
clusterID: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  backingSnapshot: "true" (1)
csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
1 | Must be set to true . |
Configure a Restic Secret
CR as in the following example:
Secret
CR
apiVersion: v1
kind: Secret
metadata:
name: <secret_name>
namespace: <namespace>
type: Opaque
stringData:
RESTIC_PASSWORD: <restic_password>
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage by enabling the shallow copy feature of CephFS.
A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner.
The StorageClass
and VolumeSnapshotClass
custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.
There is a secret cloud-credentials
in the openshift-adp
namespace.
You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage.
Verify that the deletionPolicy
field of the VolumeSnapshotClass
CR is set to Retain
by running the following command:
$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}'
Verify that the labels of the VolumeSnapshotClass
CR are set to true
by running the following command:
$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}'
Verify that the storageclass.kubernetes.io/is-default-class
annotation of the StorageClass
CR is set to true
by running the following command:
$ oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}'
Create a Data Protection Application (DPA) CR similar to the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: velero-sample
namespace: openshift-adp
spec:
backupLocations:
- velero:
config:
profile: default
region: us-east-1
credential:
key: cloud
name: cloud-credentials
default: true
objectStorage:
bucket: <my_bucket>
prefix: velero
provider: aws
configuration:
restic:
enable: false (1)
velero:
defaultPlugins:
- openshift
- aws
- csi
- vsm
features:
dataMover:
credentialName: <restic_secret_name> (2)
enable: true (3)
volumeOptionsForStorageClasses:
ocs-storagecluster-cephfs:
sourceVolumeOptions:
accessMode: ReadOnlyMany
cacheAccessMode: ReadWriteMany
cacheStorageClassName: ocs-storagecluster-cephfs
storageClassName: ocs-storagecluster-cephfs-shallow
1 | There is no default value for the enable field. Valid values are true or false . |
2 | Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic Secret , the CR uses the default value dm-credential for this parameter. |
3 | There is no default value for the enable field. Valid values are true or false . |
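Before you proceed to the backup, you can apply the DPA and confirm that the backup storage location reports an Available phase, as shown earlier in this section. The file name is an assumption:
$ oc create -f dpa-cephfs.yaml
$ oc get backupStorageLocations -n openshift-adp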
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data using CephFS storage by enabling the shallow copy feature of CephFS storage.
Create a Backup
CR as in the following example:
Backup
CR
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup_name>
namespace: <protected_ns>
spec:
includedNamespaces:
- <app_ns>
storageLocation: velero-sample-1
Monitor the progress of the VolumeSnapshotBackup
CRs by completing the following steps:
To check the progress of all the VolumeSnapshotBackup
CRs, run the following command:
$ oc get vsb -n <app_ns>
To check the progress of a specific VolumeSnapshotBackup
CR, run the following command:
$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
Wait several minutes until the VolumeSnapshotBackup
CR has the status Completed
.
Verify that there is at least one snapshot in the object store that is given in the Restic Secret
. You can check for this snapshot in your targeted BackupStorageLocation
storage provider that has a prefix of /<OADP_namespace>
.
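If your object store is AWS S3, as in the DPA examples in this section, one hedged way to list the snapshots is with the AWS CLI. The <bucket_name> placeholder and the prefix must match your BackupStorageLocation:
$ aws s3 ls s3://<bucket_name>/<OADP_namespace>/ --recursive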
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data using CephFS storage if the shallow copy feature of CephFS storage was enabled for the backup procedure. The shallow copy feature is not used in the restore procedure.
Delete the VolumeSnapshotBackup CRs from the application namespace by running the following command:
$ oc delete vsb -n <app_namespace> --all
Delete any VolumeSnapshotContent
CRs that were created during backup by running the following command:
$ oc delete volumesnapshotcontent --all
Create a Restore
CR as in the following example:
Restore
CR
apiVersion: velero.io/v1
kind: Restore
metadata:
name: <restore_name>
namespace: <protected_ns>
spec:
backupName: <previous_backup_name>
Monitor the progress of the VolumeSnapshotRestore
CRs by doing the following:
To check the progress of all the VolumeSnapshotRestore
CRs, run the following command:
$ oc get vsr -n <app_ns>
To check the progress of a specific VolumeSnapshotRestore
CR, run the following command:
$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
Verify that your application data has been restored by running the following command:
$ oc get route <route_name> -n <app_ns> -ojsonpath="{.spec.host}"
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data in an environment that has split volumes, that is, an environment that uses both CephFS and CephRBD.
A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner.
The StorageClass
and VolumeSnapshotClass
custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.
There is a secret cloud-credentials
in the openshift-adp
namespace.
You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using split volumes.
Create a Data Protection Application (DPA) CR as in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: velero-sample
namespace: openshift-adp
spec:
backupLocations:
- velero:
config:
profile: default
region: us-east-1
credential:
key: cloud
name: cloud-credentials
default: true
objectStorage:
bucket: <my-bucket>
prefix: velero
provider: aws
configuration:
restic:
enable: false
velero:
defaultPlugins:
- openshift
- aws
- csi
- vsm
features:
dataMover:
credentialName: <restic_secret_name> (1)
enable: true
volumeOptionsForStorageClasses: (2)
ocs-storagecluster-cephfs:
sourceVolumeOptions:
accessMode: ReadOnlyMany
cacheAccessMode: ReadWriteMany
cacheStorageClassName: ocs-storagecluster-cephfs
storageClassName: ocs-storagecluster-cephfs-shallow
ocs-storagecluster-ceph-rbd:
sourceVolumeOptions:
storageClassName: ocs-storagecluster-ceph-rbd
cacheStorageClassName: ocs-storagecluster-ceph-rbd
destinationVolumeOptions:
storageClassName: ocs-storagecluster-ceph-rbd
cacheStorageClassName: ocs-storagecluster-ceph-rbd
1 | Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not, then the CR will use the default value dm-credential for this parameter. |
2 | A different set of VolumeOptionsForStorageClass labels can be defined for each storageClass volume, which allows you to back up volumes with different providers. |
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data in an environment that has split volumes.
Create a Backup
CR as in the following example:
Backup
CR
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup_name>
namespace: <protected_ns>
spec:
includedNamespaces:
- <app_ns>
storageLocation: velero-sample-1
Monitor the progress of the VolumeSnapshotBackup
CRs by completing the following steps:
To check the progress of all the VolumeSnapshotBackup
CRs, run the following command:
$ oc get vsb -n <app_ns>
To check the progress of a specific VolumeSnapshotBackup
CR, run the following command:
$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
Wait several minutes until the VolumeSnapshotBackup
CR has the status Completed
.
Verify that there is at least one snapshot in the object store that is given in the Restic Secret
. You can check for this snapshot in your targeted BackupStorageLocation
storage provider that has a prefix of /<OADP_namespace>
.
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data in an environment that has split volumes, if the shallow copy feature of CephFS storage was enabled for the backup procedure. The shallow copy feature is not used in the restore procedure.
Delete the VolumeSnapshotBackup CRs from the application namespace by running the following command:
$ oc delete vsb -n <app_namespace> --all
Delete any VolumeSnapshotContent
CRs that were created during backup by running the following command:
$ oc delete volumesnapshotcontent --all
Create a Restore
CR as in the following example:
Restore
CR
apiVersion: velero.io/v1
kind: Restore
metadata:
name: <restore_name>
namespace: <protected_ns>
spec:
backupName: <previous_backup_name>
Monitor the progress of the VolumeSnapshotRestore
CRs by doing the following:
To check the progress of all the VolumeSnapshotRestore
CRs, run the following command:
$ oc get vsr -n <app_ns>
To check the progress of a specific VolumeSnapshotRestore
CR, run the following command:
$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
Verify that your application data has been restored by running the following command:
$ oc get route <route_name> -n <app_ns> -ojsonpath="{.spec.host}"
For OADP 1.1 Data Mover, you must perform a data cleanup after you perform a backup.
The cleanup consists of deleting the following resources:
Snapshots in a bucket
Cluster resources
Volume snapshot backups (VSBs) after a backup procedure that is either run by a schedule or is run repetitively
OADP 1.1 Data Mover might leave one or more snapshots in a bucket after a backup. You can either delete all the snapshots or delete individual snapshots.
To delete all snapshots in your bucket, delete the /<protected_namespace>
folder that is specified in the Data Protection Application (DPA) .spec.backupLocation.objectStorage.bucket
resource.
To delete an individual snapshot:
Browse to the /<protected_namespace>
folder that is specified in the DPA .spec.backupLocation.objectStorage.bucket
resource.
Delete the appropriate folders that are prefixed with /<volumeSnapshotContent_name>-pvc
, where <volumeSnapshotContent_name>
is the name of the VolumeSnapshotContent
CR that Data Mover created for each PVC.
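If your bucket is on AWS S3, as in the earlier DPA examples, the following sketch shows the cleanup with the AWS CLI, first for all snapshots and then for an individual snapshot folder. Both paths are assumptions that must match your DPA .spec.backupLocation.objectStorage.bucket value:
$ aws s3 rm s3://<bucket_name>/<protected_namespace>/ --recursive
$ aws s3 rm s3://<bucket_name>/<protected_namespace>/<volumeSnapshotContent_name>-pvc --recursive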
OADP 1.1 Data Mover might leave cluster resources whether or not it successfully backs up your Container Storage Interface (CSI) volume snapshots to a remote object store.
You can delete any VolumeSnapshotBackup
or VolumeSnapshotRestore
CRs that remain in your application namespace after a successful backup and restore where you used Data Mover.
Delete cluster resources that remain in the application namespace, the namespace that contains the application PVCs to back up and restore, after a backup where you use Data Mover:
$ oc delete vsb -n <app_namespace> --all
Delete cluster resources that remain after a restore where you use Data Mover:
$ oc delete vsr -n <app_namespace> --all
If needed, delete any VolumeSnapshotContent
resources that remain after a backup and restore where you use Data Mover:
$ oc delete volumesnapshotcontent --all
If your backup and restore operation that uses Data Mover either fails or only partially succeeds, you must clean up any VolumeSnapshotBackup
(VSB) or VolumeSnapshotRestore
(VSR) custom resources (CRs) that remain in the application namespace, and clean up any extra resources created by these controllers.
Clean up cluster resources that remain after a backup operation where you used Data Mover by entering the following commands:
Delete VSB CRs in the application namespace, the namespace that contains the application PVCs to back up and restore:
$ oc delete vsb -n <app_namespace> --all
Delete VolumeSnapshot
CRs:
$ oc delete volumesnapshot -A --all
Delete VolumeSnapshotContent
CRs:
$ oc delete volumesnapshotcontent --all
Delete any PVCs in the protected namespace, the namespace in which the Operator is installed:
$ oc delete pvc -n <protected_namespace> --all
Delete any ReplicationSource
resources in the protected namespace:
$ oc delete replicationsource -n <protected_namespace> --all
Clean up cluster resources that remain after a restore operation using Data Mover by entering the following commands:
Delete VSR CRs:
$ oc delete vsr -n <app-ns> --all
Delete VolumeSnapshot
CRs:
$ oc delete volumesnapshot -A --all
Delete VolumeSnapshotContent
CRs:
$ oc delete volumesnapshotcontent --all
Delete any ReplicationDestination
resources in the protected namespace:
$ oc delete replicationdestination -n <protected_namespace> --all
You create backup hooks to run commands in a container in a pod by editing the Backup
custom resource (CR).
Pre hooks run before the pod is backed up. Post hooks run after the backup.
Add a hook to the spec.hooks
block of the Backup
CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
name: <backup>
namespace: openshift-adp
spec:
hooks:
resources:
- name: <hook_name>
includedNamespaces:
- <namespace> (1)
excludedNamespaces: (2)
- <namespace>
        includedResources:
        - pods (3)
excludedResources: [] (4)
labelSelector: (5)
matchLabels:
app: velero
component: server
pre: (6)
- exec:
container: <container> (7)
command:
- /bin/uname (8)
- -a
onError: Fail (9)
timeout: 30s (10)
post: (11)
...
1 | Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. |
2 | Optional: You can specify namespaces to which the hook does not apply. |
3 | Currently, pods are the only supported resource that hooks can apply to. |
4 | Optional: You can specify resources to which the hook does not apply. |
5 | Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all objects. |
6 | Array of hooks to run before the backup. |
7 | Optional: If the container is not specified, the command runs in the first container in the pod. |
8 | The command array that the hook runs. |
9 | Allowed values for error handling are Fail and Continue . The default is Fail . |
10 | Optional: How long to wait for the commands to run. The default is 30s . |
11 | This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. |
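As an illustration of how the pre and post blocks pair up, the following sketch freezes a file system before the backup and unfreezes it afterward. The nginx container name and paths are hypothetical, taken from the common Velero fsfreeze example rather than from this procedure:
spec:
  hooks:
    resources:
      - name: fsfreeze-hook
        includedNamespaces:
        - <namespace>
        pre:
        - exec:
            container: nginx
            command:
            - /sbin/fsfreeze
            - --freeze
            - /var/log/nginx
            onError: Fail
            timeout: 30s
        post:
        - exec:
            container: nginx
            command:
            - /sbin/fsfreeze
            - --unfreeze
            - /var/log/nginx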
You schedule backups by creating a Schedule
custom resource (CR) instead of a Backup
CR.
Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes.
You must install the OpenShift API for Data Protection (OADP) Operator.
The DataProtectionApplication
CR must be in a Ready
state.
Retrieve the backupStorageLocations
CRs:
$ oc get backupStorageLocations -n openshift-adp
NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT
openshift-adp velero-sample-1 Available 11s 31m
Create a Schedule
CR, as in the following example:
$ cat << EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Schedule
metadata:
name: <schedule>
namespace: openshift-adp
spec:
schedule: 0 7 * * * (1)
template:
hooks: {}
includedNamespaces:
- <namespace> (2)
storageLocation: <velero-sample-1> (3)
defaultVolumesToRestic: true (4)
ttl: 720h0m0s
EOF
1 | cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. |
2 | Array of namespaces to back up. |
3 | Name of the backupStorageLocations CR. |
4 | Optional: Add the defaultVolumesToRestic: true key-value pair if you are backing up volumes with Restic. |
Verify that the status of the Schedule
CR is Completed
after the scheduled backup runs:
$ oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'
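To list the backups that the schedule has created, you can filter on the label that Velero applies to scheduled backups. The velero.io/schedule-name label is an assumption based on upstream Velero behavior:
$ oc get backup -n openshift-adp -l velero.io/schedule-name=<schedule>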
You can remove backup files by deleting the Backup
custom resource (CR).
You created a Backup
CR.
You know the name of the Backup
CR and the namespace that contains it.
You downloaded the Velero CLI tool.
You can access the Velero binary in your cluster.
Choose one of the following actions to delete the Backup
CR:
To delete the Backup
CR and keep the associated object storage data, issue the following command:
$ oc delete backup <backup_CR_name> -n <velero_namespace>
To delete the Backup
CR and delete the associated object storage data, issue the following command:
$ velero backup delete <backup_CR_name> -n <velero_namespace>
Where:
<backup_CR_name>
Specifies the name of the Backup
custom resource.
<velero_namespace>
Specifies the namespace that contains the Backup
custom resource.
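To confirm that the backup no longer exists, you can list the remaining backups with the Velero CLI. This sketch assumes the -n flag points at the namespace where Velero runs:
$ velero backup get -n <velero_namespace>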