The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues.
When installing the OADP Operator in a namespace with more than 37 characters and then creating a new DPA, labeling the cloud-credentials secret failed. With this release, the issue has been fixed. OADP-4211
In previous versions of OADP, the image PullPolicy of the openshift-adp-controller-manager and Velero pods was set to Always. This was problematic in edge scenarios where limited network bandwidth to the registry could result in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent.
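If you want to confirm the policy on your cluster after updating, the following check is a suggestion and assumes the default openshift-adp namespace:
$ oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'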
The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira.
Cassandra application pods might enter the CrashLoopBackoff status after restoring OADP. To work around this problem, delete the StatefulSet pods that return an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller re-creates these pods and they run normally.
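For example, the failing pods can be deleted with a command similar to the following; the placeholders are illustrative and should be replaced with the names from your cluster:
$ oc delete pod <cassandra_pod_name> -n <application_namespace>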
The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues.
The DPA fails to reconcile if a valid custom secret is used for the Backup Storage Location (BSL) but the default secret is missing. The workaround is to create the required default cloud-credentials secret first. When the custom secret is re-created, it is used and checked for its existence.
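As a sketch, assuming an AWS-style credentials file named credentials-velero, the default secret can be created in the openshift-adp namespace as follows:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero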
oadp-velero-container: Golang net/http: Memory exhaustion in Request.ParseMultipartForm
A flaw was found in the net/http Golang standard library package, which impacts previous versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45290.
oadp-velero-container: Golang net/http/cookiejar: Incorrect forwarding of sensitive headers and cookies on HTTP redirect
A flaw was found in the net/http/cookiejar Golang standard library package, which impacts previous versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie. A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45289.
oadp-velero-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24783.
oadp-velero-plugin-container: Golang net/mail: Comments in display names are incorrectly handled
A flaw was found in the net/mail Golang standard library package, which impacts previous versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24784.
oadp-velero-container: Golang html/template: Errors returned from MarshalJSON methods may break template escaping
A flaw was found in the html/template Golang standard library package, which impacts previous versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24785.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira.
Cassandra application pods might enter the CrashLoopBackoff status after restoring OADP. To work around this problem, delete the StatefulSet pods that return an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller re-creates these pods and they run normally.
The OpenShift API for Data Protection (OADP) 1.3.1 release notes list new features and resolved issues.
The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads.
IBM Cloud® Object Storage, an AWS S3 compatible backup storage provider, was previously unsupported. With this update, IBM Cloud® Object Storage is now supported as an AWS S3 compatible backup storage provider.
Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP Operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects the validation of the DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error.
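For reference, a DPA backup storage location for AWS that passes this validation sets both the profile and the region; the bucket and region values below are placeholders:
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          profile: "default"
          region: <region>
        credential:
          key: cloud
          name: cloud-credentials
# ...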
Previously, the openshift-adp-controller-manager pod reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels, such as Argo CD, leading to improper functionality. With this update, the issue is fixed and custom labels are not removed from the openshift-adp namespace.
Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image now collects the CRDs shipped by OADP, and you can use the omg tool to extract data.
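For reference, the data is gathered with the standard must-gather command; the image tag below reflects the OADP 1.3 stream and might differ in your environment:
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3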
Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. With this update, garbage-collection-frequency has the correct value of one hour for the gc-controller reconciliation default frequency.
By setting the fips-compliant flag to true, the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled.
Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira.
Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power® and IBM Z® platforms:
Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms.
Only backing up applications with File System Backup (FSB), by using kopia or restic, is supported for backup and restore operations.
After restoring OADP, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that have an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller re-creates these pods and they run normally.
The OpenShift API for Data Protection (OADP) 1.3.0 release notes list new features, resolved issues and bugs, and known issues.
Velero built-in DataMover
Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository.
Velero’s File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path.
Velero allows users to select between the two paths.
For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia. This field defaults to kopia if the value is not specified. The selection cannot be changed after the installation.
Google Cloud Platform (GCP) authentication enables you to use short-lived Google credentials.
GCP with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys.
You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data.
ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers.
You can subscribe to the service directly from your AWS account.
After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools.
Applications on managed clusters were deleted and re-created upon restore activation. The OpenShift API for Data Protection (OADP) 1.2 backup and restore process is faster than in older versions. This OADP performance change caused the behavior when restoring ACM resources: some resources were restored before other resources, which caused the removal of the applications from managed clusters. OADP-2686
During interoperability testing, OpenShift Container Platform 4.14 had the pod Security mode set to enforce, which caused the pod to be denied. This was caused by the restore order: the pod was created before the security context constraints (SCC) resource, and because the pod violated the podSecurity standard, the pod was denied. When the restore priority field is set on the Velero server, the restore is successful. OADP-2688
There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308
In OADP, Velero plugins are started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see received EOF, stopping recv loop messages in the debug logs, it does not mean an error occurred; it means that a plugin operation has completed. OADP-2176
In previous releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption.
For more information, see CVE-2023-39325 (Rapid Reset Attack).
For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira.
The CSI plugin errors on a nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it completes the snapshot within the short duration, but often it panics with the backup PartiallyFailed and the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, the backup moves to the WaitingForPluginOperationsPartiallyFailed phase. OADP-2871
When restoring 30,000 resources for the first time, without an existing-resource-policy, it takes twice as long to restore them as it takes during the second and third attempts with an existing-resource-policy set to update. OADP-3071
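As a sketch, the policy is set on the Restore custom resource; the resource names below are placeholders:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_name>
  namespace: openshift-adp
spec:
  backupName: <backup_name>
  existingResourcePolicy: update
# ...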
Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pod's persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC).
VSL backups are PartiallyFailed when GCP workload identity is configured on GCP.
For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira.
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
The Velero server has been updated from version 1.11 to 1.12.
OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover.
This changes the following:
The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration.
The Volsync Operator is no longer required for Data Mover functionality, and you can remove it.
The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them.
The secrets used for the OADP 1.2 Data Mover are no longer required, and you can remove them (see the example commands after this list).
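If you decide to remove them, the following commands are a sketch; the CRD names come from the list above, and the secret name dm-credentials matches the example DPA shown later in the upgrade procedure and might differ in your cluster:
$ oc delete crd volumesnapshotbackups.datamover.oadp.openshift.io volumesnapshotrestores.datamover.oadp.openshift.io
$ oc delete secret dm-credentials -n openshift-adp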
OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic.
To employ Kopia, use the new spec.configuration.nodeAgent field as shown in the following example:
spec:
configuration:
nodeAgent:
enable: true
uploaderType: kopia
# ...
The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the following new syntax:
spec:
configuration:
nodeAgent:
enable: true
uploaderType: restic
# ...
In a future OADP release, it is planned that the kopia tool will become the default uploaderType value.
OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3:
If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup.
If you require off cluster backups:
Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true flag or the backup.spec.defaultVolumesToFsBackup option, for example with the CLI command shown after this list.
Back up the applications with your object storage plugins, for example, velero-plugin-for-aws.
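For example, a file system backup with the Velero CLI might look like the following; the backup name and namespace are placeholders:
$ velero backup create <backup_name> --include-namespaces <application_namespace> --default-volumes-to-fs-backup=true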
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
To restore an OADP 1.2 Data Mover backup, you must uninstall OADP, and install and configure OADP 1.2.
You must back up your current DataProtectionApplication (DPA) configuration.
Save your current DPA configuration by running the following command:
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator.
Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3.
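You can change the channel in the web console or by patching the Subscription resource; the subscription name redhat-oadp-operator below is the common default and might differ in your cluster:
$ oc patch subscription redhat-oadp-operator -n openshift-adp --type merge -p '{"spec":{"channel":"stable-1.3"}}'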
Allow time for the Operator and containers to update and restart.
If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows.
Click Operators → Installed Operators and select the OADP Operator.
In the Provided APIs section, click View more.
Click Create instance in the DataProtectionApplication box.
Click YAML View to display the current DPA parameters.
spec:
configuration:
features:
dataMover:
enable: true
credentialName: dm-credentials
velero:
defaultPlugins:
- vsm
- csi
- openshift
# ...
Update the DPA parameters:
Remove the features.dataMover key and values from the DPA.
Remove the VolumeSnapshotMover (VSM) plugin.
Add the nodeAgent key and values.
spec:
configuration:
nodeAgent:
enable: true
uploaderType: kopia
velero:
defaultPlugins:
- csi
- openshift
# ...
Wait for the DPA to reconcile successfully.
Use the following procedure to verify the upgrade.
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8   2/2     Running   0          2m8s
pod/node-agent-9cq4q                                     1/1     Running   0          94s
pod/node-agent-m4lts                                     1/1     Running   0          94s
pod/node-agent-pv4kr                                     1/1     Running   0          95s
pod/velero-588db7f655-n842v                              1/1     Running   0          95s

NAME                                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140   <none>        8443/TCP   2m8s
service/openshift-adp-velero-metrics-svc                    ClusterIP   172.30.10.0     <none>        8085/TCP   8h

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-agent   3         3         3       3            3           <none>          96s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager   1/1     1            1           2m9s
deployment.apps/velero                             1/1     1            1           96s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47   1         1         1       2m9s
replicaset.apps/velero-588db7f655                             1         1         1       96s
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupStorageLocation -n openshift-adp
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
In OADP 1.3, you can start data movement off cluster per backup, rather than configuring it in the DataProtectionApplication (DPA) custom resource.
$ velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true
apiVersion: velero.io/v1
kind: Backup
metadata:
name: example-backup
namespace: openshift-adp
spec:
snapshotMoveData: true
includedNamespaces:
- mysql-persistent
storageLocation: dpa-sample-1
ttl: 720h0m0s
# ...
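When data movement is enabled per backup in this way, the built-in Data Mover creates DataUpload custom resources; the following check is a suggested way to track their progress:
$ oc get datauploads -n openshift-adp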