You can migrate application workloads by adding your clusters and replication repository to the CAM web console. Then, you can create and run a migration plan.
If your cluster or replication repository are secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.
If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: Certificate signed by unknown authority.
You can create a custom CA certificate bundle file and upload it in the CAM web console when you add a cluster or a replication repository.
Download a CA certificate from a remote endpoint and save it as a CA bundle file:
$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ (1)
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> (2)
1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2 Specify the name of the CA bundle file.
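To confirm that the bundle contains the expected certificate before you upload it, you can inspect it with openssl. This is an optional check that is not part of the documented procedure; it assumes the bundle was saved as <ca_bundle.cert>:
$ openssl x509 -in <ca_bundle.cert> -text -noout   # print the subject, issuer, and validity of the first certificate in the bundle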
You can add a cluster to the CAM web console.
If you are using Azure snapshots to copy data:
You must provide the Azure resource group name when you add the source cluster.
The source and target clusters must be in the same Azure resource group and in the same location.
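If you want to confirm the resource group and its location from the command line before adding the clusters, a minimal sketch using the Azure CLI (an optional check; <resource_group> is a placeholder) is:
$ az group show --name <resource_group> --query location -o tsv   # prints the location of the resource group
Confirm that both clusters belong to this resource group and that the location matches.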
Log in to the cluster.
Obtain the service account token:
$ oc sa get-token mig -n openshift-migration
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
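As an optional check that is not part of the documented procedure, you can confirm that the token authenticates against the cluster's API server before adding the cluster in the CAM web console. The API server URL below is a placeholder:
$ curl -k -H "Authorization: Bearer <service_account_token>" https://<master1.example.com>:8443/version   # returns the Kubernetes version if the token is accepted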
Log in to the CAM web console.
In the Clusters section, click Add cluster.
Fill in the following fields:
Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
Service account token: String that you obtained from the source cluster.
Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
Azure resource group: This field appears if Azure cluster is checked.
If you use a custom CA bundle, click Browse and browse to the CA bundle file.
Click Add cluster.
The cluster appears in the Clusters section.
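The CAM web console represents each added cluster as a MigCluster resource. If you prefer to confirm the result from the command line, a minimal sketch (assuming the default openshift-migration namespace) is:
$ oc get migcluster -n openshift-migration   # list the MigCluster resources created for the added clusters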
You can add an object storage bucket as a replication repository to the CAM web console.
You must configure an object storage bucket for migrating the data.
Log in to the CAM web console.
In the Replication repositories section, click Add repository.
Select a Storage provider type and fill in the following fields:
AWS for AWS S3, MCG, and generic S3 providers:
Replication repository name: Specify the replication repository name in the CAM web console.
S3 bucket name: Specify the name of the S3 bucket you created.
S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
Require SSL verification: Clear this check box if you are using a generic S3 provider.
If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
GCP:
Replication repository name: Specify the replication repository name in the CAM web console.
GCP bucket name: Specify the name of the GCP bucket.
GCP credential JSON blob: Specify the string in the credentials-velero file.
Azure:
Replication repository name: Specify the replication repository name in the CAM web console.
Azure resource group: Specify the resource group of the Azure Blob storage.
Azure storage account name: Specify the Azure Blob storage account name.
Azure credentials - INI file contents: Specify the string in the credentials-velero file.
Click Add repository and wait for connection validation.
Click Close.
The new repository appears in the Replication repositories section.
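The replication repository is represented by a MigStorage resource. As an optional command-line check (a sketch, assuming the default openshift-migration namespace and a placeholder repository name):
$ oc get migstorage -n openshift-migration                          # list the replication repositories
$ oc describe migstorage <repository_name> -n openshift-migration   # review the connection validation conditions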
You can change the migration plan limits for large migrations.
Test any changes in your environment first to avoid a failed migration.
A single migration plan has the following default limits:
10 namespaces
If this limit is exceeded, the CAM web console displays a Namespace limit exceeded error and you cannot create a migration plan.
100 Pods
If the Pod limit is exceeded, the CAM web console displays a warning message similar to the following example: Plan has been validated with warning condition(s). See warning message. Pod limit: 100 exceeded, found: 104.
100 persistent volumes
If the persistent volume limit is exceeded, the CAM web console displays a similar warning message.
Edit the MigrationController custom resource (CR):
$ oc get migrationcontroller -n openshift-migration
NAME                   AGE
migration-controller   5d19h
$ oc edit migrationcontroller -n openshift-migration
Update the following parameters:
...
migration_controller: true
# This configuration is loaded into mig-controller, and should be set on the
# cluster where `migration_controller: true`
mig_pv_limit: 100
mig_pod_limit: 100
mig_namespace_limit: 10
...
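To confirm that the new limits were saved, you can read the MigrationController CR back. This is an optional check, not part of the documented procedure:
$ oc get migrationcontroller migration-controller -n openshift-migration -o yaml | grep -E 'mig_(pv|pod|namespace)_limit'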
You can create a migration plan in the CAM web console.
The CAM web console must contain the following:
Source cluster
Target cluster, which is added automatically during the CAM tool installation
Replication repository
The source and target clusters must have network access to each other and to the replication repository.
If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and in the same region.
Log in to the CAM web console.
In the Plans section, click Add plan.
Enter the Plan name and click Next.
The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).
Select a Source cluster.
Select a Target cluster.
Select a Replication repository.
Select the projects to be migrated and click Next.
Select Copy or Move for the PVs:
Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
Optional: You can verify data copied with the filesystem method by selecting Verify copy. This option, which generates a checksum for each source file and checks it after restoration, significantly reduces performance.
Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
Click Next.
Select a Copy method for the PVs:
Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.
The storage and clusters must be in the same region and the storage class must be compatible.
Filesystem copies the data files from the source disk to a newly created target disk.
Select a Storage class for the PVs.
If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.
Click Next.
If you want to add a migration hook, click Add Hook and perform the following steps:
Specify the name of the hook.
Select Ansible playbook to use your own playbook or Custom container image for a hook written in another language.
Click Browse to upload the playbook. A minimal example playbook is shown after this procedure.
Optional: If you are not using the default Ansible runtime image, specify your custom Ansible image.
Specify the cluster on which you want the hook to run.
Specify the service account name.
Specify the namespace.
Select the migration step at which you want the hook to run:
PreBackup: Before backup tasks are started on the source cluster
PostBackup: After backup tasks are complete on the source cluster
PreRestore: Before restore tasks are started on the target cluster
PostRestore: After restore tasks are complete on the target cluster
Click Add.
You can add up to four hooks to a migration plan, assigning each hook to a different migration step.
Click Finish.
Click Close.
The migration plan appears in the Plans section.
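If you selected Ansible playbook when adding a hook, the uploaded file is a standard Ansible playbook. The following is a minimal, hypothetical sketch; the task shown is a placeholder for your own migration logic, not content that CAM requires:
# example-hook.yml - placeholder playbook for a PreBackup hook
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Announce the hook run   # replace with your own pre-migration tasks
    debug:
      msg: "Running the PreBackup hook on the source cluster"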
You can stage or migrate applications and data with the migration plan you created in the CAM web console.
The CAM web console must contain the following:
Source cluster
Target cluster, which is added automatically during the CAM tool installation
Replication repository
Valid migration plan
Log in to the CAM web console on the target cluster.
Select a migration plan.
Click Stage to copy data from the source cluster to the target cluster without stopping the application.
You can run Stage multiple times to reduce the actual migration time.
When you are ready to migrate the application workload, click Migrate.
Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.
Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
Click Migrate.
Optional: To stop a migration in progress, click the Options menu and select Cancel.
When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:
Click Home → Projects.
Click the migrated project to view its status.
In the Routes section, click Location to verify that the application is functioning, if applicable.
Click Workloads → Pods to verify that the Pods are running in the migrated namespace.
Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.
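You can make the same checks from the command line. The following is a sketch, where <namespace> is the migrated project:
$ oc get migmigration -n openshift-migration   # migration status recorded by the migration controller
$ oc get pods -n <namespace>                   # Pods running in the migrated namespace
$ oc get pvc -n <namespace>                    # persistent volume claims bound in the migrated namespace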