You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or on the command line.
If you are using self-signed CA certificates to secure the clusters or the replication repository, you can create a CA certificate bundle file or disable SSL verification.
If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority.
You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.
Download a CA certificate from a remote endpoint and save it as a CA bundle file:
$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ (1)
| sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> (2)
1 | Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . |
2 | Specify the name of the CA bundle file. |
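Optionally, verify that the bundle contains the certificate you expect. The following openssl command, a minimal check, prints the subject, issuer, and validity dates of the first certificate in the file:
$ openssl x509 -in <ca_bundle.cert> -noout -subject -issuer -dates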
You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan.
You can add a cluster to the Migration Toolkit for Containers (MTC) web console.
If you are using Azure snapshots to copy data:
You must provide the Azure resource group name when you add the source cluster.
The source and target clusters must be in the same Azure resource group and in the same location.
Log in to the cluster.
Obtain the service account token:
$ oc sa get-token migration-controller -n openshift-migration
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
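Optionally, you can confirm that the token is accepted by the cluster API before you add the cluster. The following curl command is a minimal sketch, assuming the API server URL of the cluster; the -k flag skips certificate verification for self-signed certificates:
$ curl -k -H "Authorization: Bearer <sa_token>" https://<host_FQDN>:<port>/version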
In the MTC web console, click Clusters.
Click Add cluster.
Fill in the following fields:
Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.
Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.
Service account token: String that you obtained from the source cluster.
Exposed route to image registry: Optional. You can specify a route to the image registry of your source cluster to enable direct migration for images, for example, docker-registry-default.apps.cluster.com.
Direct migration is much faster than migration with a replication repository.
Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.
Azure resource group: This field appears if Azure cluster is checked.
If you use a custom CA bundle, click Browse and browse to the CA bundle file.
Click Add cluster.
The cluster appears in the Clusters list.
You can add an object storage bucket as a replication repository to the Migration Toolkit for Containers (MTC) web console.
You must configure an object storage bucket for migrating the data.
In the MTC web console, click Replication repositories.
Click Add repository.
Select a Storage provider type and fill in the following fields:
AWS for AWS S3, MCG, and generic S3 providers:
Replication repository name: Specify the replication repository name in the MTC web console.
S3 bucket name: Specify the name of the S3 bucket you created.
S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG. You can verify these S3 settings with the optional command-line check after this list.
Require SSL verification: Clear this check box if you are using a generic S3 provider.
If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
GCP:
Replication repository name: Specify the replication repository name in the MTC web console.
GCP bucket name: Specify the name of the GCP bucket.
GCP credential JSON blob: Specify the string in the credentials-velero file.
Azure:
Replication repository name: Specify the replication repository name in the MTC web console.
Azure resource group: Specify the resource group of the Azure Blob storage.
Azure storage account name: Specify the Azure Blob storage account name.
Azure credentials - INI file contents: Specify the string in the credentials-velero file.
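Optional: before you add an S3 replication repository, you can confirm that the bucket, endpoint, and credentials work together. The following command is a minimal sketch, assuming the AWS CLI is installed and configured with the same access key pair; the --endpoint-url flag is needed only for non-AWS S3 providers:
$ aws s3 ls s3://<s3_bucket_name> --endpoint-url https://<s3-storage.apps.cluster.com>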
Click Add repository and wait for connection validation.
Click Close.
The new repository appears in the Replication repositories list.
You can create a migration plan in the Migration Toolkit for Containers (MTC) web console.
You can use direct image migration and direct volume migration to migrate images or volumes directly from the source cluster to the target cluster. Direct migration improves performance significantly.
You must add source and target clusters and a replication repository to the MTC web console.
The clusters must have network access to each other.
The clusters must have network access to the replication repository.
The clusters must be able to communicate using OpenShift routes on port 443.
The clusters must have no Critical conditions.
The clusters must be in a Ready state.
The migration plan name must not exceed 253 lower-case alphanumeric characters (a-z, 0-9) and must not contain spaces or underscores (_).
PV Move copy method: The clusters must have network access to the remote volume.
PV Snapshot copy method:
The clusters must have the same cloud provider (AWS, GCP, or Azure).
The clusters must be located in the same geographic region.
The storage class must be the same on the source and target clusters.
Direct image migration:
The source cluster must have its internal registry exposed to external traffic.
The exposed registry route of the source cluster must be added to the cluster configuration using the MTC web console or with the exposedRegistryPath parameter in the MigCluster CR manifest.
Direct volume migration:
The PVs to be migrated must be valid.
The PVs must be in a Bound state.
The PV migration method must be Copy and the copy method must be filesystem. You can check the PV and PVC states with the optional commands after this list.
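You can check the states of the PVCs and PVs on the source cluster before you create the plan. The following commands are a minimal sketch, assuming the namespace of the application being migrated:
$ oc get pvc -n <application_namespace>
$ oc get pv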
In the MTC web console, click Migration plans.
Click Add migration plan.
Enter the Plan name and click Next.
Select a Source cluster, a Target cluster, and a Repository, and click Next.
In the Namespaces screen, select the projects to be migrated and click Next.
In the Persistent volumes screen, click a Migration type for each PV:
The Copy option copies the data from the PV of a source cluster to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
Click Next.
In the Copy options screen, select a Copy method for each PV:
Snapshot copy backs up and restores data using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem copy.
Filesystem copy backs up the files on the source cluster and restores them on the target cluster.
You can select Verify copy to verify data migrated with Filesystem copy. Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance.
Select a Target storage class.
You can change the storage class of data migrated with Filesystem copy.
Click Next.
In the Migration options screen, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy.
The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster.
Click Next.
In the Hooks screen, click Add Hook to add a hook to the migration plan.
Enter the hook name.
If your hook is an Ansible playbook, click Browse to upload the playbook and update the Ansible runtime image field if you are using a custom Ansible image.
If your hook is not an Ansible playbook, click Custom container image and specify the image name and path.
Click Source cluster or Target cluster on which the hook should run.
Enter the Service account name and the Service account namespace of the cluster.
Select the migration step when the hook should run:
PreBackup: Before backup tasks are started on the source cluster
PostBackup: After backup tasks are complete on the source cluster
PreRestore: Before restore tasks are started on the target cluster
PostRestore: After restore tasks are complete on the target cluster
Click Add Hook and then click Close.
You can add up to four hooks to a single migration plan. Each hook runs during a different migration step.
Click Finish and then click Close.
The migration plan is displayed in the Migration plans list.
You can stage or migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console.
MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain.
The MTC web console must contain the following:
Source cluster in a Ready state
Target cluster in a Ready state
Replication repository
Valid migration plan
Log in to the source cluster.
Delete old images:
$ oc adm prune images
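Without additional options, oc adm prune images performs a dry run that only lists the images that would be removed. To delete the images, run the command with the --confirm flag:
$ oc adm prune images --confirm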
Log in to the MTC web console and click Migration plans.
Click the Options menu next to a migration plan and select Stage to copy data from the source cluster to the target cluster without stopping the application.
You can run Stage multiple times to reduce the actual migration time.
When you are ready to migrate the application workload, click the Options menu beside a migration plan and select Migrate.
Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
Click Migrate.
When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:
Click Home → Projects.
Click the migrated project to view its status.
In the Routes section, click Location to verify that the application is functioning, if applicable.
Click Workloads → Pods to verify that the pods are running in the migrated namespace.
Click Storage → Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
You can migrate your applications on the command line by using the MTC custom resources (CRs).
You can migrate applications from a local cluster to a remote cluster, from a remote cluster to a local cluster, and between remote clusters.
The following terms are relevant for configuring clusters:
host cluster:
The migration-controller pod runs on the host cluster.
A host cluster does not require an exposed secure registry route for direct image migration.
Local cluster: The local cluster is often the same as the host cluster but this is not a requirement.
Remote cluster:
A remote cluster must have an exposed secure registry route for direct image migration.
A remote cluster must have a Secret CR containing the migration-controller service account token.
The following terms are relevant for performing a migration:
Source cluster: Cluster from which the applications are migrated.
Destination cluster: Cluster to which the applications are migrated.
You can migrate your applications on the command line with the Migration Toolkit for Containers (MTC) API.
You can migrate applications from a local cluster to a remote cluster, from a remote cluster to a local cluster, and between remote clusters.
This procedure describes how to perform indirect migration and direct migration:
Indirect migration: Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster.
Direct migration: Images or volumes are copied directly from the source cluster to the destination cluster. Direct image migration and direct volume migration have significant performance benefits.
You create the following custom resources (CRs) to perform a migration:
MigCluster CR: Defines a host, local, or remote cluster
The migration-controller pod runs on the host cluster.
Secret CR: Contains credentials for a remote cluster or storage
MigStorage CR: Defines a replication repository
Different storage providers require different parameters in the MigStorage CR manifest.
MigPlan CR: Defines a migration plan
MigMigration CR: Performs a migration defined in an associated MigPlan CR
You can create multiple MigMigration CRs for a single MigPlan CR for the following purposes:
To perform stage migrations, which copy most of the data without stopping the application, before running a migration. Stage migrations improve the performance of the migration.
To cancel a migration in progress
To roll back a completed migration
You must have cluster-admin privileges for all clusters.
You must install the OpenShift Container Platform CLI (oc).
You must install the Migration Toolkit for Containers Operator on all clusters.
The version of the installed Migration Toolkit for Containers Operator must be the same on all clusters.
You must configure an object storage as a replication repository.
If you are using direct image migration, you must expose a secure registry route on all remote clusters.
Create a MigCluster CR manifest for the host cluster called host-cluster.yaml:
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: host
  namespace: openshift-migration
spec:
  isHostCluster: true
Create a MigCluster CR for the host cluster:
$ oc create -f host-cluster.yaml -n openshift-migration
Create a Secret CR manifest for each remote cluster called cluster-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: <cluster_secret>
  namespace: openshift-config
type: Opaque
data:
  saToken: <sa_token> (1)
1 | Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. |
You can obtain the SA token by running the following command:
$ oc sa get-token migration-controller -n openshift-migration | base64 -w 0
Create a Secret CR for each remote cluster:
$ oc create -f cluster-secret.yaml
Create a MigCluster CR manifest for each remote cluster called remote-cluster.yaml:
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <remote_cluster>
  namespace: openshift-migration
spec:
  exposedRegistryPath: <exposed_registry_route> (1)
  insecure: false (2)
  isHostCluster: false
  serviceAccountSecretRef:
    name: <remote_cluster_secret> (3)
    namespace: openshift-config
  url: <remote_cluster_url> (4)
1 | Optional: Specify the exposed registry route, for example, docker-registry-default.apps.example.com if you are using direct image migration. |
2 | SSL verification is enabled if false . CA certificates are not required or checked if true . |
3 | Specify the Secret CR of the remote cluster. |
4 | Specify the URL of the remote cluster. |
Create a MigCluster CR for each remote cluster:
$ oc create -f remote-cluster.yaml -n openshift-migration
Verify that all clusters are in a Ready state:
$ oc describe migcluster <cluster_name> -n openshift-migration
Create a Secret CR manifest for the replication repository called storage-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-config
  name: <migstorage_creds>
type: Opaque
data:
  aws-access-key-id: <key_id_base64> (1)
  aws-secret-access-key: <secret_key_base64> (2)
1 | Specify the key ID in base64 format. |
2 | Specify the secret key in base64 format. |
AWS credentials are base64-encoded by default. If you are using another storage provider, you must encode your credentials by running the following command with each key:
$ echo -n "<key>" | base64 -w 0 (1)
1 | Specify the key ID or the secret key. Both keys must be base64-encoded. |
Create the Secret CR for the replication repository:
$ oc create -f storage-secret.yaml
Create a MigStorage CR manifest for the replication repository called migstorage.yaml:
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <storage_name>
  namespace: openshift-migration
spec:
  backupStorageConfig:
    awsBucketName: <bucket_name> (1)
    credsSecretRef:
      name: <storage_secret_ref> (2)
      namespace: openshift-config
  backupStorageProvider: <storage_provider_name> (3)
  volumeSnapshotConfig:
    credsSecretRef:
      name: <storage_secret_ref> (4)
      namespace: openshift-config
  volumeSnapshotProvider: <storage_provider_name> (5)
1 | Specify the bucket name. |
2 | Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. |
3 | Specify the storage provider. |
4 | Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. |
5 | Optional: If you are copying data by using snapshots, specify the storage provider. |
Create the MigStorage CR:
$ oc create -f migstorage.yaml -n openshift-migration
Verify that the MigStorage CR is in a Ready state:
$ oc describe migstorage <migstorage_name> -n openshift-migration
Create a MigPlan CR manifest called migplan.yaml:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migration_plan>
  namespace: openshift-migration
spec:
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  indirectImageMigration: true (1)
  indirectVolumeMigration: true (2)
  migStorageRef:
    name: <migstorage_ref> (3)
    namespace: openshift-migration
  namespaces:
    - <application_namespace> (4)
  srcMigClusterRef:
    name: <remote_cluster_ref> (5)
    namespace: openshift-migration
1 | Direct image migration is enabled if false . |
2 | Direct volume migration is enabled if false . |
3 | Specify the name of the MigStorage CR instance. |
4 | Specify one or more namespaces to be migrated. |
5 | Specify the name of the source cluster MigCluster instance. |
Create the MigPlan CR:
$ oc create -f migplan.yaml -n openshift-migration
View the MigPlan instance to verify that it is in a Ready state:
$ oc describe migplan <migplan_name> -n openshift-migration
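As an alternative to scanning the describe output, you can query the Ready condition directly. The following JSONPath query is a minimal sketch, assuming the MigPlan CR reports a Ready condition in status.conditions:
$ oc get migplan <migplan_name> -n openshift-migration -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'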
Create a MigMigration CR manifest called migmigration.yaml:
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration_name>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan_name> (1)
    namespace: openshift-migration
  quiescePods: true (2)
  stage: false (3)
  rollback: false (4)
1 | Specify the MigPlan CR name. |
2 | The pods on the source cluster are stopped before migration if true . |
3 | A stage migration, which copies most of the data without stopping the application, is performed if true . |
4 | A completed migration is rolled back if true . |
Create the MigMigration CR to start the migration defined in the MigPlan CR:
$ oc create -f migmigration.yaml -n openshift-migration
Verify the progress of the migration by viewing the MigMigration CR:
$ oc describe migmigration <migmigration_name> -n openshift-migration
The output resembles the following:
Name:         c8b034c0-6567-11eb-9a4f-0bc004db0fbc
Namespace:    openshift-migration
Labels:       migration.openshift.io/migplan-name=django
Annotations:  openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c
API Version:  migration.openshift.io/v1alpha1
Kind:         MigMigration
...
Spec:
  Mig Plan Ref:
    Name:       my_application
    Namespace:  openshift-migration
  Stage:        false
Status:
  Conditions:
    Category:              Advisory
    Last Transition Time:  2021-02-02T15:04:09Z
    Message:               Step: 19/47
    Reason:                InitialBackupCreated
    Status:                True
    Type:                  Running
    Category:              Required
    Last Transition Time:  2021-02-02T15:03:19Z
    Message:               The migration is ready.
    Status:                True
    Type:                  Ready
    Category:              Required
    Durable:               true
    Last Transition Time:  2021-02-02T15:04:05Z
    Message:               The migration registries are healthy.
    Status:                True
    Type:                  RegistriesHealthy
  Itinerary:               Final
  Observed Digest:         7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5
  Phase:                   InitialBackupCreated
  Pipeline:
    Completed:             2021-02-02T15:04:07Z
    Message:               Completed
    Name:                  Prepare
    Started:               2021-02-02T15:03:18Z
    Message:               Waiting for initial Velero backup to complete.
    Name:                  Backup
    Phase:                 InitialBackupCreated
    Progress:
      Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s)
    Started:               2021-02-02T15:04:07Z
    Message:               Not started
    Name:                  StageBackup
    Message:               Not started
    Name:                  StageRestore
    Message:               Not started
    Name:                  DirectImage
    Message:               Not started
    Name:                  DirectVolume
    Message:               Not started
    Name:                  Restore
    Message:               Not started
    Name:                  Cleanup
  Start Timestamp:         2021-02-02T15:03:18Z
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Running 57s migmigration_controller Step: 2/47
Normal Running 57s migmigration_controller Step: 3/47
Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47
Normal Running 54s migmigration_controller Step: 5/47
Normal Running 54s migmigration_controller Step: 6/47
Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47
Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47
Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready.
Normal Running 50s migmigration_controller Step: 9/47
Normal Running 50s migmigration_controller Step: 10/47
Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests to create CRs for migrating applications.
The DirectImageMigration CR copies images directly from the source cluster to the destination cluster.
apiVersion: migration.openshift.io/v1alpha1
kind: DirectImageMigration
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <directimagemigration_name>
spec:
  srcMigClusterRef:
    name: <source_cluster_ref> (1)
    namespace: openshift-migration
  destMigClusterRef:
    name: <destination_cluster_ref> (2)
    namespace: openshift-migration
  namespaces:
    - <namespace> (3)
1 | Specify the MigCluster CR name of the source cluster. |
2 | Specify the MigCluster CR name of the destination cluster. |
3 | Specify one or more namespaces containing images to be migrated. |
The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster.
apiVersion: migration.openshift.io/v1alpha1
kind: DirectImageStreamMigration
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <directimagestreammigration_name>
spec:
  srcMigClusterRef:
    name: <source_cluster_ref> (1)
    namespace: openshift-migration
  destMigClusterRef:
    name: <destination_cluster_ref> (2)
    namespace: openshift-migration
  imageStreamRef:
    name: <image_stream_name> (3)
    namespace: <source_image_stream_namespace> (4)
  destNamespace: <destination_image_stream_namespace> (5)
1 | Specify the MigCluster CR name of the source cluster. |
2 | Specify the MigCluster CR name of the destination cluster. |
3 | Specify the image stream name. |
4 | Specify the image stream namespace on the source cluster. |
5 | Specify the image stream namespace on the destination cluster. |
The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster.
apiVersion: migration.openshift.io/v1alpha1
kind: DirectVolumeMigration
metadata:
  name: <directvolumemigration_name>
  namespace: openshift-migration
spec:
  createDestinationNamespaces: false (1)
  deleteProgressReportingCRs: false (2)
  destMigClusterRef:
    name: host (3)
    namespace: openshift-migration
  persistentVolumeClaims:
    - name: <pvc_name> (4)
      namespace: <pvc_namespace> (5)
  srcMigClusterRef:
    name: <source_cluster_ref> (6)
    namespace: openshift-migration
1 | Namespaces are created for the PVs on the destination cluster if true . |
2 | The DirectVolumeMigrationProgress CRs are deleted after migration if true . The default value is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. |
3 | Update the cluster name if the destination cluster is not the host cluster. |
4 | Specify one or more PVCs to be migrated with direct volume migration. |
5 | Specify the namespace of each PVC. |
6 | Specify the MigCluster CR name of the source cluster. |
The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR.
apiVersion: migration.openshift.io/v1alpha1
kind: DirectVolumeMigrationProgress
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <directvolumemigrationprogress_name>
spec:
  clusterRef:
    name: <source_cluster>
    namespace: openshift-migration
  podRef:
    name: <rsync_pod>
    namespace: openshift-migration
The MigAnalytic CR collects the number of images, Kubernetes resources, and the PV capacity from an associated MigPlan CR.
apiVersion: migration.openshift.io/v1alpha1
kind: MigAnalytic
metadata:
  annotations:
    migplan: <migplan_name> (1)
  name: <miganalytic_name>
  namespace: openshift-migration
  labels:
    migplan: <migplan_name> (2)
spec:
  analyzeImageCount: true (3)
  analyzeK8SResources: true (4)
  analyzePVCapacity: true (5)
  listImages: false (6)
  listImagesLimit: 50 (7)
  migPlanRef:
    name: <migplan_name> (8)
    namespace: openshift-migration
1 | Specify the MigPlan CR name associated with the MigAnalytic CR. |
2 | Specify the MigPlan CR name associated with the MigAnalytic CR. |
3 | Optional: The number of images is returned if true . |
4 | Optional: Returns the number, kind, and API version of the Kubernetes resources if true . |
5 | Optional: Returns the PV capacity if true . |
6 | Returns a list of image names if true . Default is false so that the output is not excessively long. |
7 | Optional: Specify the maximum number of image names to return if listImages is true . |
8 | Specify the MigPlan CR name associated with the MigAnalytic CR. |
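After you create a MigAnalytic CR, the collected numbers are reported in its status. The following commands are a minimal sketch, assuming you saved the manifest as miganalytic.yaml (a hypothetical file name):
$ oc create -f miganalytic.yaml -n openshift-migration
$ oc describe miganalytic <miganalytic_name> -n openshift-migration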
The MigCluster CR defines a host, local, or remote cluster.
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: host (1)
  namespace: openshift-migration
spec:
  isHostCluster: true (2)
  azureResourceGroup: <azure_resource_group> (3)
  caBundle: <ca_bundle_base64> (4)
  insecure: false (5)
  refresh: false (6)
# The 'restartRestic' parameter is relevant for a source cluster.
# restartRestic: true (7)
# The following parameters are relevant for a remote cluster.
# isHostCluster: false
# exposedRegistryPath: (8)
# url: <destination_cluster_url> (9)
# serviceAccountSecretRef:
#   name: <source_secret_ref> (10)
#   namespace: openshift-config
1 | Optional: Update the cluster name if the migration-controller pod is not running on this cluster. |
2 | The migration-controller pod runs on this cluster if true . |
3 | Optional: If the storage provider is Microsoft Azure, specify the resource group. |
4 | Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. |
5 | SSL verification is enabled if false . |
6 | The cluster is validated if true . |
7 | The restic pods are restarted on the source cluster after the stage pods are created if true . |
8 | Optional: If you are using direct image migration, specify the exposed registry path of a remote cluster. |
9 | Specify the URL of the remote cluster. |
10 | Specify the name of the Secret CR for the remote cluster. |
The MigHook CR defines an Ansible playbook or a custom image that runs tasks at a specified stage of the migration.
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  generateName: <hook_name_prefix> (1)
  name: <hook_name> (2)
  namespace: openshift-migration
spec:
  activeDeadlineSeconds: (3)
  custom: false (4)
  image: <hook_image> (5)
  playbook: <ansible_playbook_base64> (6)
  targetCluster: source (7)
1 | Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. |
2 | Specify the migration hook name, unless you specify the value of the generateName parameter. |
3 | Optional: Specify the maximum number of seconds that a hook can run. The default value is 1800 . |
4 | The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. |
5 | Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . |
6 | Specify the entire base64-encoded Ansible playbook. Required if custom is false . |
7 | Specify source or destination as the cluster on which the hook will run. |
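Because the playbook parameter expects the entire playbook as a base64-encoded string, you can generate the value from a playbook file. The following command is a minimal sketch, assuming a local file named playbook.yml (a hypothetical name); the -w 0 option disables line wrapping:
$ base64 -w 0 playbook.yml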
The MigMigration CR runs an associated MigPlan CR.
You can create multiple MigMigration CRs associated with the same MigPlan CR for the following scenarios:
You can run multiple stage or incremental migrations to copy data without stopping the pods on the source cluster. Running stage migrations improves the performance of the actual migration.
You can cancel a migration in progress.
You can roll back a migration.
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <migmigration_name>
  namespace: openshift-migration
spec:
  canceled: false (1)
  rollback: false (2)
  stage: false (3)
  quiescePods: true (4)
  keepAnnotations: true (5)
  verify: false (6)
  migPlanRef:
    name: <migplan_ref> (7)
    namespace: openshift-migration
1 | A migration in progress is canceled if true . |
2 | A completed migration is rolled back if true . |
3 | Data is copied incrementally and the pods on the source cluster are not stopped if true . |
4 | The pods on the source cluster are scaled to 0 after the Backup stage of a migration if true . |
5 | The labels and annotations applied during the migration are retained if true . |
6 | The status of the migrated pods on the destination cluster are checked and the names of pods that are not in a Running state are returned if true . |
7 | migPlanRef.name : Specify the name of the associated MigPlan CR. |
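For example, to cancel a migration in progress, you can set the canceled parameter on the live resource instead of editing the manifest. The following oc patch command is a minimal sketch:
$ oc patch migmigration <migmigration_name> -n openshift-migration --type merge -p '{"spec":{"canceled":true}}'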
The MigPlan CR defines the parameters of a migration plan. It contains a group of namespaces that are migrated with the same parameters.
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <migplan_name>
  namespace: openshift-migration
spec:
  closed: false (1)
  srcMigClusterRef:
    name: <source_migcluster_ref> (2)
    namespace: openshift-migration
  destMigClusterRef:
    name: <destination_migcluster_ref> (3)
    namespace: openshift-migration
  hooks: (4)
    - executionNamespace: openshift-migration
      phase: <migration_phase> (5)
      reference:
        name: <hook_name> (6)
        namespace: openshift-migration
      serviceAccount: migration-controller
  indirectImageMigration: true (7)
  indirectVolumeMigration: false (8)
  migStorageRef:
    name: <migstorage_name> (9)
    namespace: openshift-migration
  namespaces:
    - <namespace> (10)
  refresh: false (11)
1 | The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. |
2 | Specify the name of the source cluster MigCluster CR. |
3 | Specify the name of the destination cluster MigCluster CR. |
4 | Optional: You can specify up to four migration hooks. |
5 | Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. The expected values are PreBackup , PostBackup , PreRestore , and PostRestore . |
6 | Optional: Specify the name of the MigHook CR. |
7 | Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. |
8 | Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. |
9 | Specify the name of MigStorage CR. |
10 | Specify one or more namespaces. |
11 | The MigPlan CR is validated if true . |
The MigStorage CR describes the object storage for the replication repository. You can configure Amazon Web Services, Microsoft Azure, Google Cloud Storage, and generic S3-compatible cloud storage, for example, Minio or NooBaa.
Different providers require different parameters.
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: <migstorage_name>
  namespace: openshift-migration
spec:
  backupStorageProvider: <storage_provider> (1)
  volumeSnapshotProvider: (2)
  backupStorageConfig:
    awsBucketName: (3)
    awsRegion: (4)
    credsSecretRef:
      namespace: openshift-config
      name: <storage_secret> (5)
    awsKmsKeyId: (6)
    awsPublicUrl: (7)
    awsSignatureVersion: (8)
  volumeSnapshotConfig:
    awsRegion: (9)
    credsSecretRef:
      namespace: openshift-config
      name: (10)
  refresh: false (11)
1 | Specify the storage provider. |
2 | Optional: If you are using the snapshot copy method, specify the storage provider. |
3 | If you are using AWS, specify the bucket name. |
4 | If you are using AWS, specify the bucket region, for example, us-east-1 . |
5 | Specify the name of the Secret CR that you created for the MigStorage CR. |
6 | Optional: If you are using the AWS Key Management Service, specify the unique identifier of the key. |
7 | Optional: If you granted public access to the AWS bucket, specify the bucket URL. |
8 | Optional: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . |
9 | Optional: If you are using the snapshot copy method, specify the geographical region of the clusters. |
10 | Optional: If you are using the snapshot copy method, specify the name of the Secret CR that you created for the MigStorage CR. |
11 | The cluster is validated if true . |
You can configure a migration plan by increasing the number of objects migrated or excluding resources from migration.
You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).
You must test these changes before you perform a migration in a production environment.
Edit the MigrationController CR manifest:
$ oc edit migrationcontroller -n openshift-migration
Update the following parameters:
...
mig_controller_limits_cpu: "1" (1)
mig_controller_limits_memory: "10Gi" (2)
...
mig_controller_requests_cpu: "100m" (3)
mig_controller_requests_memory: "350Mi" (4)
...
mig_pv_limit: 100 (5)
mig_pod_limit: 100 (6)
mig_namespace_limit: 10 (7)
...
1 | Specifies the number of CPUs available to the MigrationController CR. |
2 | Specifies the amount of memory available to the MigrationController CR. |
3 | Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). |
4 | Specifies the amount of memory available for MigrationController CR requests. |
5 | Specifies the number of persistent volumes that can be migrated. |
6 | Specifies the number of pods that can be migrated. |
7 | Specifies the number of namespaces that can be migrated. |
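As an alternative to interactive editing, you can update a single limit with oc patch. The following command is a minimal sketch that raises the PV limit; the value shown is an example:
$ oc patch migrationcontroller migration-controller -n openshift-migration --type merge -p '{"spec":{"mig_pv_limit":150}}'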
Create a migration plan that uses the updated parameters to verify the changes.
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the resource load for migration or to migrate images or PVs with a different tool.
By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.
Edit the MigrationController CR manifest:
$ oc edit migrationcontroller -n openshift-migration
Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  disable_image_migration: true (1)
  disable_pv_migration: true (2)
  ...
  excluded_resources: (3)
    - imagetags
    - templateinstances
    - clusterserviceversions
    - packagemanifests
    - subscriptions
    - servicebrokers
    - servicebindings
    - serviceclasses
    - serviceinstances
    - serviceplans
    - operatorgroups
    - events
1 | Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts. |
2 | Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. |
3 | You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete the default excluded resources. These resources are problematic to migrate and must be excluded. |
Wait two minutes for the MigrationController pod to restart so that the changes are applied.
Verify that the resource is excluded:
$ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1
The output contains the excluded resources:
- name: EXCLUDED_RESOURCES
value:
imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims