You must add your clusters and a replication repository to the MTC web console. Then, you can create and run a migration plan.

If your clusters or replication repository are secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ (1)
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> (2)
1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2 Specify the name of the CA bundle file.
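
You can optionally inspect the bundle before uploading it to confirm that the certificate belongs to the expected endpoint. This is a quick check, assuming the bundle was saved as <ca_bundle.cert>; openssl prints the subject, issuer, and validity dates of the first certificate in the file:

$ openssl x509 -in <ca_bundle.cert> -noout -subject -issuer -dates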

Configuring a migration plan

Increasing limits for large migrations

You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).

You must test these changes before you perform a migration in a production environment.

Procedure
  1. Edit the MigrationController CR manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" (1)
    mig_controller_limits_memory: "10Gi" (2)
    ...
    mig_controller_requests_cpu: "100m" (3)
    mig_controller_requests_memory: "350Mi" (4)
    ...
    mig_pv_limit: 100 (5)
    mig_pod_limit: 100 (6)
    mig_namespace_limit: 10 (7)
    ...
    1 Specifies the number of CPUs available to the MigrationController CR.
    2 Specifies the amount of memory available to the MigrationController CR.
    3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4 Specifies the amount of memory available for MigrationController CR requests.
    5 Specifies the number of persistent volumes that can be migrated.
    6 Specifies the number of pods that can be migrated.
    7 Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
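
If you prefer to update the limits non-interactively, for example from a script, you can patch the MigrationController CR instead of editing it. The following command is a minimal sketch, assuming the default CR name migration-controller and the example values shown above; adjust the parameters for your environment:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
    --type=merge \
    -p '{"spec":{"mig_pv_limit":100,"mig_pod_limit":100,"mig_namespace_limit":10}}'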

Excluding resources from a migration plan

You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the load or to migrate images or PVs with a different tool.

Procedure
  1. Edit the MigrationController CR manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true (1)
      disable_pv_migration: true (2)
      ...
      excluded_resources: (3)
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1 Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts.
    2 Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3 You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the MigrationController pod to restart so that the changes are applied.

  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources:

    Example output
        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
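
You can also confirm the exclusion settings directly on the MigrationController CR. This optional check assumes the default CR name migration-controller:

$ oc get migrationcontroller migration-controller -n openshift-migration -o yaml \
  | grep disable_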

Adding a cluster to the Migration Toolkit for Containers web console

You can add a cluster to the Migration Toolkit for Containers (MTC) web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.

  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure
  1. Log in to the cluster.

  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration
    Example output
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
  3. In the MTC web console, click Clusters.

  4. Click Add cluster.

  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.

    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.

    • Service account token: String that you obtained from the source cluster.

    • Exposed route to image registry: Optional. You can specify a route to the image registry of your source cluster to enable direct migration for images, for example, docker-registry-default.apps.cluster.com.

      Direct migration is much faster than migration with a replication repository.

    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.

    • Azure resource group: This field appears if Azure cluster is checked.

    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.

  6. Click Add cluster.

    The cluster appears in the Clusters list.
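
If you are unsure of the values for the Url and Exposed route to image registry fields, you can read them from the source cluster. The following commands are a sketch: the first prints the API server URL, and the other two print the registry route host on OpenShift Container Platform 4 and 3 clusters, respectively. The route exists only if the registry is exposed:

$ oc whoami --show-server
$ oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}'
$ oc get route docker-registry -n default -o jsonpath='{.spec.host}'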

Adding a replication repository to the MTC web console

You can add an object storage bucket as a replication repository to the Migration Toolkit for Containers (MTC) web console.

Prerequisites
  • You must configure an object storage bucket for migrating the data.

Procedure
  1. In the MTC web console, click Replication repositories.

  2. Click Add repository.

  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the MTC web console.

      • S3 bucket name: Specify the name of the S3 bucket you created.

      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.

      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.

      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.

      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.

      • Require SSL verification: Clear this check box if you are using a generic S3 provider.

      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.

    • GCP:

      • Replication repository name: Specify the replication repository name in the MTC web console.

      • GCP bucket name: Specify the name of the GCP bucket.

      • GCP credential JSON blob: Specify the string in the credentials-velero file.

    • Azure:

      • Replication repository name: Specify the replication repository name in the MTC web console.

      • Azure resource group: Specify the resource group of the Azure Blob storage.

      • Azure storage account name: Specify the Azure Blob storage account name.

      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.

  4. Click Add repository and wait for connection validation.

  5. Click Close.

    The new repository appears in the Replication repositories list.
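
Before you add an S3 repository, you can verify that the bucket is reachable with the credentials that you plan to enter. This optional check assumes that the aws CLI is installed and configured with the same access key and secret access key; omit --endpoint-url for AWS S3:

$ aws s3 ls s3://<bucket_name> --endpoint-url https://<s3-storage.apps.cluster.com>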

Creating a migration plan in the MTC web console

You can create a migration plan in the Migration Toolkit for Containers (MTC) web console.

Prerequisites
  • The source and target clusters must have network access to each other and to the replication repository.

  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.

Procedure
  1. In the MTC web console, click Migration plans.

  2. Click Add migration plan.

  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.

  5. Select a Target cluster.

  6. Select a Replication repository.

  7. Select the projects to be migrated and click Next.

  8. In the Persistent volumes screen, select Copy or Move for the PVs:

    • Copy copies the data from the PV of a source cluster to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      If you specified a route to an image registry when you added the source cluster to the web console, you can migrate images directly from the source cluster to the target cluster.

    • Move unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

  9. Click Next.

  10. In the Copy options screen, select a Copy method for the PVs:

    • Snapshot copy backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem copy.

      The storage and clusters must be in the same region and the storage classes must be compatible.

    • Filesystem copy backs up the files on the source cluster and restores them on the target cluster.

  11. You can select Verify copy to verify data migrated with Filesystem copy. Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance.

  12. Select a Target storage class.

    If you selected Filesystem copy, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  13. Click Next.

  14. In the Migration options screen, the Use direct image migration and Use direct PV migration for filesystem copies options are selected if you specified an image registry route for the source cluster.

    Direct migration is much faster than migrating files and images with a replication repository.

  15. If you want to add a migration hook, click Add Hook and perform the following steps for each hook:

    1. Specify the name of the hook.

    2. Select an Ansible playbook or a Custom container image for a hook written in another language.

    3. Click Browse to upload the playbook.

    4. Optional: If you are not using the default Ansible runtime image, specify a custom Ansible image.

    5. Specify the cluster on which you want the hook to run, the service account name, and the namespace.

    6. Select the migration step at which you want the hook to run:

      • PreBackup: Before backup tasks are started on the source cluster

      • PostBackup: After backup tasks are complete on the source cluster

      • PreRestore: Before restore tasks are started on the target cluster

      • PostRestore: After restore tasks are complete on the target cluster

  16. Click Add.

    You can add up to four hooks to a migration plan, assigning each hook to a different migration step.

  17. Click Finish.

  18. Click Close.

    The migration plan appears in the Migration plans list.
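
The web console creates a MigPlan custom resource on the cluster where the MTC controller runs. If you want to inspect the plan from the command line, the following commands are a sketch, assuming MTC is installed in the openshift-migration namespace; the status conditions of the plan indicate whether it is ready to run:

$ oc get migplan -n openshift-migration
$ oc describe migplan <migration_plan_name> -n openshift-migration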

Running a migration plan in the MTC web console

You can stage or migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console.

Prerequisites

The MTC web console must contain the following:

  • Source cluster

  • Target cluster

  • Replication repository

  • Valid migration plan

Procedure
  1. Log in to the source cluster.

  2. Delete old images:

    $ oc adm prune images
  3. Log in to the MTC web console and click Migration plans.

  4. Click the Options menu beside a migration plan and select Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  5. When you are ready to migrate the application workload, click the Options menu beside a migration plan and select Migrate.

  6. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.

  7. Click Migrate.

  8. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:

    1. Click Home → Projects.

    2. Click the migrated project to view its status.

    3. In the Routes section, click Location to verify that the application is functioning, if applicable.

    4. Click Workloads → Pods to verify that the pods are running in the migrated namespace.

    5. Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.
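
Each stage or migration run creates a MigMigration custom resource that you can also watch from the command line. The following commands are a sketch, assuming MTC is installed in the openshift-migration namespace; the status of the MigMigration resource shows the current phase of the migration:

$ oc get migmigration -n openshift-migration
$ oc describe migmigration <migmigration_name> -n openshift-migration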