The Migration Toolkit for Containers (MTC) web console and API, based on Kubernetes custom resources, enable you to migrate stateful application workloads at the granularity of a namespace.
You can migrate from OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11 to 4.5. MTC enables you to control the migration and to minimize application downtime.
Before you begin your migration, be sure to review the differences between OpenShift Container Platform 3 and 4.
The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on an OpenShift Container Platform 3 source cluster or on a remote cluster.
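If you work with the Kubernetes API rather than the web console, the console location is controlled through the MigrationController custom resource that the Migration Toolkit for Containers Operator creates. The following is a minimal sketch; the parameter names shown, such as migration_ui and migration_controller, reflect the operator's defaults in recent MTC releases and should be verified against the manifest generated by your Operator version.

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Run the migration controller and the web console on this cluster.
  # On a cluster that is only migrated from, these are typically false.
  migration_controller: true
  migration_ui: true
  # Velero handles the underlying backup and restore operations.
  migration_velero: true
```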
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4, but you cannot perform service catalog actions such as provision, deprovision, or update on these workloads after migration. The MTC console displays a message if the service catalog resources cannot be migrated.
You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.5 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API.
MTC migrates the following resources:
A namespace specified in a migration plan.
Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources.
For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs, as shown in the example after this list. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace.
Cluster-scoped resources might have to be migrated manually, depending on the resource.
Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level.
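As an illustration of the SCC-to-service-account linkage described above, the following excerpt of an SCC shows its users field, which lists the service accounts that are allowed to use it. The names my-scc, my-namespace, and my-sa are placeholders.

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: my-scc                 # cluster-scoped resource (placeholder name)
users:
  # A namespace-scoped service account linked to this SCC. When the
  # my-namespace namespace is migrated, an SCC referenced this way is
  # migrated along with it.
- system:serviceaccount:my-namespace:my-sa
```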
Migrating an application with the MTC web console involves the following steps:
Install the Migration Toolkit for Containers Operator on all clusters.
You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.
Configure the replication repository, an intermediate object storage that MTC uses to migrate data.
The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use Multi-Cloud Object Gateway (MCG). If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters.
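If you drive MTC through the Kubernetes API instead of the web console, the replication repository is represented by a MigStorage custom resource. The following is a minimal sketch for an S3-compatible repository; the bucket, region, and secret names are placeholders, and the exact fields for your storage provider should be checked against the MigStorage CRD of your MTC version.

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: migstorage
  namespace: openshift-migration
spec:
  backupStorageProvider: aws             # any S3-compatible object storage
  backupStorageConfig:
    awsBucketName: my-migration-bucket   # placeholder
    awsRegion: us-east-1                 # placeholder
    credsSecretRef:
      name: migstorage-creds             # placeholder secret with the storage credentials
      namespace: openshift-config
```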
Add the source cluster to the MTC web console.
Add the replication repository to the MTC web console.
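In API terms, each cluster that you add in the console corresponds to a MigCluster custom resource. The following is a minimal sketch for a remote source cluster; the URL and the service account token secret are placeholders, and isHostCluster is set to true only on the cluster where MTC itself runs.

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: source-cluster                        # placeholder name
  namespace: openshift-migration
spec:
  isHostCluster: false                        # true only on the cluster where MTC runs
  url: https://api.source.example.com:6443    # placeholder API URL of the source cluster
  insecure: false
  serviceAccountSecretRef:
    name: source-cluster-sa-token             # placeholder secret holding the SA token
    namespace: openshift-config
```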
Create a migration plan, with one of the following data migration options:
Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.
If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster.
Move: MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
Although the move option does not copy data through the replication repository, a replication repository is still required for migration.
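Expressed as a custom resource, a migration plan is a MigPlan that ties together the source cluster, the target cluster, the replication repository, and the namespaces to migrate. The following is a minimal sketch that assumes the MigCluster and MigStorage resources sketched in the earlier steps:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migplan                  # placeholder name
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: source-cluster            # MigCluster for the source cluster
    namespace: openshift-migration
  destMigClusterRef:
    name: host                      # MigCluster with isHostCluster: true
    namespace: openshift-migration
  migStorageRef:
    name: migstorage                # the replication repository
    namespace: openshift-migration
  namespaces:
  - my-namespace                    # namespace to migrate (placeholder)
```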
Run the migration plan, with one of the following options:
Stage (optional) copies data to the target cluster without stopping the application.
Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the duration of the migration and the application downtime.
Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
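In the API, both options map to a MigMigration custom resource that references the plan: stage controls whether only a stage copy is performed, and quiescePods controls whether the application is stopped on the source cluster. A minimal sketch, assuming the MigPlan above:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: my-migmigration             # placeholder name
  namespace: openshift-migration
spec:
  migPlanRef:
    name: my-migplan
    namespace: openshift-migration
  stage: false        # true runs a stage copy without stopping the application
  quiescePods: true   # false migrates without stopping the application
```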
The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
With the file system copy method, MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.
With the snapshot copy method, MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster.
AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.
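When you use the snapshot copy method through the API, the MigStorage resource also carries a volume snapshot section for the cloud provider. The following excerpt is a sketch that assumes AWS and a placeholder credentials secret; verify the field names against the MigStorage CRD of your MTC version.

```yaml
# Excerpt of a MigStorage spec when the snapshot copy method is used.
spec:
  volumeSnapshotProvider: aws          # aws, gcp, or azure
  volumeSnapshotConfig:
    awsRegion: us-east-1               # placeholder
    credsSecretRef:
      name: migstorage-creds           # placeholder secret with the provider credentials
      namespace: openshift-config
```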
You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster.
If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim.
DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync.
DIM and DVM have additional prerequisites.
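If the availability-zone limitation noted above applies to your clusters, one possible fallback is to force indirect migration for a particular plan so that data flows through the replication repository instead. The following excerpt assumes the indirectImageMigration and indirectVolumeMigration MigPlan fields exposed by recent MTC releases; verify them against the MigPlan CRD of your installed version.

```yaml
# Excerpt of a MigPlan spec that falls back to indirect migration through
# the replication repository (assumed field names; verify against the
# MigPlan CRD of your MTC version).
spec:
  indirectImageMigration: true
  indirectVolumeMigration: true
```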