You can migrate application workloads from OpenShift Container Platform 3.7, 3.9, 3.10, and 3.11 to OpenShift Container Platform 4.5 with the Migration Toolkit for Containers (MTC). MTC enables you to control the migration and to minimize application downtime.
The MTC web console and API, based on Kubernetes custom resources, enable you to migrate stateful application workloads at the granularity of a namespace.
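When you work directly with the API, the namespace granularity is visible in the migration plan resource. The following is an illustrative sketch of a MigPlan custom resource (resource names and references are hypothetical; the cluster and storage objects are assumed to exist in the openshift-migration namespace):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migplan                  # hypothetical plan name
  namespace: openshift-migration
spec:
  srcMigClusterRef:                 # reference to the source cluster resource
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:                # reference to the target (host) cluster resource
    name: host
    namespace: openshift-migration
  migStorageRef:                    # reference to the replication repository resource
    name: my-replication-repo
    namespace: openshift-migration
  namespaces:                       # workloads are migrated per namespace
    - my-app-namespace
```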
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.
The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4, but you cannot perform service catalog actions, such as provisioning or deprovisioning, on those workloads after migration.
The MTC web console displays a message if the service catalog resources cannot be migrated.
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane. The CPMA processes the OpenShift Container Platform 3 configuration files and generates custom resource manifest files, which are consumed by OpenShift Container Platform 4.5 Operators.
Before you begin your migration, be sure to review the information on planning your migration.
The Migration Toolkit for Containers (MTC) has the following prerequisites:
The source cluster must be OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.
You must upgrade the source cluster to the latest z-stream release.
You must have cluster-admin privileges on all clusters.
The source and target clusters must have unrestricted network access to the replication repository.
The cluster on which the MigrationController CR is installed must have unrestricted access to the other clusters.
If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster.
If the required images are not present, you must update the image stream tag references to use an available version that is compatible with your application. If the image stream tags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to use them.
The following image stream tags have been removed from OpenShift Container Platform 4.2:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.5 target cluster, using the MTC web console or the Kubernetes API.
Migrating an application with the MTC web console involves the following steps:
Install the Migration Toolkit for Containers Operator on all clusters.
You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.
Configure the replication repository, an intermediate object storage that MTC uses to migrate data.
The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters.
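For an S3-compatible replication repository configured through the API, the credentials are typically stored in a Secret that the repository resource references. The following sketch shows one possible shape, assuming an AWS S3 bucket; the secret keys, bucket name, and namespaces are illustrative and may differ in your MTC version:

```yaml
# Illustrative credentials Secret for an S3 replication repository
apiVersion: v1
kind: Secret
metadata:
  name: migstorage-creds
  namespace: openshift-config
type: Opaque
stringData:
  aws-access-key-id: <AWS_ACCESS_KEY_ID>          # placeholder, do not commit real keys
  aws-secret-access-key: <AWS_SECRET_ACCESS_KEY>  # placeholder
---
# Illustrative MigStorage resource pointing at the bucket and the Secret above
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: my-replication-repo
  namespace: openshift-migration
spec:
  backupStorageProvider: aws
  backupStorageConfig:
    awsBucketName: my-migration-bucket   # hypothetical bucket
    awsRegion: us-east-1
    credsSecretRef:
      name: migstorage-creds
      namespace: openshift-config
```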
Add the source cluster to the MTC web console.
Add the replication repository to the MTC web console.
Create a migration plan, with one of the following data migration options:
Copy: MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.
Move: MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
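In a migration plan created through the API, the copy-or-move choice is made per persistent volume. A sketch of the relevant MigPlan section might look like the following (field names follow the MTC MigPlan API; the PV names and storage class are hypothetical):

```yaml
spec:
  persistentVolumes:
    - name: pv-0001                # hypothetical PV on the source cluster
      selection:
        action: copy               # copy data via the replication repository
        copyMethod: filesystem     # or "snapshot" if your storage provider supports it
        storageClass: gp2          # hypothetical target storage class
    - name: pv-0002
      selection:
        action: move               # remount the remote volume (for example, NFS) on the target
```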
Although the replication repository does not appear in this diagram, it is required for migration.
Run the migration plan, with one of the following options:
Stage (optional) copies data to the target cluster without stopping the application.
Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the duration of the migration and application downtime.
Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
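When driven through the API, both options correspond to a MigMigration custom resource that references the plan. The following sketch shows a stage run; the names are hypothetical, and a final migration would typically set stage to false and quiescePods to true:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: my-stage-migration          # hypothetical name
  namespace: openshift-migration
spec:
  migPlanRef:
    name: my-migplan                # the migration plan to execute
    namespace: openshift-migration
  stage: true                       # stage run: copy data without stopping the application
  quiescePods: false                # a final migration would quiesce (stop) the application pods
```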
The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
MTC copies data files from the source cluster to the replication repository, and from there to the target cluster.
MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster.
AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.
You can use migration hooks to run Ansible playbooks at certain points during a migration with the Migration Toolkit for Containers (MTC). The hooks are added when you create a migration plan.
If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.
Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.
A single migration hook runs on a source or target cluster at one of the following migration steps:
PreBackup: Before backup tasks are started on the source cluster
PostBackup: After backup tasks are complete on the source cluster
PreRestore: Before restore tasks are started on the target cluster
PostRestore: After restore tasks are complete on the target cluster
You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.
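A hook defined through the API is a MigHook custom resource that the migration plan references at a specific step. The following is an illustrative sketch (the hook name is hypothetical, and the playbook field carries the base64-encoded Ansible playbook):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  name: prebackup-hook              # hypothetical hook name
  namespace: openshift-migration
spec:
  custom: false                     # false: run an Ansible playbook with the default hook-runner image
  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.2
  playbook: <base64-encoded Ansible playbook>   # placeholder, stays elided here
  targetCluster: source             # run on the source cluster, for example at PreBackup
```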
The default hook-runner image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.2. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.
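A custom hook image is typically built on top of the default hook-runner image. A minimal Containerfile sketch might look like the following (the added package is a hypothetical example; adjust the install command to whatever tooling your image provides):

```dockerfile
# Illustrative custom hook image based on the default hook-runner image
FROM registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:v1.3.2

# Add extra Python modules or Ansible content as needed (package name hypothetical)
RUN pip3 install boto3
```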
The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job on a cluster with a specified service account and namespace. The job continues to run until it reaches the default backoff limit for retries (6) or completes successfully, even if the initial pod is evicted or killed.
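The retry behavior follows standard Kubernetes Job semantics. A generic Job sketch illustrating the default backoff limit (the container image and command are placeholders, not the actual hook job spec):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-hook-job            # hypothetical name
spec:
  backoffLimit: 6                   # default number of retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never          # failed pods are replaced up to the backoff limit
      containers:
        - name: runner
          image: example.com/hook-image:latest   # placeholder image
          command: ["ansible-runner", "run", "/workdir"]  # placeholder command
```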
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from OpenShift Container Platform 3.7 (or later) to 4.5 with the Migration Toolkit for Containers (MTC). CPMA processes the OpenShift Container Platform 3 configuration files and generates custom resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.5 Operators.
Because OpenShift Container Platform 3 and 4 have significant configuration differences, not all parameters are processed. CPMA can generate a report that describes whether features are supported fully, partially, or not at all.
CPMA uses the Kubernetes and OpenShift Container Platform APIs to access the following configuration files on an OpenShift Container Platform 3 cluster:
Master configuration file
CRI-O configuration file
etcd configuration file
Image registries file
Dependent configuration files:
Password files (for example, HTPasswd)
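For example, an HTPasswd identity provider on OpenShift Container Platform 3 maps to an OAuth custom resource manifest on OpenShift Container Platform 4, similar to the following sketch (the provider and secret names are hypothetical):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: htpasswd_provider       # hypothetical provider name
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret       # Secret containing the htpasswd file data
```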
CPMA generates manifests for the following configurations:
API server CA certificate:
If you are using an unsigned API server CA certificate, you must add the certificate manually to the target cluster.
Cluster resource quota
Project resource quota
Portable image registry (/etc/registries/registries.conf) and portable image policy