You can migrate application workloads from OpenShift Container Platform 3.7, 3.9, 3.10, and 3.11 to OpenShift Container Platform 4.2 with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.
The CAM tool’s web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful application workloads at the granularity of a namespace.
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane. The CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.2 Operators.
Before you begin your migration, be sure to review the information on planning your migration.
You must have podman installed.
The source cluster must be OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.
You must upgrade the source cluster to the latest z-stream release.
You must have cluster-admin privileges on all clusters.
The source and target clusters must have unrestricted network access to the replication repository.
The cluster on which the Migration controller is installed must have unrestricted access to the other clusters.
If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster.
If the required images are not present, you must update the imagestreamtags references to use an available version that is compatible with your application. If the imagestreamtags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them.
The following imagestreamtags have been removed from OpenShift Container Platform 4.2:

dotnet:1.0, dotnet:1.1, dotnet:2.0
dotnet-runtime:2.0
mariadb:10.1
mongodb:2.4, mongodb:2.6
mysql:5.5, mysql:5.6
nginx:1.8
nodejs:0.10, nodejs:4, nodejs:6
perl:5.16, perl:5.20
php:5.5, php:5.6
postgresql:9.2, postgresql:9.4, postgresql:9.5
python:3.3, python:3.4
ruby:2.0, ruby:2.2
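As a sketch of the update described above, an application that referenced a removed tag can be pointed at a newer tag in the openshift namespace. The application name, namespace, and replacement tag below are illustrative assumptions; verify that the replacement version is compatible with your application:

```yaml
# Illustrative sketch: an image change trigger that referenced the removed
# postgresql:9.5 tag, updated to a newer tag assumed to be available on the
# target cluster.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-database        # hypothetical application name
  namespace: my-app        # hypothetical application namespace
spec:
  triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - postgresql
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: postgresql:9.6   # was postgresql:9.5, which is removed in 4.2
```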
The Cluster Application Migration (CAM) tool enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform 4.2 target cluster, using the CAM web console or the Kubernetes API.
Migrating an application with the CAM web console involves the following steps:
Install the Cluster Application Migration Operator on all clusters.
You can install the Cluster Application Migration Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.
Configure the replication repository, an intermediate object storage that the CAM tool uses to migrate data.
The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you use a proxy server, you must ensure that the replication repository is whitelisted.
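The replication repository is registered with the CAM tool as a Custom Resource. The following is a minimal sketch for an S3 bucket; the field names follow the CAM tool's MigStorage resource, but the bucket, region, secret, and endpoint values are illustrative assumptions to verify against your installed version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: migstorage                  # hypothetical repository name
  namespace: openshift-migration
spec:
  backupStorageProvider: aws        # also used for S3-compatible internal storage
  backupStorageConfig:
    awsBucketName: my-migration-bucket   # hypothetical bucket
    awsRegion: us-east-1                 # hypothetical region
    credsSecretRef:
      name: migstorage-creds             # Secret holding the S3 credentials
      namespace: openshift-config
```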
Add the source cluster to the CAM web console.
Add the replication repository to the CAM web console.
Create a migration plan, with one of the following data migration options:
Copy: The CAM tool copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.
Move: The CAM tool unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
Although the replication repository does not appear in this diagram, it is required for the actual migration.
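The migration plan itself is a Custom Resource that ties the source cluster, target cluster, replication repository, and namespaces together. A minimal sketch, with hypothetical names for the referenced resources, might look like this (the copy or move choice is recorded per persistent volume once the plan is validated):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migplan                  # hypothetical plan name
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: source-cluster            # hypothetical source cluster resource
    namespace: openshift-migration
  destMigClusterRef:
    name: host                      # the cluster running the Migration controller
    namespace: openshift-migration
  migStorageRef:
    name: migstorage                # the replication repository resource
    namespace: openshift-migration
  namespaces:
  - my-app                          # hypothetical namespace to migrate
```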
Run the migration plan, with one of the following options:
Stage (optional) copies data to the target cluster without stopping the application.
Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.
Migrate stops the application on the source cluster and recreates its resources on the target cluster. Optionally, you can migrate the workload without stopping the application.
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The CAM tool copies data files from the source cluster to the replication repository, and from there to the target cluster.
The CAM tool copies a snapshot of the source cluster’s data to a cloud provider’s object storage, configured as a replication repository. The data is restored on the target cluster.
AWS, Google Cloud Provider, and Microsoft Azure support the snapshot copy method.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.
If you do not want to use Ansible playbooks, you can create a custom container image and add it to a migration plan.
Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.
A single migration hook runs on a source or target cluster at one of the following migration steps:
PreBackup: Before backup tasks are started on the source cluster
PostBackup: After backup tasks are complete on the source cluster
PreRestore: Before restore tasks are started on the target cluster
PostRestore: After restore tasks are complete on the target cluster
You can assign one hook to each migration step, up to a maximum of four hooks for a single migration plan.
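A hook can be sketched as a Custom Resource that names the hook image and the step at which it runs. The field names below are assumptions based on the CAM tool's Kubernetes Custom Resources and should be checked against your installed version; the hook name and playbook are hypothetical:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  name: prebackup-quiesce           # hypothetical hook name
  namespace: openshift-migration
spec:
  # Default hook-runner image; replace with a custom image if you need
  # additional Ansible modules or tools.
  image: registry.redhat.io/rhcam-1-2/openshift-migration-hook-runner-rhel7
  targetCluster: source             # run on the source cluster (e.g., for PreBackup)
  custom: false                     # false: run the playbook below with the default image
  playbook: |                       # stored base64-encoded in practice; shown inline here
    ...
```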
The default hook-runner image is registry.redhat.io/rhcam-1-2/openshift-migration-hook-runner-rhel7. This image is based on Ansible Runner and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. You can also create your own hook image with additional Ansible modules or tools.
The Ansible playbook is mounted on a hook container as a ConfigMap. The hook container runs as a Job on a cluster with a specified service account and namespace. The Job runs, even if the initial Pod is evicted or killed, until it reaches the default backoffLimit (6) or successful completion.
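The retry behavior described above is standard Kubernetes Job semantics. A minimal Job with the same default backoffLimit, using a hypothetical name and the default hook-runner image, illustrates the shape:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hook-runner-example         # hypothetical Job name
spec:
  backoffLimit: 6                   # retried up to 6 times before the Job is marked failed
  template:
    spec:
      restartPolicy: Never          # failed Pods are replaced rather than restarted in place
      serviceAccountName: migration-hook   # hypothetical service account
      containers:
      - name: hook
        image: registry.redhat.io/rhcam-1-2/openshift-migration-hook-runner-rhel7
```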
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from OpenShift Container Platform 3.7 (or later) to 4.2. The CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.2 Operators.
Because OpenShift Container Platform 3 and 4 have significant configuration differences, not all parameters are processed. The CPMA can generate a report that describes whether features are supported fully, partially, or not at all.
CPMA uses the Kubernetes and OpenShift Container Platform APIs to access the following configuration files on an OpenShift Container Platform 3 cluster:
Master configuration file (default: /etc/origin/master/master-config.yaml)
CRI-O configuration file (default: /etc/crio/crio.conf)
etcd configuration file (default: /etc/etcd/etcd.conf)
Image registries file (default: /etc/containers/registries.conf)
Dependent configuration files:
Password files (for example, HTPasswd)
ConfigMaps
Secrets
CPMA generates CR manifests for the following configurations:
API server CA certificate: 100_CPMA-cluster-config-APISecret.yaml
If you are using an unsigned API server CA certificate, you must add the certificate manually to the target cluster.
CRI-O: 100_CPMA-crio-config.yaml
Cluster resource quota: 100_CPMA-cluster-quota-resource-x.yaml
Project resource quota: 100_CPMA-resource-quota-x.yaml
Portable image registry (/etc/registries/registries.conf) and portable image policy (/etc/origin/master/master-config.yaml): 100_CPMA-cluster-config-image.yaml
OAuth providers: 100_CPMA-cluster-config-oauth.yaml
Project configuration: 100_CPMA-cluster-config-project.yaml
Scheduler: 100_CPMA-cluster-config-scheduler.yaml
SDN: 100_CPMA-cluster-config-sdn.yaml
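As an example of what these generated manifests look like, a scheduler manifest such as 100_CPMA-cluster-config-scheduler.yaml would contain an OpenShift Container Platform 4 Scheduler config CR along these lines; the defaultNodeSelector value is an illustrative assumption derived from the source cluster's master configuration:

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # Translated from the OpenShift Container Platform 3 master configuration;
  # the value shown here is hypothetical.
  defaultNodeSelector: node-role.kubernetes.io/worker=
```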