You can migrate application workloads from OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11 to OpenShift Container Platform 4.2 with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.

The CAM tool’s web console and API, based on Kubernetes custom resources, enable you to migrate stateful application workloads at the granularity of a namespace.

You can migrate data to a different storage class, for example, from Red Hat Gluster Storage or NFS storage on an OpenShift Container Platform 3.x cluster to Red Hat Ceph Storage on an OpenShift Container Platform 4.2 cluster.

Optionally, you can use the Control Plane Migration Assistant (CPMA) to assist you in migrating control plane settings.

Before you begin your migration, be sure to review the information on planning your migration.

Prerequisites
  • The source cluster must be OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.

  • The target cluster must be OpenShift Container Platform 4.2.

  • You must have cluster-admin privileges on all clusters.

  • You must have podman installed.

  • You must have a replication repository that supports the S3 API and is accessible to the source and target clusters.

  • If your application uses images from the openshift namespace, the required versions of the images must be present on the target cluster. If not, you must update the imagestreamtag references to use an available version that is compatible with your application (see the example after this list).

    If the imagestreamtags cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them.

    The following imagestreamtags have been removed from OpenShift Container Platform 4.2:

    • dotnet:1.0, dotnet:1.1, dotnet:2.0

    • dotnet-runtime:2.0

    • mariadb:10.1

    • mongodb:2.4, mongodb:2.6

    • mysql:5.5, mysql:5.6

    • nginx:1.8

    • nodejs:0.10, nodejs:4, nodejs:6

    • perl:5.16, perl:5.20

    • php:5.5, php:5.6

    • postgresql:9.2, postgresql:9.4, postgresql:9.5

    • python:3.3, python:3.4

    • ruby:2.0, ruby:2.2
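
    For example, you can list the imagestreamtags that are available in the openshift namespace on the target cluster and retag your application's image stream to a compatible version. The image names, tags, and namespace in this sketch are placeholders:

    $ oc get imagestreamtags -n openshift | grep <image_name>
    $ oc tag openshift/<image_name>:<available_tag> <application_namespace>/<image_name>:<required_tag>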

Understanding the Cluster Application Migration tool

The Cluster Application Migration (CAM) tool enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an OpenShift Container Platform source cluster to an OpenShift Container Platform target cluster, using the CAM web console or the Kubernetes API.

Migrating an application with the CAM web console involves the following steps:

  1. Installing the CAM Operator on the source and target clusters

    The CAM Operator installs the CAM web console on the target cluster by default.

  2. Configuring cross-origin resource sharing on the source cluster

  3. Adding the source cluster to the CAM web console

  4. Adding an S3 replication repository to the CAM web console

  5. Creating a migration plan, with one of the following options:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster. You can change the storage class during migration.

      migration PV copy
    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

      Although the replication repository does not appear in this diagram, it is required for the actual migration.

      migration PV move
  6. Running the migration plan, with one of the following options:

    • Stage (optional) copies data to the target cluster without stopping the application.

      Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.

    • Migrate stops the application workload on the source cluster and recreates its resources on the target cluster. Optionally, you can choose to keep the application running when you migrate the workload.

OCP 3 to 4 App migration

Deploying the Cluster Application Migration tool

Deploying the CAM tool involves the following steps:

  1. Installing the CAM Operator on all clusters

    The CAM Operator installs the following:

    • Source cluster(s): Restic and Velero

    • Target cluster: Migration controller, CAM web console, Restic, and Velero

  2. Configuring cross-origin resource sharing on the source cluster(s) to enable communication with the CAM tool

Installing the CAM Operator on an OpenShift Container Platform 3 source cluster

You can install the CAM Operator manually on an OpenShift Container Platform 3.7 (or later) source cluster, which does not support OLM.

The CAM Operator installs Restic and Velero.

Prerequisites

If your system is configured for rootless Podman containers, sudo is not required for this procedure.

Procedure
  1. Log in to the Red Hat Container Registry with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-0/openshift-migration-rhel7-operator:v1.0):/operator.yml ./
  3. Download the controller-3.yml file:

    $ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-0/openshift-migration-rhel7-operator:v1.0):/controller-3.yml ./
  4. Create the CAM Operator:

    $ oc create -f operator.yml
    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
    Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists (1)
    1 You can ignore the Error from server (AlreadyExists) messages. They are caused by the CAM Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases.
  5. Create the Migration controller CR object:

    $ oc create -f controller-3.yml
  6. Verify that the Velero and Restic Pods are running:

    $ oc get pods -n openshift-migration
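
    A healthy installation shows the Velero and Restic Pods in the Running state. The output is similar to the following; the Pod names are illustrative:

    NAME                      READY   STATUS    RESTARTS   AGE
    restic-nr2v5              1/1     Running   0          2m
    velero-76d9f4c8b5-kmwlz   1/1     Running   0          2m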

Installing the CAM Operator on an OpenShift Container Platform 4 target cluster

You can install the CAM Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM). The CAM Operator installs the Migration controller, the CAM web console, Restic, and Velero on the target cluster.

Procedure
  1. In the OpenShift Container Platform web console, click Administration → Namespaces.

  2. On the Namespaces page:

    1. Click Create Namespace.

    2. Enter openshift-migration in the Name field and click Create.

  3. Click Operators → OperatorHub.

  4. On the OperatorHub page:

    1. Scroll or type a keyword into the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.

    2. Select the Cluster Application Migration Operator and click Install.

  5. On the Create Operator Subscription page:

    1. Select the openshift-migration namespace if it is not already selected.

    2. Select an Automatic or Manual approval strategy.

    3. Click Subscribe.

  6. Click Operators → Installed Operators.

    The Cluster Application Migration Operator is listed in the openshift-migration project with the status InstallSucceeded.

  7. On the Installed Operators page:

    1. Under Provided APIs, click View 12 more….

    2. Click Create New → MigrationController.

    3. Click Create.
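
Optionally, you can verify the installation from the CLI on the target cluster; a quick check, assuming the default openshift-migration namespace:

$ oc get csv -n openshift-migration
$ oc get pods -n openshift-migration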

Configuring cross-origin resource sharing on an OpenShift Container Platform 3 source cluster

You must configure cross-origin resource sharing on an OpenShift Container Platform 3 source cluster to enable communication between the source cluster’s API server and the CAM tool.

Procedure
  1. Log in to the cluster on which the CAM tool is installed.

  2. Obtain the value for your CORS configuration:

    $ oc get -n openshift-migration route/migration -o go-template='(?i)//{{ .spec.host }}(:|\z){{ println }}' | sed 's,\.,\\.,g'
  3. Log in to the OpenShift Container Platform 3 source cluster.

  4. Add the CORS configuration value to the corsAllowedOrigins stanza in the /etc/origin/master/master-config.yaml configuration file:

    corsAllowedOrigins:
    - (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) (1)
    - (?i)//openshift\.default\.svc(:|\z)
    - (?i)//kubernetes\.default(:|\z)
    1 Specify the CORS configuration value that you obtained in step 2.
  5. Restart the API server and controller manager to apply the changes:

    • In OpenShift Container Platform 3.7 and 3.9, these components run as stand-alone host processes managed by systemd and are restarted by running the following command:

      $ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
    • In OpenShift Container Platform 3.10 and 3.11, these components run in static Pods managed by a kubelet and are restarted by running the following commands:

      $ /usr/local/bin/master-restart api
      $ /usr/local/bin/master-restart controllers
  6. Verify the configuration:

    $ curl -v -k -X OPTIONS \
    "<cluster_url>/apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migclusters" \ (1)
    -H "Access-Control-Request-Method: GET" \
    -H "Access-Control-Request-Headers: authorization" \
    -H "Origin: https://<CAM_web_console_url>" (2)
    1 Specify the URL of your CORS configured cluster.
    2 Specify the URL of your CAM web console. The URL is based on the CORS configuration value, for example, https://migration-openshift-migration.apps.cluster.

    The output appears similar to the following:

    < HTTP/2 204
    < access-control-allow-credentials: true
    < access-control-allow-headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-Requested-With, If-Modified-Since
    < access-control-allow-methods: POST, GET, OPTIONS, PUT, DELETE, PATCH
    < access-control-allow-origin: https://migration-openshift-migration.apps.cluster
    < access-control-expose-headers: Date
    < cache-control: no-store

Migrating applications with the CAM web console

Launching the CAM web console

You can launch the CAM web console that is installed on the target cluster.

Procedure
  1. Log in to the target cluster.

  2. Obtain the CAM web console URL:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}{{ println }}'
  3. Launch a browser and navigate to the CAM web console.

    If you try to access the CAM web console immediately after installing the CAM Operator, the console may not load because the Operator is still configuring the cluster and enabling cross-origin resource sharing. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page will guide you through the process of accepting the remaining certificates.

  5. Log in with your OpenShift Container Platform username and password.

Adding a cluster to the CAM web console

You can add a source cluster to the CAM web console.

Prerequisites
  • Cross-origin resource sharing must be configured on the source cluster.

Procedure
  1. Log in to the source cluster.

  2. Obtain the service account token:

    $ oc sa get-token mig -n openshift-migration
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
  3. Log in to the CAM web console.

  4. In the Clusters section, click Add cluster.

  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.

    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.

    • Service account token: String that you obtained from the source cluster.

  6. Click Add cluster.

    The cluster appears in the Clusters section.
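
Optionally, you can confirm from the CLI of the cluster where the Migration controller runs that a corresponding MigCluster resource was created; a quick check, assuming the default openshift-migration namespace:

$ oc get migcluster -n openshift-migration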

Adding a replication repository to the CAM web console

You can add a replication repository to the CAM web console.

Prerequisites
  • The replication repository must support the S3 API.

    You can use AWS S3 or deploy local S3 object storage with the upstream NooBaa project.

  • The replication repository must be accessible to the source and target clusters.

Procedure
  1. Log in to the CAM web console.

  2. In the Replication repositories section, click Add replication repository.

  3. Fill in the following fields:

    • Replication repository name

    • S3 bucket name

    • S3 bucket region: Required for AWS S3 if the bucket region is not us-east-1. Optional for a generic S3 repository.

    • S3 endpoint: Required for a generic S3 repository. This is the URL of the S3 service, not the bucket, for example, http://s3-noobaa.apps.cluster.com.

      Currently, https:// is supported only for AWS. For other providers, use http://.

    • S3 provider access key

    • S3 provider secret access key

  4. Click Add replication repository and wait for connection validation.

  5. Click Close.

    The replication repository appears in the Replication repositories section.
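
Optionally, you can confirm from the CLI that a corresponding MigStorage resource was created; a quick check, assuming the default openshift-migration namespace:

$ oc get migstorage -n openshift-migration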

Changing migration plan limits for large migrations

You can change the migration plan limits for large migrations.

Changes should first be tested in your environment to avoid a failed migration.

A single migration plan has the following default limits:

  • 10 namespaces

    If this limit is exceeded, the CAM web console displays a Namespace limit exceeded error and you cannot create a migration plan.

  • 100 Pods

    If the Pod limit is exceeded, the CAM web console displays a warning message similar to the following example: Plan has been validated with warning condition(s). See warning message. Pod limit: 100 exceeded, found: 104.

  • 100 persistent volumes

    If the persistent volume limit is exceeded, the CAM web console displays a similar warning message.

Procedure
  1. Edit the Migration controller CR:

    $ oc get migrationcontroller -n openshift-migration
    NAME                   AGE
    migration-controller   5d19h
    
    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    [...]
    migration_controller: true
    
    # This configuration is loaded into mig-controller, and should be set on the
    # cluster where `migration_controller: true`
    mig_pv_limit: 100
    mig_pod_limit: 100
    mig_namespace_limit: 10
    [...]
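
If you prefer not to open an editor, you can apply the same change with a merge patch. This is a sketch that assumes the limit parameters live under the MigrationController CR's spec field; adjust the values for your migration:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
    --type=merge -p '{"spec":{"mig_namespace_limit":20,"mig_pod_limit":200,"mig_pv_limit":200}}'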

Creating a migration plan in the CAM web console

You can create a migration plan in the CAM web console.

Prerequisites

The CAM web console must contain the following:

  • Source cluster

  • Target cluster, which is added automatically during the CAM tool installation

  • Replication repository

Procedure
  1. Log in to the CAM web console.

  2. In the Plans section, click Add plan.

  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.

  5. Select a Target cluster.

  6. Select a Replication repository.

  7. Select the projects to be migrated and click Next.

  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

  9. Click Next.

  10. Select a Storage class for the PVs.

    You can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  11. Click Next.

  12. Click Close.

    The migration plan appears in the Plans section.
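
You can also list the underlying MigPlan resources from the CLI; a quick check, assuming the default openshift-migration namespace:

$ oc get migplan -n openshift-migration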

Running a migration plan in the CAM web console

You can stage or migrate applications and data with the migration plan you created in the CAM web console.

Prerequisites

The CAM web console must contain the following:

  • Source cluster

  • Target cluster, which is added automatically during the CAM tool installation

  • Replication repository

  • Valid migration plan

Procedure
  1. Log in to the CAM web console on the OpenShift Container Platform 4 cluster.

  2. Select a migration plan.

  3. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  4. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  5. Optionally, in the Migrate window, you can select Do not stop applications on the source cluster during migration.

  6. Click Migrate.

  7. When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform 4.2 web console:

    1. Click Home → Projects.

    2. Click the migrated project to view its status.

    3. In the Routes section, click Location to verify that the application is functioning.

    4. Click Storage → Persistent Volumes to verify that the migrated persistent volume is correctly provisioned.
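
You can also verify the migration from the CLI of the target cluster. This is a minimal sketch; replace <namespace> with the name of the migrated project:

$ oc get pods -n <namespace>
$ oc get pvc -n <namespace>
$ oc get routes -n <namespace>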

Migrating control plane settings with the Control Plane Migration Assistant

Understanding the Control Plane Migration Assistant

The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from OpenShift Container Platform 3.7 (or later) to OpenShift Container Platform 4.2. The CPMA processes the OpenShift Container Platform 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by OpenShift Container Platform 4.2 Operators.

Because OpenShift Container Platform 3 and 4 have significant configuration differences, not all parameters are processed. The CPMA can generate a report that describes whether features are supported fully, partially, or not at all.

Configuration files

CPMA uses the Kubernetes and OpenShift Container Platform APIs to access the following configuration files on an OpenShift Container Platform 3 cluster:

  • Master configuration file (default: /etc/origin/master/master-config.yaml)

  • CRI-O configuration file (default: /etc/crio/crio.conf)

  • etcd configuration file (default: /etc/etcd/etcd.conf)

  • Image registries file (default: /etc/containers/registries.conf)

  • Dependent configuration files:

    • Password files (for example, HTPasswd)

    • ConfigMaps

    • Secrets

CR Manifests

CPMA generates CR manifests for the following configurations:

  • API server CA certificate: 100_CPMA-cluster-config-APISecret.yaml

    If you are using an unsigned API server CA certificate, you must add the certificate manually to the target cluster.

  • CRI-O: 100_CPMA-crio-config.yaml

  • Cluster resource quota: 100_CPMA-cluster-quota-resource-x.yaml

  • Project resource quota: 100_CPMA-resource-quota-x.yaml

  • Portable image registry (/etc/containers/registries.conf) and portable image policy (/etc/origin/master/master-config.yaml): 100_CPMA-cluster-config-image.yaml

  • OAuth providers: 100_CPMA-cluster-config-oauth.yaml

  • Project configuration: 100_CPMA-cluster-config-project.yaml

  • Scheduler: 100_CPMA-cluster-config-scheduler.yaml

  • SDN: 100_CPMA-cluster-config-sdn.yaml

Installing the Control Plane Migration Assistant

You can download the Control Plane Migration Assistant (CPMA) binary file from the Red Hat Customer Portal and install it on Linux, MacOSX, or Windows operating systems.

Procedure
  1. In the Red Hat Customer Portal, navigate to Downloads → Red Hat OpenShift Container Platform.

  2. On the Download Red Hat OpenShift Container Platform page, select Red Hat OpenShift Container Platform from the Product Variant list.

  3. Select CPMA 1.0 for RHEL 7 from the Version list. This binary works on RHEL 7 and RHEL 8.

  4. Click Download Now to download cpma for Linux or MacOSX or cpma.exe for Windows.

  5. Save the file in a directory that is included in your $PATH for Linux or MacOSX or in your %PATH% for Windows.

  6. For Linux, make the file executable:

    $ sudo chmod +x cpma
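
    For example, you might place the binary in /usr/local/bin, which is typically included in $PATH, and confirm that it is found; the destination directory is an example:

    $ sudo mv cpma /usr/local/bin/
    $ which cpma
    /usr/local/bin/cpma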

Using the Control Plane Migration Assistant

The Control Plane Migration Assistant (CPMA) generates CR manifests, which are consumed by OpenShift Container Platform 4.2 Operators, and a report that indicates which OpenShift Container Platform 3 features are supported fully, partially, or not at all.

The CPMA can run in remote mode, retrieving the configuration files from the source cluster using SSH, or in local mode, using local copies of the source cluster’s configuration files.

Prerequisites
  • The source cluster must be OpenShift Container Platform 3.7 or later.

  • The source cluster must be updated to the latest synchronous release.

  • An environment health check must be run on the source cluster to confirm that there are no diagnostic errors or warnings.

  • The CPMA binary must be executable.

  • You must have cluster-admin privileges for the source cluster.

Procedure
  1. Log in to the OpenShift Container Platform 3 cluster:

    $ oc login https://<master1.example.com> (1)
    1 OpenShift Container Platform 3 master node. You must be logged in to the cluster to receive a token for the Kubernetes and OpenShift Container Platform APIs.
  2. Run the CPMA. Each prompt requires you to provide input, as in the following example:

    $ cpma --manifests=false (1)
    ? Do you wish to save configuration for future use? true
    ? What will be the source for OCP3 config files? Remote host (2)
    ? Path to crio config file /etc/crio/crio.conf
    ? Path to etcd config file /etc/etcd/etcd.conf
    ? Path to master config file /etc/origin/master/master-config.yaml
    ? Path to node config file /etc/origin/node/node-config.yaml
    ? Path to registries config file /etc/containers/registries.conf
    ? Do wish to find source cluster using KUBECONFIG or prompt it? KUBECONFIG
    ? Select cluster obtained from KUBECONFIG contexts master1-example-com:443
    ? Select master node master1.example.com
    ? SSH login root (3)
    ? SSH Port 22
    ? Path to private SSH key /home/user/.ssh/openshift_key
    ? Path to application data, skip to use current directory .
    INFO[29 Aug 19 00:07 UTC] Starting manifest and report generation
    INFO[29 Aug 19 00:07 UTC] Transform:Starting for - API
    INFO[29 Aug 19 00:07 UTC] APITransform::Extract
    INFO[29 Aug 19 00:07 UTC] APITransform::Transform:Reports
    INFO[29 Aug 19 00:07 UTC] Transform:Starting for - Cluster
    INFO[29 Aug 19 00:08 UTC] ClusterTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportQuotas
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportPVs
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNamespaces
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNodes
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportRBAC
    INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportStorageClasses
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Crio
    INFO[29 Aug 19 00:08 UTC] CrioTransform::Extract
    WARN[29 Aug 19 00:08 UTC] Skipping Crio: No configuration file available
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Docker
    INFO[29 Aug 19 00:08 UTC] DockerTransform::Extract
    INFO[29 Aug 19 00:08 UTC] DockerTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - ETCD
    INFO[29 Aug 19 00:08 UTC] ETCDTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ETCDTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - OAuth
    INFO[29 Aug 19 00:08 UTC] OAuthTransform::Extract
    INFO[29 Aug 19 00:08 UTC] OAuthTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - SDN
    INFO[29 Aug 19 00:08 UTC] SDNTransform::Extract
    INFO[29 Aug 19 00:08 UTC] SDNTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Image
    INFO[29 Aug 19 00:08 UTC] ImageTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ImageTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Project
    INFO[29 Aug 19 00:08 UTC] ProjectTransform::Extract
    INFO[29 Aug 19 00:08 UTC] ProjectTransform::Transform:Reports
    INFO[29 Aug 19 00:08 UTC] Flushing reports to disk
    INFO[29 Aug 19 00:08 UTC] Report:Added: report.json
    INFO[29 Aug 19 00:08 UTC] Report:Added: report.html
    INFO[29 Aug 19 00:08 UTC] Successfully finished transformations
    1 --manifests=false: Without generating CR manifests
    2 Remote host: Remote mode
    3 SSH login: The SSH user must have sudo permissions on the OpenShift Container Platform 3 cluster in order to access the configuration files.

    The CPMA creates the following files and directory in the current directory if you did not specify an output directory:

    • cpma.yaml file: Configuration options that you provided when you ran the CPMA

    • master1.example.com/: Configuration files from the master node

    • report.json: JSON-encoded report

    • report.html: HTML-encoded report

  3. Open the report.html file in a browser to view the CPMA report.

  4. If you generate CR manifests, apply the CR manifests to the OpenShift Container Platform 4 cluster, as in the following example:

    $ oc apply -f 100_CPMA-cluster-config-secret-htpasswd-secret.yaml

Troubleshooting a failed migration

You can view the migration custom resources (CRs) and download logs to troubleshoot a failed migration.

Understanding the migration custom resources

The CAM tool creates the following custom resources (CRs) for migration:

migration architecture diagram

  • MigCluster (configuration, CAM cluster): Cluster definition

  • MigStorage (configuration, CAM cluster): Storage definition

  • MigPlan (configuration, CAM cluster): Migration plan

    The MigPlan CR describes the source and target clusters, repository, and namespace(s) being migrated. It is associated with 0, 1, or many MigMigration CRs.

    Deleting a MigPlan CR deletes the associated MigMigration CRs.

  • BackupStorageLocation (configuration, CAM cluster): Location of Velero backup objects

  • VolumeSnapshotLocation (configuration, CAM cluster): Location of Velero volume snapshots

  • MigMigration (action, CAM cluster): Migration, created during migration

    A MigMigration CR is created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.

  • Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

    • Backup CR #1 for Kubernetes objects

    • Backup CR #2 for PV data

  • Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

    • Restore CR #1 (using Backup CR #2) for PV data

    • Restore CR #2 (using Backup CR #1) for Kubernetes objects

Viewing migration custom resources

To view a migration custom resource:

$ oc get <custom_resource> -n openshift-migration
NAME                                   AGE
88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s

$ oc describe <custom_resource> 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration
MigMigration example
$ oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration
Name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
Namespace:    openshift-migration
Labels:       <none>
Annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
API Version:  migration.openshift.io/v1alpha1
Kind:         MigMigration
Metadata:
  Creation Timestamp:  2019-08-29T01:01:29Z
  Generation:          20
  Resource Version:    88179
  Self Link:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  UID:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
Spec:
  Mig Plan Ref:
    Name:        socks-shop-mig-plan
    Namespace:   openshift-migration
  Quiesce Pods:  true
  Stage:         false
Status:
  Conditions:
    Category:              Advisory
    Durable:               true
    Last Transition Time:  2019-08-29T01:03:40Z
    Message:               The migration has completed successfully.
    Reason:                Completed
    Status:                True
    Type:                  Succeeded
  Phase:                   Completed
  Start Timestamp:         2019-08-29T01:01:29Z
Events:                    <none>
Velero backup CR #2 example (PV data)
apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0
Velero restore CR #2 example (Kubernetes resources)
apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15

Downloading migration logs

You can download the Velero, Restic, and Migration controller logs in the CAM web console to troubleshoot a failed migration.

Procedure
  1. Click the Options menu of a migration plan and select Logs.

  2. To download a specific log, select the following:

    • Cluster: Source or target cluster

    • Log source: Velero, Restic, or Migration controller

    • Pod source: For example, velero-7659c69dd7-ctb5x

  3. Click Download all logs to download the Migration controller log and the Velero and Restic logs of the source and target clusters.

Optionally, you can access the logs by using the CLI, as in the following example:

$ oc get pods -n openshift-migration | grep controller
controller-manager-78c469849c-v6wcf           1/1     Running     0          4h49m

$ oc logs controller-manager-78c469849c-v6wcf -f -n openshift-migration
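
Similarly, you can retrieve the Velero and Restic logs by locating the corresponding Pods; the Pod name in the second command is a placeholder:

$ oc get pods -n openshift-migration | grep -E 'velero|restic'

$ oc logs <velero_or_restic_pod> -f -n openshift-migration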

Restic timeout error

If a migration fails because Restic times out, the following error appears in the Velero log:

level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1

The default value of restic_timeout is one hour. You can increase this for large migrations, keeping in mind that a higher value may delay the return of error messages.

Procedure
  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.

  2. Click Cluster Application Migration Operator.

  3. In the MigrationController tab, click migration-controller.

  4. In the YAML tab, update the following parameter value:

    spec:
      restic_timeout: 1h (1)
    1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s.
  5. Click Save.
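
Alternatively, you can make the same change from the CLI with a merge patch; a sketch, using the migration-controller CR name shown earlier:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
    --type=merge -p '{"spec":{"restic_timeout":"2h"}}'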

Known issues

This release has the following known issues:

  • During migration, the CAM tool preserves the following namespace annotations:

    • openshift.io/sa.scc.mcs

    • openshift.io/sa.scc.supplemental-groups

    • openshift.io/sa.scc.uid-range

      These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (BZ#1748440)
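
      To inspect these annotations on a migrated namespace, you can run a command similar to the following; replace <namespace> with your project name:

      $ oc get namespace <namespace> -o yaml | grep 'openshift.io/sa.scc'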

  • When adding an S3 endpoint to the CAM web console, https:// is supported only for AWS. For other S3 providers, use http://.

  • If an AWS bucket is added to the CAM web console and then deleted, its status remains True because the MigStorage CR is not updated. (BZ#1738564)

  • Migration fails if the Migration controller is running on a cluster other than the target cluster. The EnsureCloudSecretPropagated phase is skipped with a logged warning. (BZ#1757571)

  • Cluster-scoped resources, including Cluster Role Bindings and Security Context Constraints, are not yet handled by the CAM tool. If your applications require cluster-scoped resources, you must create them manually on the target cluster. (BZ#1759804)