
Terminology

Table 1. MTC terminology
Term Definition

Source cluster

Cluster from which the applications are migrated.

Destination cluster[1]

Cluster to which the applications are migrated.

Replication repository

Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration.

The replication repository must be accessible to all clusters.

Host cluster

Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required.

The host cluster does not require an exposed registry route for direct image migration.

Remote cluster

A remote cluster is usually the source cluster but this is not required.

A remote cluster requires a Secret custom resource that contains the migration-controller service account token.

A remote cluster requires an exposed secure registry route for direct image migration.

Indirect migration

Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster.

Direct volume migration

Persistent volumes are copied directly from the source cluster to the destination cluster.

Direct image migration

Images are copied directly from the source cluster to the destination cluster.

Stage migration

Data is copied to the destination cluster without stopping the application.

Running a stage migration multiple times reduces the duration of the cutover migration.

Cutover migration

The application is stopped on the source cluster and its resources are migrated to the destination cluster.

State migration

Application state is migrated by copying specific persistent volume claims to the destination cluster.

Rollback migration

Rollback migration rolls back a completed migration.

[1] Called the target cluster in the MTC web console.

Migrating an application from on-premises to a cloud-based cluster

You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster.

A service created on the destination cluster exposes the source cluster’s API to MTC, which is running on the destination cluster.

Prerequisites
  • The system that creates the VPN tunnel must have access and be logged in to both clusters.

  • It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible.

  • Have names prepared for the namespaces, on both the source cluster and the destination cluster, in which the VPN tunnel will run. Do not create these namespaces in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names.

  • When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace.

  • OpenVPN server is installed on the destination cluster.

  • OpenVPN client is installed on the source cluster.

  • When configuring the source cluster in MTC, the API URL takes the form of https://proxied-cluster.<namespace>.svc.cluster.local:8443.

    • If you use the API, see Create a MigCluster CR manifest for each remote cluster.

    • If you use the MTC web console, see Migrating your applications using the MTC web console.

  • The MTC web console and Migration Controller must be installed on the target cluster.

Procedure
  1. Install the crane utility:

    $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7):/crane ./
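
    The image is pulled from registry.redhat.io, which requires Red Hat registry credentials. If the system on which you run this command is not already authenticated to that registry, log in first:

    $ podman login registry.redhat.io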
  2. Log in remotely to a node on the source cluster and a node on the destination cluster.

  3. Obtain the cluster context for both clusters after logging in:

    $ oc config view
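
    To see only the context names that you pass to the crane tunnel-api command in the next step, you can also list the contexts:

    $ oc config get-contexts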
  4. Establish a tunnel by entering the following command on the system that creates the VPN tunnel:

    $ crane tunnel-api [--namespace <namespace>] \
          --destination-context <destination-cluster> \
          --source-context <source-cluster>

    If you do not specify a namespace, the command uses the default value openvpn.

    For example:

    $ crane tunnel-api --namespace my-tunnel \
          --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
          --source-context default/192-168-122-171-nip-io:8443/admin

    See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help.

    The command generates TLS/SSL certificates. This process might take several minutes. A message appears when the process completes.

    The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster.

    After a few minutes, the load balancer address resolves on the source node.

    You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges:

    # oc get po -n <namespace>
    Example output
    NAME            READY     STATUS      RESTARTS    AGE
    <pod_name>    2/2       Running     0           44s
    # oc logs -f -n <namespace> <pod_name> -c openvpn

    When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log.

  5. On the OpenVPN server, which runs on a control node of the destination cluster, verify that the openvpn service and the proxied-cluster service are running:

    $ oc get service -n <namespace>
  6. On the source node, get the service account (SA) token for the migration controller:

    # oc sa get-token -n openshift-migration migration-controller
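
    The oc sa get-token command is deprecated in newer oc clients and might not be available. If it is missing, you can usually request a token for the service account directly; tokens created this way are time-bound, so pass a duration that covers the migration. The exact command depends on your client and cluster version:

    # oc create token migration-controller -n openshift-migration --duration=24h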
  7. Open the MTC web console and add the source cluster, using the following values:

    • Cluster name: The source cluster name.

    • URL: proxied-cluster.<namespace>.svc.cluster.local:8443. If you did not define a value for <namespace>, use openvpn.

    • Service account token: The token of the migration controller service account.

    • Exposed route host to image registry: proxied-cluster.<namespace>.svc.cluster.local:5000. If you did not define a value for <namespace>, use openvpn.

After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces.
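
If you add the source cluster by using the MTC API instead of the web console, the same values go into a MigCluster custom resource. The following manifest is a sketch based on the MigCluster examples in the MTC documentation. It assumes the default openvpn namespace and a Secret named <source_cluster_secret> that contains the service account token; adjust the names for your environment:

apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <source_cluster>
  namespace: openshift-migration
spec:
  isHostCluster: false
  url: https://proxied-cluster.openvpn.svc.cluster.local:8443
  exposedRegistryPath: proxied-cluster.openvpn.svc.cluster.local:5000
  insecure: false
  serviceAccountSecretRef:
    name: <source_cluster_secret>
    namespace: openshift-config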


Migrating applications by using the command line

You can migrate applications by using the MTC API from the command line interface (CLI) to automate the migration.

Migration prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.
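
    A quick way to verify cluster-admin level access for your current login is to query the API server; the expected answer is yes. This check is only a convenience, not an additional requirement:

    $ oc auth can-i '*' '*' --all-namespaces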

Direct image migration
  • You must ensure that the secure internal registry of the source cluster is exposed.

  • You must create a route to the exposed registry.

Direct volume migration
  • If your clusters use proxies, you must configure a Stunnel TCP proxy.

Internal images
  • If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster.

    You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.11 cluster.
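
    For example, the oc tag command copies an image into an image stream tag in the openshift namespace. The image reference and tags below are placeholders; use the version that your application requires:

    $ oc tag <registry>/<image>:<tag> <image_stream>:<tag> -n openshift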

Clusters
  • The source cluster must be upgraded to the latest MTC z-stream release.

  • The MTC version must be the same on all clusters.

Network
  • The clusters must have unrestricted network access to each other and to the replication repository.

  • If you copy the persistent volumes with move, the clusters must have unrestricted network access to the remote volumes.

  • You must enable the following ports on an OpenShift Container Platform 3 cluster:

    • 8443 (API server)

    • 443 (routes)

    • 53 (DNS)

  • You must enable the following ports on an OpenShift Container Platform 4 cluster:

    • 6443 (API server)

    • 443 (routes)

    • 53 (DNS)

  • You must enable port 443 on the replication repository if you are using TLS.

Persistent volumes (PVs)
  • The PVs must be valid.

  • The PVs must be bound to persistent volume claims.

  • If you use snapshots to copy the PVs, the following additional prerequisites apply:

    • The cloud provider must support snapshots.

    • The PVs must have the same cloud provider.

    • The PVs must be located in the same geographic region.

    • The PVs must have the same storage class.

Creating a registry route for direct image migration

For direct image migration, you must create a route to the exposed internal registry on all remote clusters.

Prerequisites
  • The internal registry must be exposed to external traffic on all remote clusters.

    The OpenShift Container Platform 4 registry is exposed by default.

    The OpenShift Container Platform 3 registry must be exposed manually.

Procedure
  • To create a route to an OpenShift Container Platform 3 registry, run the following command:

    $ oc create route passthrough --service=docker-registry -n default
  • To create a route to an OpenShift Container Platform 4 registry, run the following command:

    $ oc create route passthrough --service=image-registry -n openshift-image-registry
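
The exposed registry route host is one of the values that MTC asks for when you add a cluster. Assuming the routes keep the default names of their services, you can retrieve the host names as follows:

$ oc get route docker-registry -n default -o jsonpath='{.spec.host}'
$ oc get route image-registry -n openshift-image-registry -o jsonpath='{.spec.host}'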

Proxy configuration

For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
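
To review the cluster-wide proxy settings that MTC inherits on OpenShift Container Platform 4.2 to 4.11, you can inspect the cluster Proxy object:

$ oc get proxy cluster -o yaml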

Direct volume migration

Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.

If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards SSL connections transparently, without decrypting and re-encrypting them with its own SSL certificates. A Stunnel proxy is an example of such a proxy.

TCP proxy setup for DVM

You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port

Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.
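
As an alternative to editing the manifest directly, you can set the parameter with a merge patch. The proxy URL below is a placeholder:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
      --type merge -p '{"spec":{"stunnel_tcp_proxy":"http://<username>:<password>@<ip>:<port>"}}'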

Why use a TCP proxy instead of an HTTP/HTTPS proxy?

You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.

Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.

Known issue
Migration fails with error Upgrade request required

The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required. Workaround: Use a proxy that supports the SPDY protocol.

In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required. Workaround: Ensure that the proxy forwards the Upgrade header.
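
A quick way to check whether remote command execution works through your proxy or firewall is to run a command in any pod on the remote cluster. If SPDY or the Upgrade header is blocked along the path, a call such as the following fails with a similar error:

$ oc exec -n openshift-migration <pod_name> -- date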

Tuning network policies for migrations

OpenShift supports restricting traffic to or from pods by using NetworkPolicy or EgressFirewall objects, depending on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to the Rsync pods during migration.

Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.

NetworkPolicy configuration