You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster and an OpenShift Container Platform 3 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.
Optional: You can configure the Cluster Application Migration Operator to install the CAM tool on an OpenShift Container Platform 3 cluster or on a remote cluster.
In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.
You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.2 target cluster and manually on an OpenShift Container Platform 3 source cluster.
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).
The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Click Create.
Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
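You can run the same check from the CLI as a quick alternative to the web console:
$ oc get pods -n openshift-migration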
You can install the Cluster Application Migration Operator manually on an OpenShift Container Platform 3 source cluster.
Access to registry.redhat.io
OpenShift Container Platform 3 cluster configured to pull images from registry.redhat.io
To pull images, you must create an imagestreamsecret and copy it to each node in your cluster (a sketch of creating the secret follows this list).
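A minimal, hedged sketch of creating the pull secret with oc; the secret name imagestreamsecret and the credential placeholders are illustrative, and distributing the credentials to nodes depends on your cluster configuration:
$ oc create secret docker-registry imagestreamsecret \
    --docker-server=registry.redhat.io \
    --docker-username=<registry_username> \
    --docker-password=<registry_password>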
Log in to registry.redhat.io with your Red Hat Customer Portal credentials:
$ sudo podman login registry.redhat.io
If your system is configured for rootless Podman containers, sudo is not required for this procedure.
Download the operator.yml file:
$ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2):/operator.yml ./
Download the controller-3.yml file:
$ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2):/controller-3.yml ./
Log in to your OpenShift Container Platform 3 cluster.
Verify that the cluster can authenticate with registry.redhat.io:
$ oc run test --image registry.redhat.io/ubi8 --command -- sleep infinity
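To confirm that the test workload started, and to clean it up afterward, a hedged sketch (oc run applies the run=test label in this release; verify the label on your cluster):
$ oc get pods -l run=test    # STATUS should reach Running once the image pull succeeds
$ oc delete all -l run=test  # remove the test resources when finished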
Create the Cluster Application Migration Operator CR object:
$ oc create -f operator.yml
The output resembles the following:
namespace/openshift-migration created
rolebinding.rbac.authorization.k8s.io/system:deployers created
serviceaccount/migration-operator created
customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
role.rbac.authorization.k8s.io/migration-operator created
rolebinding.rbac.authorization.k8s.io/migration-operator created
clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
deployment.apps/migration-operator created
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
1 | You can ignore Error from server (AlreadyExists) messages. They are caused by the Cluster Application Migration Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases. |
Create the Migration controller CR object:
$ oc create -f controller-3.yml
Verify that the Velero and Restic Pods are running:
$ oc get pods -n openshift-migration
You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.2 target cluster and manually on an OpenShift Container Platform 3 source cluster.
For OpenShift Container Platform 4.2, you can build a custom Operator catalog image, push it to a local mirror image registry, and configure OLM to install the Cluster Application Migration Operator from the local registry. A mapping.txt file is created when you run the oc adm catalog mirror command.

On the OpenShift Container Platform 3 cluster, you can create a manifest file based on the Operator image and edit the file to point to your local image registry. The image value in the manifest file uses the sha256 value from the mapping.txt file. Then, you can use the local image to create the Cluster Application Migration Operator.
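For reference, a hedged sketch of the mirroring command that produces mapping.txt; the catalog image name and registry host are the placeholders used later in this procedure:
$ oc adm catalog mirror \
    <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry:latest \
    <local_registry_host_name>:<local_registry_host_port>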
Cluster administrators can configure OLM and OperatorHub to use local content in restricted network environments.
Cluster administrator access to an OpenShift Container Platform cluster and its internal registry.
Separate workstation without network restrictions.
If pushing images to the OpenShift Container Platform cluster’s internal registry, the registry must be exposed with a route.
podman version 1.4.4+
Disable the default OperatorSources.
Add disableAllDefaultSources: true to the spec:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
This disables the default OperatorSources that are configured by default during an OpenShift Container Platform installation.
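To confirm, you can list the OperatorSources in the marketplace namespace; after the patch, the default sources should no longer appear:
$ oc get operatorsources -n openshift-marketplace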
Retrieve package lists.
To get the list of packages that are available for the default OperatorSources, run the following curl commands from your workstation without network restrictions:
$ curl https://quay.io/cnr/api/v1/packages?namespace=redhat-operators > packages.txt
$ curl https://quay.io/cnr/api/v1/packages?namespace=community-operators >> packages.txt
$ curl https://quay.io/cnr/api/v1/packages?namespace=certified-operators >> packages.txt
Each package in the new packages.txt file is an Operator that you could add to your restricted network catalog. From this list, you can pull every Operator or a subset that you want to expose to users.
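Each curl call writes a JSON array, so packages.txt holds three concatenated arrays; jq processes them in sequence. A hedged sketch for finding a package by name:
$ jq -r '.[].name' packages.txt | grep etcd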
Pull Operator content.
For a given Operator in the package list, you must pull the latest released content:
$ curl https://quay.io/cnr/api/v1/packages/<namespace>/<operator_name>/<release>
This example uses the etcd Operator:
Retrieve the digest:
$ curl https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12
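The blob digest typically appears under content.digest in the returned JSON (an assumption about this API's response shape; inspect the output to confirm). A hedged extraction sketch:
$ curl https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12 | jq -r '.[0].content.digest'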
From that JSON, take the digest and use it to pull the gzipped archive:
$ curl -XGET https://quay.io/cnr/api/v1/packages/community-operators/etcd/blobs/sha256/8108475ee5e83a0187d6d0a729451ef1ce6d34c44a868a200151c36f3232822b \
    -o etcd.tar.gz
To pull the information out, you must untar the archive into a manifests/<operator_name>/ directory with all the other Operators that you want. For example, to untar to an existing directory called manifests/etcd/:
$ mkdir -p manifests/etcd/ (1)
$ tar -xf etcd.tar.gz -C manifests/etcd/
1 | Create different subdirectories for each extracted archive so that files are not overwritten by subsequent extractions for other Operators. |
Break apart bundle.yaml content, if necessary.
In your new manifests/<operator_name> directory, the goal is to get your bundle in the following directory structure:
manifests/
└── etcd
    ├── 0.0.12
    │   ├── clusterserviceversion.yaml
    │   └── customresourcedefinition.yaml
    └── package.yaml
If you see files already in this structure, you can skip this step. However, if you instead see only a single file called bundle.yaml, you must first break this file up to conform to the required structure.
You must separate the CSV content under data.clusterServiceVersions (each file in the list), the CRD content under data.customResourceDefinitions (each file in the list), and the package content under data.packages into their own files.
For the CSV file creation, find the following lines in the bundle.yaml file:
data:
clusterServiceVersions: |
Omit those lines, but save a new file consisting of the full CSV resource content beginning with the following lines, removing the prepended - character:
clusterserviceversion.yaml file snippet
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
[...]
For the CRD file creation, find the following line in the bundle.yaml file:
customResourceDefinitions: |
Omit this line, but save new files, one per full CRD resource, with content beginning with the following lines, removing the prepended - character:
customresourcedefinition.yaml file snippet
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
[...]
For the package file creation, find the following line in the bundle.yaml file:
packages: |
Omit this line, but save a new file consisting of the package content beginning with the following lines, removing the prepended - character, and ending with a packageName entry:
package.yaml file snippet
channels:
- currentCSV: etcdoperator.v0.9.4
  name: singlenamespace-alpha
- currentCSV: etcdoperator.v0.9.4-clusterwide
  name: clusterwide-alpha
defaultChannel: singlenamespace-alpha
packageName: etcd
Identify images required by the Operators you want to use.
Inspect the CSV files of each Operator for image: fields to identify the pull specs for any images required by the Operator, and note them for use in a later step.
For example, in the following deployments spec of an etcd Operator CSV:
spec:
serviceAccountName: etcd-operator
containers:
- name: etcd-operator
command:
- etcd-operator
- --create-crd=false
image: quay.io/coreos/etcd-operator@sha256:bd944a211eaf8f31da5e6d69e8541e7cada8f16a9f7a5a570b22478997819943 (1)
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
1 | Image required by Operator. |
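A quick, hedged way to collect these pull specs across everything you extracted is to search the manifests directory:
$ grep -rn 'image:' manifests/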
Create an Operator catalog image.
Save the following to a Dockerfile, for example named custom-registry.Dockerfile:
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.2.24 AS builder
COPY manifests manifests
RUN /bin/initializer -o ./bundles.db
FROM registry.access.redhat.com/ubi7/ubi
COPY --from=builder /registry/bundles.db /bundles.db
COPY --from=builder /usr/bin/registry-server /registry-server
COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
EXPOSE 50051
ENTRYPOINT ["/registry-server"]
CMD ["--database", "bundles.db"]
Use the podman command to create and tag the container image from the Dockerfile:
$ podman build -f custom-registry.Dockerfile \
    -t <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry (1)
1 | Tag the image for the internal registry of the restricted network OpenShift Container Platform cluster and any namespace. |
Push the Operator catalog image to a registry.
Your new Operator catalog image must be pushed to a registry that the restricted network OpenShift Container Platform cluster can access. This can be the internal registry of the cluster itself or another registry that the cluster has network access to, such as an on-premise Quay Enterprise registry.
For this example, log in and push the image to the internal registry of the OpenShift Container Platform cluster.
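Authenticating podman against the internal registry typically uses your cluster token; a hedged sketch, assuming the registry is exposed with a route and you are already logged in with oc:
$ podman login -u $(oc whoami) -p $(oc whoami -t) \
    <local_registry_host_name>:<local_registry_host_port>
Then push the image: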
$ podman push <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry
Create a CatalogSource pointing to the new Operator catalog image.
Save the following to a file, for example my-operator-catalog.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: my-operator-catalog
namespace: openshift-marketplace
spec:
displayName: My Operator Catalog
sourceType: grpc
image: <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry:latest
Create the CatalogSource resource:
$ oc create -f my-operator-catalog.yaml
Verify the CatalogSource and package manifest are created successfully:
# oc get pods -n openshift-marketplace
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h

# oc get catalogsource -n openshift-marketplace
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s

# oc get packagemanifest -n openshift-marketplace
NAME   CATALOG               AGE
etcd   My Operator Catalog   34s
You should also be able to view them from the OperatorHub page in the web console.
Mirror the images required by the Operators you want to use.
Determine the images defined by the Operator(s) that you are expecting. This example uses the etcd Operator, requiring the quay.io/coreos/etcd-operator image.
This procedure only shows mirroring Operator images themselves and not Operand images, which are the components that an Operator manages. Operand images must be mirrored as well; see each Operator’s documentation to identify the required Operand images.
To use mirrored images, you must first create an ImageContentSourcePolicy for each image to change the source location of the Operator catalog image. For example:
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
name: etcd-operator
spec:
repositoryDigestMirrors:
- mirrors:
- <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
source: quay.io/coreos/etcd-operator
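Create the ImageContentSourcePolicy resource, assuming you saved the manifest to a file such as etcd-icsp.yaml (a hypothetical name):
$ oc create -f etcd-icsp.yaml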
Use the oc image mirror command from your workstation without network restrictions to pull the image from the source registry and push it to the internal registry without it being stored locally:
$ oc image mirror quay.io/coreos/etcd-operator \
    <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
You can now install the Operator from the OperatorHub on your restricted network OpenShift Container Platform cluster.
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).
The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.
You created a custom Operator catalog and pushed it to a mirror registry.
You configured OLM to install the Cluster Application Migration Operator from the mirror registry.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Click Create.
Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
You can create a manifest file based on the Cluster Application Migration Operator image and edit the manifest to point to your local image registry. Then, you can use the local image to create the Cluster Application Migration Operator on an OpenShift Container Platform 3 source cluster.
Access to registry.redhat.io
Linux workstation with unrestricted network access
Mirror registry that supports Docker v2-2
Custom Operator catalog pushed to a mirror registry
On the workstation with unrestricted network access, log in to registry.redhat.io with your Red Hat Customer Portal credentials:
$ sudo podman login registry.redhat.io
If your system is configured for rootless Podman containers, sudo is not required for this procedure.
Download the operator.yml file:
$ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2):/operator.yml ./
Download the controller-3.yml file:
$ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2):/controller-3.yml ./
Obtain the Operator image value from the mapping.txt file that was created when you ran the oc adm catalog mirror command on the OpenShift Container Platform 4 cluster:
$ grep openshift-migration-rhel7-operator ./mapping.txt | grep rhcam-1-2
The output shows the mapping between the registry.redhat.io image and your mirror registry image:
registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhcam-1-2/openshift-migration-rhel7-operator
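If you want to script the extraction of the sha256 value from that line, a hedged sed sketch:
$ grep openshift-migration-rhel7-operator ./mapping.txt | grep rhcam-1-2 \
    | sed 's/.*@sha256:\([a-f0-9]*\)=.*/\1/'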
Update the image and REGISTRY values in the operator.yml file:
containers:
- name: ansible
image: <registry.apps.example.com>/rhcam-1-2/openshift-migration-rhel7-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
...
- name: operator
image: <registry.apps.example.com>/rhcam-1-2/openshift-migration-rhel7-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
...
env:
- name: REGISTRY
value: <registry.apps.example.com> (2)
1 | Specify your mirror registry and the sha256 value of the Operator image in the mapping.txt file. |
2 | Specify your mirror registry. |
Log in to your OpenShift Container Platform 3 cluster.
Create the Cluster Application Migration Operator CR object:
$ oc create -f operator.yml
The output resembles the following:
namespace/openshift-migration created
rolebinding.rbac.authorization.k8s.io/system:deployers created
serviceaccount/migration-operator created
customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
role.rbac.authorization.k8s.io/migration-operator created
rolebinding.rbac.authorization.k8s.io/migration-operator created
clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
deployment.apps/migration-operator created
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
1 | You can ignore Error from server (AlreadyExists) messages. They are caused by the Cluster Application Migration Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases. |
Create the Migration controller CR object:
$ oc create -f controller-3.yml
Verify that the Velero and Restic Pods are running:
$ oc get pods -n openshift-migration
You can launch the CAM web console in a browser.
Log in to the OpenShift Container Platform cluster on which you have installed the CAM tool.
Obtain the CAM web console URL by entering the following command:
$ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'
The output resembles the following:
https://migration-openshift-migration.apps.cluster.openshift.com
Launch a browser and navigate to the CAM web console.
If you try to access the CAM web console immediately after installing the Cluster Application Migration Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.
If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.
Log in with your OpenShift Container Platform username and password.