You can install the Cluster Application Migration Operator on your OpenShift Container Platform 4.2 target cluster and 4.1 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.
Optional: You can configure the Cluster Application Migration Operator to install the CAM tool on an OpenShift Container Platform 3 cluster or on a remote cluster.
In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.
You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.2 target cluster and on an OpenShift Container Platform 4.1 source cluster.
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).
The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Click Create.
Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
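For reference, the MigrationController resource created in this procedure typically resembles the following minimal sketch. The apiVersion, resource name, and values shown here are assumptions meant to illustrate the shape of the manifest; accept the values that the console pre-populates unless you need to change them.
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # On a target cluster the migration controller and web UI run locally,
  # so both parameters remain enabled (assumed defaults).
  migration_controller: true
  migration_ui: true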
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).
In the OpenShift Container Platform web console, click Catalog → OperatorHub.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Update the migration_controller and migration_ui parameters and add the deprecated_cors_configuration parameter to the spec stanza:
spec:
  ...
  migration_controller: false
  migration_ui: false
  ...
  deprecated_cors_configuration: true
Click Create.
Click Workloads → Pods to verify that the Restic and Velero Pods are running.
You can build a custom Operator catalog image for OpenShift Container Platform 4, push it to a local mirror image registry, and configure the Operator Lifecycle Manager to install the Cluster Application Migration Operator from the local registry.
Cluster administrators can configure OLM and OperatorHub to use local content in restricted network environments.
Cluster administrator access to an OpenShift Container Platform cluster and its internal registry.
Separate workstation without network restrictions.
If pushing images to the OpenShift Container Platform cluster’s internal registry, the registry must be exposed with a route.
podman version 1.4.4+
Disable the default OperatorSources.
Add disableAllDefaultSources: true to the spec:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
This disables the default OperatorSources that are configured by default during an OpenShift Container Platform installation.
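To confirm that the patch was applied, you can inspect the resource and check that spec.disableAllDefaultSources is set to true:
$ oc get OperatorHub cluster -o yaml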
Retrieve package lists.
To get the list of packages that are available for the default OperatorSources, run the following curl commands from your workstation without network restrictions:
$ curl https://quay.io/cnr/api/v1/packages?namespace=redhat-operators > packages.txt
$ curl https://quay.io/cnr/api/v1/packages?namespace=community-operators >> packages.txt
$ curl https://quay.io/cnr/api/v1/packages?namespace=certified-operators >> packages.txt
Each package in the new packages.txt file is an Operator that you could add to your restricted network catalog. From this list, you could either pull every Operator or a subset that you would like to expose to users.
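If jq is installed, you can list just the package names from packages.txt to decide which Operators to pull. This is a sketch that assumes each API response is a JSON array of objects with a name field:
$ jq -r '.[].name' packages.txt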
Pull Operator content.
For a given Operator in the package list, you must pull the latest released content:
$ curl https://quay.io/cnr/api/v1/packages/<namespace>/<operator_name>/<release>
This example uses the etcd Operator:
Retrieve the digest:
$ curl https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12
From that JSON, take the digest and use it to pull the gzipped archive:
$ curl -XGET https://quay.io/cnr/api/v1/packages/community-operators/etcd/blobs/sha256/8108475ee5e83a0187d6d0a729451ef1ce6d34c44a868a200151c36f3232822b \
    -o etcd.tar.gz
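As an optional shortcut, the digest lookup and the blob download can be chained with jq. This sketch assumes jq is installed and that the digest is exposed at .[0].content.digest in the release response; verify the field path against your actual output before relying on it:
$ digest=$(curl -s https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12 | jq -r '.[0].content.digest')
$ curl -XGET https://quay.io/cnr/api/v1/packages/community-operators/etcd/blobs/sha256/${digest#sha256:} \
    -o etcd.tar.gz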
To pull the information out, you must untar the archive into a manifests/<operator_name>/ directory with all the other Operators that you want. For example, to untar to an existing directory called manifests/etcd/:
$ mkdir -p manifests/etcd/ (1)
$ tar -xf etcd.tar.gz -C manifests/etcd/
(1) Create different subdirectories for each extracted archive so that files are not overwritten by subsequent extractions for other Operators.
Break apart bundle.yaml content, if necessary.
In your new manifests/<operator_name> directory, the goal is to get your bundle in the following directory structure:
manifests/
└── etcd
    ├── 0.0.12
    │   ├── clusterserviceversion.yaml
    │   └── customresourcedefinition.yaml
    └── package.yaml
If you see files already in this structure, you can skip this step. However, if you instead see only a single file called bundle.yaml, you must first break this file up to conform to the required structure.
You must separate the CSV content under data.clusterServiceVersions (each file in the list), the CRD content under data.customResourceDefinitions (each file in the list), and the package content under data.packages into their own files.
For the CSV file creation, find the following lines in the bundle.yaml file:
data:
  clusterServiceVersions: |
Omit those lines, but save a new file consisting of the full CSV resource content beginning with the following lines, removing the prepended - character:
clusterserviceversion.yaml file snippet:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
[...]
For the CRD file creation, find the following line in the bundle.yaml file:
customResourceDefinitions: |
Omit this line, but save new files, each consisting of the full content of a single CRD resource, beginning with the following lines and removing the prepended - character:
customresourcedefinition.yaml file snippet:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
[...]
For the package file creation, find the following line in the bundle.yaml file:
packages: |
Omit this line, but save a new file consisting of the package content, beginning with the following lines, removing the prepended - character, and ending with a packageName entry:
package.yaml file:
channels:
- currentCSV: etcdoperator.v0.9.4
  name: singlenamespace-alpha
- currentCSV: etcdoperator.v0.9.4-clusterwide
  name: clusterwide-alpha
defaultChannel: singlenamespace-alpha
packageName: etcd
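If you prefer to script the split, the three data blocks can be extracted with yq before you separate the individual resources. This is a minimal sketch assuming yq v4 syntax; the extracted files still contain YAML lists, so you must split multi-item lists into individual files and remove the leading - yourself, as described above:
$ yq e '.data.clusterServiceVersions' bundle.yaml > csvs-raw.yaml
$ yq e '.data.customResourceDefinitions' bundle.yaml > crds-raw.yaml
$ yq e '.data.packages' bundle.yaml > package-raw.yaml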
Identify images required by the Operators you want to use.
Inspect the CSV files of each Operator for image: fields to identify the pull specs for any images required by the Operator, and note them for use in a later step.
For example, in the following deployments spec of an etcd Operator CSV:
spec:
  serviceAccountName: etcd-operator
  containers:
  - name: etcd-operator
    command:
    - etcd-operator
    - --create-crd=false
    image: quay.io/coreos/etcd-operator@sha256:bd944a211eaf8f31da5e6d69e8541e7cada8f16a9f7a5a570b22478997819943 (1)
    env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
(1) Image required by the Operator.
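To enumerate every pull spec across all of the manifests you pulled, a simple grep sweep is enough (adjust the path if your manifests directory is named differently):
$ grep -rh 'image:' manifests/ | sort -u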
Create an Operator catalog image.
Save the following to a Dockerfile, for example named custom-registry.Dockerfile:
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.2.24 AS builder
COPY manifests manifests
RUN /bin/initializer -o ./bundles.db
FROM registry.access.redhat.com/ubi7/ubi
COPY --from=builder /registry/bundles.db /bundles.db
COPY --from=builder /usr/bin/registry-server /registry-server
COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
EXPOSE 50051
ENTRYPOINT ["/registry-server"]
CMD ["--database", "bundles.db"]
Use the podman command to create and tag the container image from the Dockerfile:
$ podman build -f custom-registry.Dockerfile \
    -t <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry (1)
(1) Tag the image for the internal registry of the restricted network OpenShift Container Platform cluster and any namespace.
Push the Operator catalog image to a registry.
Your new Operator catalog image must be pushed to a registry that the restricted network OpenShift Container Platform cluster can access. This can be the internal registry of the cluster itself or another registry that the cluster has network access to, such as an on-premise Quay Enterprise registry.
For this example, log in and push the image to the internal registry of the OpenShift Container Platform cluster:
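If you are not already authenticated to the registry, a podman login is typically needed before the push; the host and port are placeholders for your registry:
$ podman login <local_registry_host_name>:<local_registry_host_port>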
$ podman push <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry
Create a CatalogSource pointing to the new Operator catalog image.
Save the following to a file, for example my-operator-catalog.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: My Operator Catalog
  sourceType: grpc
  image: <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry:latest
Create the CatalogSource resource:
$ oc create -f my-operator-catalog.yaml
Verify the CatalogSource and package manifest are created successfully:
# oc get pods -n openshift-marketplace
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h

# oc get catalogsource -n openshift-marketplace
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s

# oc get packagemanifest -n openshift-marketplace
NAME   CATALOG               AGE
etcd   My Operator Catalog   34s
You should also be able to view them from the OperatorHub page in the web console.
Mirror the images required by the Operators you want to use.
Determine the images defined by the Operator(s) that you are expecting. This example uses the etcd Operator, requiring the quay.io/coreos/etcd-operator image.
This procedure only shows mirroring Operator images themselves and not Operand images, which are the components that an Operator manages. Operand images must be mirrored as well; see each Operator’s documentation to identify the required Operand images.
To use mirrored images, you must first create an ImageContentSourcePolicy for each image to change the source location of the Operator catalog image. For example:
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: etcd-operator
spec:
  repositoryDigestMirrors:
  - mirrors:
    - <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
    source: quay.io/coreos/etcd-operator
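Assuming you saved the policy to a file such as etcd-icsp.yaml (the file name is illustrative), create the resource:
$ oc create -f etcd-icsp.yaml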
Use the oc image mirror command from your workstation without network restrictions to pull the image from the source registry and push it to the internal registry without it being stored locally:
$ oc image mirror quay.io/coreos/etcd-operator \
    <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
You can now install the Operator from the OperatorHub on your restricted network OpenShift Container Platform cluster.
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).
The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.
You created a custom Operator catalog and pushed it to a mirror registry.
You configured OLM to install the Cluster Application Migration Operator from the mirror registry.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Click Create.
Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4 source cluster with the Operator Lifecycle Manager (OLM).
You created a custom Operator catalog and pushed it to a mirror registry.
You configured OLM to install the Cluster Application Migration Operator from the mirror registry.
Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.
Select the Cluster Application Migration Operator and click Install.
On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.
Click Subscribe.
On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.
Under Provided APIs, click View 12 more….
Click Create New → MigrationController.
Click Create.
You can launch the CAM web console in a browser.
Log in to the OpenShift Container Platform cluster on which you have installed the CAM tool.
Obtain the CAM web console URL by entering the following command:
$ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'
The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.
Launch a browser and navigate to the CAM web console.
If you try to access the CAM web console immediately after installing the Cluster Application Migration Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.
If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.
Log in with your OpenShift Container Platform username and password.