
Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.

If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.


File-based catalogs

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). The format is a plain-text (JSON or YAML), declarative configuration evolution of the earlier SQLite database format, and it is fully backwards compatible.

As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 were released in the deprecated SQLite database format.

The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format and Mirroring images for a disconnected installation using the oc-mirror plugin.

Creating a file-based catalog image

You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.

Prerequisites
  • opm

  • podman version 1.9.3+

  • A bundle image built and pushed to a registry that supports Docker v2-2

Procedure
  1. Initialize the catalog:

    1. Create a directory for the catalog by running the following command:

      $ mkdir <catalog_dir>
    2. Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:

      $ opm generate dockerfile <catalog_dir> \
          -i registry.redhat.io/openshift4/ose-operator-registry:v4.12 (1)
      1 Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.

      The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

      Example directory structure
      . (1)
      ├── <catalog_dir> (2)
      └── <catalog_dir>.Dockerfile (3)
      1 Parent directory
      2 Catalog directory
      3 Dockerfile generated by the opm generate dockerfile command
    3. Populate the catalog with the package definition for your Operator by running the opm init command:

      $ opm init <operator_name> \ (1)
          --default-channel=preview \ (2)
          --description=./README.md \ (3)
          --icon=./operator-icon.svg \ (4)
          --output yaml \ (5)
          > <catalog_dir>/index.yaml (6)
      1 Operator, or package, name
      2 Channel that subscriptions default to if unspecified
      3 Path to the Operator’s README.md or other documentation
      4 Path to the Operator’s icon
      5 Output format: JSON or YAML
      6 Path for creating the catalog configuration file

      This command generates an olm.package declarative config blob in the specified catalog configuration file.
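
      For reference, the generated olm.package blob might look similar to the following example. This is an illustrative sketch based on the flags shown above; the exact contents and key order depend on your opm version and inputs:

      Example olm.package blob
      ---
      defaultChannel: preview
      description: <contents_of_readme>
      icon:
        base64data: <base64_string>
        mediatype: image/svg+xml
      name: <operator_name>
      schema: olm.package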

  2. Add a bundle to the catalog by running the opm render command:

    $ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ (1)
        --output=yaml \
        >> <catalog_dir>/index.yaml (2)
    1 Pull spec for the bundle image
    2 Path to the catalog configuration file

    Channels must contain at least one bundle.

  3. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file:

    Example channel entry
    ---
    schema: olm.channel
    package: <operator_name>
    name: preview
    entries:
      - name: <operator_name>.v0.1.0 (1)
    1 Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
  4. Validate the file-based catalog:

    1. Run the opm validate command against the catalog directory:

      $ opm validate <catalog_dir>
    2. Check that the error code is 0:

      $ echo $?
      Example output
      0
  5. Build the catalog image by running the podman build command:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the catalog image to a registry:

    1. If required, authenticate with your target registry by running the podman login command:

      $ podman login <registry>
    2. Push the catalog image by running the podman push command:

      $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>

Updating or filtering a file-based catalog image

You can use the opm CLI to update or filter (also known as prune) a catalog image that uses the file-based catalog format. By extracting and modifying the contents of an existing catalog image, you can update, add, or remove one or more Operator package entries from the catalog. You can then rebuild the image as an updated version of the catalog.

Alternatively, if you already have a catalog image on a mirror registry, you can use the oc-mirror CLI plugin to automatically prune any removed images from an updated source version of that catalog image while mirroring it to the target registry.

For more information about the oc-mirror plugin and this use case, see the "Keeping your mirror registry content updated" section, and specifically the "Pruning images" subsection, of "Mirroring images for a disconnected installation using the oc-mirror plugin".

Prerequisites
  • opm CLI.

  • podman version 1.9.3+.

  • A file-based catalog image.

  • A catalog directory structure on your workstation that was recently initialized for this catalog.

    If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.

Procedure
  1. Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:

    $ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
        -o yaml > <catalog_dir>/index.yaml

    Alternatively, you can use the -o json flag to output in JSON format.

  2. Modify the contents of the resulting index.yaml file to your specifications by updating, adding, or removing one or more Operator package entries.

    After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.

    For example, if you wanted to remove an Operator package, the following example lists a set of olm.package, olm.channel, and olm.bundle blobs which must be deleted to remove the package from the catalog:

    Example removed entries
    ---
    defaultChannel: release-2.7
    icon:
      base64data: <base64_string>
      mediatype: image/svg+xml
    name: example-operator
    schema: olm.package
    ---
    entries:
    - name: example-operator.v2.7.0
      skipRange: '>=2.6.0 <2.7.0'
    - name: example-operator.v2.7.1
      replaces: example-operator.v2.7.0
      skipRange: '>=2.6.0 <2.7.1'
    - name: example-operator.v2.7.2
      replaces: example-operator.v2.7.1
      skipRange: '>=2.6.0 <2.7.2'
    - name: example-operator.v2.7.3
      replaces: example-operator.v2.7.2
      skipRange: '>=2.6.0 <2.7.3'
    - name: example-operator.v2.7.4
      replaces: example-operator.v2.7.3
      skipRange: '>=2.6.0 <2.7.4'
    name: release-2.7
    package: example-operator
    schema: olm.channel
    ---
    image: example.com/example-inc/example-operator-bundle@sha256:<digest>
    name: example-operator.v2.7.0
    package: example-operator
    properties:
    - type: olm.gvk
      value:
        group: example-group.example.io
        kind: MyObject
        version: v1alpha1
    - type: olm.gvk
      value:
        group: example-group.example.io
        kind: MyOtherObject
        version: v1beta1
    - type: olm.package
      value:
        packageName: example-operator
        version: 2.7.0
    - type: olm.bundle.object
      value:
        data: <base64_string>
    - type: olm.bundle.object
      value:
        data: <base64_string>
    relatedImages:
    - image: example.com/example-inc/example-related-image@sha256:<digest>
      name: example-related-image
    schema: olm.bundle
    ---
  3. Save your changes to the index.yaml file.

  4. Validate the catalog:

    $ opm validate <catalog_dir>
  5. Rebuild the catalog:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the updated catalog image to a registry:

    $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
Verification
  1. In the web console, navigate to the OperatorHub configuration resource in the Administration → Cluster Settings → Configuration page.

  2. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.

    For more information, see "Adding a catalog source to a cluster" later in this section.

  3. After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.
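
    If you prefer the CLI, you can also check whether the catalog source has reached the READY state. The following is a hedged sketch; it assumes the catalog source name and namespace that you used when adding or updating the catalog source, and it reads the standard CatalogSource status field:

    $ oc get catalogsource <catalog_source_name> -n <namespace> \
        -o jsonpath='{.status.connectionState.lastObservedState}'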

SQLite-based catalogs

The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

Creating a SQLite-based index image

You can create an index image based on the SQLite database format by using the opm CLI.

Prerequisites
  • opm

  • podman version 1.9.3+

  • A bundle image built and pushed to a registry that supports Docker v2-2

Procedure
  1. Start a new index:

    $ opm index add \
        --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \(1)
        --tag <registry>/<namespace>/<index_image_name>:<tag> \(2)
        [--binary-image <registry_base_image>] (3)
    1 Comma-separated list of bundle images to add to the index.
    2 The image tag that you want the index image to have.
    3 Optional: An alternative registry base image to use for serving the catalog.
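
    For reference, a concrete invocation might look like the following. The registry, namespace, and image names are hypothetical placeholders, not values required by this procedure:

    Example command
    $ opm index add \
        --bundles quay.io/example-inc/example-operator-bundle:v0.1.0 \
        --tag quay.io/example-inc/example-operator-index:v0.1.0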
  2. Push the index image to a registry.

    1. If required, authenticate with your target registry:

      $ podman login <registry>
    2. Push the index image:

      $ podman push <registry>/<namespace>/<index_image_name>:<tag>

Updating a SQLite-based index image

After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.

You can update an existing index image using the opm index add command.

Prerequisites
  • opm

  • podman version 1.9.3+

  • An index image built and pushed to a registry.

  • An existing catalog source referencing the index image.

Procedure
  1. Update the existing index by adding bundle images:

    $ opm index add \
        --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \(1)
        --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \(2)
        --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \(3)
        --pull-tool podman (4)
    1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
    2 The --from-index flag specifies the previously pushed index.
    3 The --tag flag specifies the image tag to apply to the updated index image.
    4 The --pull-tool flag specifies the tool used to pull container images.

    where:

    <registry>

    Specifies the hostname of the registry, such as quay.io or mirror.example.com.

    <namespace>

    Specifies the namespace of the registry, such as ocs-dev or abc.

    <new_bundle_image>

    Specifies the new bundle image to add to the registry, such as ocs-operator.

    <digest>

    Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.

    <existing_index_image>

    Specifies the previously pushed image, such as abc-redhat-operator-index.

    <existing_tag>

    Specifies a previously pushed image tag, such as 4.12.

    <updated_tag>

    Specifies the image tag to apply to the updated index image, such as 4.12.1.

    Example command
    $ opm index add \
        --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
        --from-index mirror.example.com/abc/abc-redhat-operator-index:4.12 \
        --tag mirror.example.com/abc/abc-redhat-operator-index:4.12.1 \
        --pull-tool podman
  2. Push the updated index image:

    $ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
  3. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:

    $ oc get packagemanifests -n openshift-marketplace
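
    The output varies with the contents of your catalog. A hypothetical example, assuming the ocs-operator bundle added in the previous step, might resemble the following:

    Example output
    NAME           CATALOG                  AGE
    ocs-operator   <catalog_display_name>   2m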

Filtering a SQLite-based index image

An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.

Prerequisites
  • podman version 1.9.3+

  • grpcurl (third-party command-line tool)

  • opm

  • Access to a registry that supports Docker v2-2

Procedure
  1. Authenticate with your target registry:

    $ podman login <target_registry>
  2. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.12
      Example output
      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.12...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051
    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list
      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...
    4. In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.

  3. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.12 \(1)
        -p advanced-cluster-management,jaeger-product,quay-operator \(2)
        [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \(3)
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.12 (4)
    1 Index to prune.
    2 Comma-separated list of packages to keep.
    3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version.
    4 Custom tag for new index image being built.
  4. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.12

    where <namespace> is any existing namespace on the registry.

Catalog sources and pod security admission

Pod security admission was introduced in OpenShift Container Platform 4.11 to enforce pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OpenShift Container Platform 4.11 cannot run under restricted pod security enforcement.

In OpenShift Container Platform 4.12, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy.

Default restricted enforcement for all namespaces is planned for inclusion in a future OpenShift Container Platform release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admissions label for the namespace must be explicitly set.

If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in OpenShift Container Platform 4.12.

However, it is recommended that you take action now to ensure your catalog sources run under restricted pod security enforcement. If you do not take action to ensure your catalog sources run under restricted pod security enforcement, your catalog sources might not run in future OpenShift Container Platform releases.

As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:

  • Migrate your catalog to the file-based catalog format.

  • Update your catalog image with a version of the opm CLI tool released with OpenShift Container Platform 4.11 or later.

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.

Migrating SQLite database catalogs to the file-based catalog format

You can update your deprecated SQLite database format catalogs to the file-based catalog format.

Prerequisites
  • SQLite database catalog source

  • Cluster administrator permissions

  • Latest version of the opm CLI tool released with OpenShift Container Platform 4.12 installed on your workstation

Procedure
  1. Migrate your SQLite database catalog to a file-based catalog by running the following command:

    $ opm migrate <registry_image> <fbc_directory>
  2. Generate a Dockerfile for your file-based catalog by running the following command:

    $ opm generate dockerfile <fbc_directory> \
      --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.12
Next steps
  • The generated Dockerfile can be built, tagged, and pushed to your registry.
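
    For example, a minimal sketch using podman, assuming the directory layout produced by the previous commands and a hypothetical catalog image name:

    $ podman build . \
        -f <fbc_directory>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>

    $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>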

Rebuilding SQLite database catalog images

You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool that is released with your version of OpenShift Container Platform.

Prerequisites
  • SQLite database catalog source

  • Cluster administrator permissions

  • Latest version of the opm CLI tool released with OpenShift Container Platform 4.12 installed on your workstation

Procedure
  • Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:

    $ opm index add \
      --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.12 \
      --from-index <your_registry_image> \
      --bundles "" \
      -t <your_registry_image>
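
    A typical next step is to push the rebuilt image to your registry, as in the other procedures in this section:

    $ podman push <your_registry_image>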

Configuring catalogs to run with elevated permissions

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:

  • Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.

  • Label the catalog source namespace for baseline or privileged pod security enforcement.

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

Prerequisites
  • SQLite database catalog source

  • Cluster administrator permissions

  • Target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged

Procedure
  1. Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig field to legacy, as shown in the following example:

    Example CatalogSource definition
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-catsrc
      namespace: my-ns
    spec:
      sourceType: grpc
      grpcPodConfig:
        securityContextConfig: legacy
      image: my-image:latest

    In OpenShift Container Platform 4.12, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of OpenShift Container Platform, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.

  2. Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:

    Example <namespace>.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
    ...
      labels:
        security.openshift.io/scc.podSecurityLabelSync: "false" (1)
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: baseline (2)
      name: "<namespace_name>"
    1 Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
    2 Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.

Adding a catalog source to a cluster

Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.

Prerequisites
  • An index image built and pushed to a registry.

Procedure
  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogSource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace (1)
        annotations:
          olm.catalogImageTemplate: (2)
            "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
      spec:
        sourceType: grpc
        grpcPodConfig:
          securityContextConfig: <security_mode> (3)
        image: <registry>/<namespace>/<index_image_name>:<tag> (4)
        displayName: My Operator Catalog
        publisher: <publisher_name> (5)
        updateStrategy:
          registryPoll: (6)
            interval: 30m
      1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
      2 Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
      3 Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
      4 Specify your index image. If you specify a tag after the image name, for example :v4.12, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
      5 Specify your name or an organization name publishing the catalog.
      6 Catalog sources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc apply -f catalogSource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace
      Example output
      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h
    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace
      Example output
      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s
    3. Check the package manifest:

      $ oc get packagemanifest -n openshift-marketplace
      Example output
      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

Accessing images for Operators from private registries

If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation.

Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access.

The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access:

Index images

A CatalogSource object can reference an index image. Index images use the Operator bundle format and are catalogs packaged as container images that are hosted in image registries. If an index image is hosted in a private registry, a secret can be used to enable pull access.

Bundle images

Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access.

Operator and Operand images

If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed.

Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces.

Prerequisites
  • At least one of the following hosted in a private registry:

    • An index image or catalog image.

    • An Operator bundle image.

    • An Operator or Operand image.

Procedure
  1. Create a secret for each required private registry.

    1. Log in to the private registry to create or update your registry credentials file:

      $ podman login <registry>:<port>

      The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is ${XDG_RUNTIME_DIR}/containers/auth.json. For the docker CLI, the default location is /root/.docker/config.json.

    2. It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull.

      A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example:

      File storing credentials for multiple registries
      {
          "auths": {
              "registry.redhat.io": {
                  "auth": "FrNHNydQXdzclNqdg=="
              },
              "quay.io": {
                  "auth": "fegdsRib21iMQ=="
              },
              "https://quay.io/my-namespace/my-user/my-image": {
                  "auth": "eWfjwsDdfsa221=="
              },
              "https://quay.io/my-namespace/my-user": {
                  "auth": "feFweDdscw34rR=="
              },
              "https://quay.io/my-namespace": {
                  "auth": "frwEews4fescyq=="
              }
          }
      }

      Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods:

      • Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains.

      • Edit your registry credentials file and separate the registry details to be stored in multiple files. For example:

        File storing credentials for one registry
        {
                "auths": {
                        "registry.redhat.io": {
                                "auth": "FrNHNydQXdzclNqdg=="
                        }
                }
        }
        File storing credentials for another registry
        {
                "auths": {
                        "quay.io": {
                                "auth": "Xd2lhdsbnRib21iMQ=="
                        }
                }
        }
    3. Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry:

      $ oc create secret generic <secret_name> \
          -n openshift-marketplace \
          --from-file=.dockerconfigjson=<path/to/registry/credentials> \
          --type=kubernetes.io/dockerconfigjson

      Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path.

  2. Create or update an existing CatalogSource object to reference one or more secrets:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-operator-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      secrets: (1)
      - "<secret_name_1>"
      - "<secret_name_2>"
      grpcPodConfig:
        securityContextConfig: <security_mode> (2)
      image: <registry>:<port>/<namespace>/<image>:<tag>
      displayName: My Operator Catalog
      publisher: <publisher_name>
      updateStrategy:
        registryPoll:
          interval: 30m
    1 Add a spec.secrets section and specify any required secrets.
    2 Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
  3. If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces.

    • To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace.

      Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster.

      1. Extract the .dockerconfigjson file from the global pull secret:

        $ oc extract secret/pull-secret -n openshift-config --confirm
      2. Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file:

        $ cat .dockerconfigjson | \
            jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \(1)
            > new_dockerconfigjson
        1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials.
      3. Update the global pull secret with the new file:

        $ oc set data secret/pull-secret -n openshift-config \
            --from-file=.dockerconfigjson=new_dockerconfigjson
    • To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace.

      1. Recreate the secret that you created for the openshift-marketplace namespace in the tenant namespace:

        $ oc create secret generic <secret_name> \
            -n <tenant_namespace> \
            --from-file=.dockerconfigjson=<path/to/registry/credentials> \
            --type=kubernetes.io/dockerconfigjson
      2. Verify the name of the service account for the Operator by searching the tenant namespace:

        $ oc get sa -n <tenant_namespace> (1)
        1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace.
        Example output
        NAME            SECRETS   AGE
        builder         2         6m1s
        default         2         6m1s
        deployer        2         6m1s
        etcd-operator   2         5m18s (1)
        
        1 Service account for an installed etcd Operator.
      3. Link the secret to the service account for the Operator:

        $ oc secrets link <operator_sa> \
            -n <tenant_namespace> \
             <secret_name> \
            --for=pull
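
        To confirm that the secret is linked for image pulls, you can inspect the service account; the secret should appear in its list of image pull secrets:

        $ oc describe sa <operator_sa> -n <tenant_namespace>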

Disabling the default OperatorHub catalog sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs.

Procedure
  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.

Removing custom catalogs

As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.

Procedure
  1. In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.

  2. Click the Configuration tab, and then click OperatorHub.

  3. Click the Sources tab.

  4. Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.