Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OpenShift Container Platform cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
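With manual updates, the pending request surfaces as an unapproved InstallPlan object. For example, you can list install plans in the Operator’s namespace (the namespace shown here is illustrative) and look for entries with APPROVED set to false:
$ oc get installplans -n openshift-operators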
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
Read the information about the Operator and click Install.
On the Install Operator page:
Select one of the following:
All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
Select an Update channel (if more than one is available).
Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace.
If it does not:
On the Workloads → Pods page, check the logs of any pods in the openshift-operators project (or other relevant namespace, if the A specific namespace… installation mode was selected) that are reporting issues, to troubleshoot further.
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
NAME CATALOG AGE
3scale-operator Red Hat Operators 91m
advanced-cluster-management Red Hat Operators 91m
amq7-cert-manager Red Hat Operators 91m
...
couchbase-enterprise-certified Certified Operators 91m
crunchy-postgres-operator Certified Operators 91m
mongodb-enterprise Certified Operators 91m
...
etcd Community Operators 91m
jaeger Community Operators 91m
kubefed Community Operators 91m
...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
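For example, a quick way to list only the channel names of a package, assuming the default openshift-marketplace namespace, is a jsonpath query against the same object:
$ oc get packagemanifests <operator_name> -n openshift-marketplace \
    -o jsonpath='{range .status.channels[*]}{.name}{"\n"}{end}'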
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place.
However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: <operatorgroup_name>
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>
Operator Lifecycle Manager (OLM) creates a set of cluster roles for each Operator group. When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:
Example Subscription object
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: <subscription_name>
namespace: openshift-operators (1)
spec:
channel: <channel_name> (2)
name: <operator_name> (3)
source: redhat-operators (4)
sourceNamespace: openshift-marketplace (5)
config:
env: (6)
- name: ARGS
value: "-v=10"
envFrom: (7)
- secretRef:
name: license-secret
volumes: (8)
- name: <volume_name>
configMap:
name: <configmap_name>
volumeMounts: (9)
- mountPath: <directory_name>
name: <volume_name>
tolerations: (10)
- operator: "Exists"
resources: (11)
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
nodeSelector: (12)
foo: bar
1 | For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. |
2 | Name of the channel to subscribe to. |
3 | Name of the Operator to subscribe to. |
4 | Name of the catalog source that provides the Operator. |
5 | Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. |
6 | The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. |
7 | The envFrom parameter defines a list of sources to populate environment variables in the container. |
8 | The volumes parameter defines a list of volumes that must exist on the pod created by OLM. |
9 | The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. |
10 | The tolerations parameter defines a list of tolerations for the pod created by OLM. |
11 | The resources parameter defines resource constraints for all the containers in the pod created by OLM. |
12 | The nodeSelector parameter defines a node selector for the pod created by OLM. |
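The config section and the fields under it are optional. A minimal Subscription object, assuming the default Red Hat catalog, needs only the channel, name, source, and sourceNamespace fields:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators
spec:
  channel: <channel_name>
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace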
Create the Subscription object:
$ oc apply -f sub.yaml
At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
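You can also verify this from the CLI; the namespace shown here is illustrative and must match the namespace in your Subscription object. The CSV is installed when its PHASE column reports Succeeded:
$ oc get subscriptions -n openshift-operators
$ oc get csv -n openshift-operators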
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
Look up the available versions and channels of the Operator you want to install by running the following command:
$ oc describe packagemanifests <operator_name> -n <catalog_namespace>
For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub:
$ oc describe packagemanifests quay-operator -n openshift-marketplace
Name: quay-operator
Namespace:    openshift-marketplace
Labels: catalog=redhat-operators
catalog-namespace=openshift-marketplace
hypershift.openshift.io/managed=true
operatorframework.io/arch.amd64=supported
operatorframework.io/os.linux=supported
provider=Red Hat
provider-url=
Annotations: <none>
API Version: packages.operators.coreos.com/v1
Kind: PackageManifest
...
Current CSV: quay-operator.v3.7.11
...
Entries:
Name: quay-operator.v3.7.11
Version: 3.7.11
Name: quay-operator.v3.7.10
Version: 3.7.10
Name: quay-operator.v3.7.9
Version: 3.7.9
Name: quay-operator.v3.7.8
Version: 3.7.8
Name: quay-operator.v3.7.7
Version: 3.7.7
Name: quay-operator.v3.7.6
Version: 3.7.6
Name: quay-operator.v3.7.5
Version: 3.7.5
Name: quay-operator.v3.7.4
Version: 3.7.4
Name: quay-operator.v3.7.3
Version: 3.7.3
Name: quay-operator.v3.7.2
Version: 3.7.2
Name: quay-operator.v3.7.1
Version: 3.7.1
Name: quay-operator.v3.7.0
Version: 3.7.0
Name: stable-3.7
...
Current CSV: quay-operator.v3.8.5
...
Entries:
Name: quay-operator.v3.8.5
Version: 3.8.5
Name: quay-operator.v3.8.4
Version: 3.8.4
Name: quay-operator.v3.8.3
Version: 3.8.3
Name: quay-operator.v3.8.2
Version: 3.8.2
Name: quay-operator.v3.8.1
Version: 3.8.1
Name: quay-operator.v3.8.0
Version: 3.8.0
Name: stable-3.8
Default Channel: stable-3.8
Package Name: quay-operator
You can print an Operator’s version and channel information in the YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
--selector=catalog=<catalogsource_name> \
--field-selector metadata.name=<operator_name> \
-n <catalog_namespace> -o yaml
If you do not specify the Operator’s catalog, running the oc get packagemanifest or oc describe packagemanifests command might return a package from an unexpected catalog if more than one catalog in the namespace provides an Operator with the same name.
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.
However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one:
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: <operatorgroup_name>
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>
Operator Lifecycle Manager (OLM) creates a set of cluster roles for each Operator group. When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: quay-operator
namespace: quay
spec:
channel: stable-3.7
installPlanApproval: Manual (1)
name: quay-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: quay-operator.v3.7.10 (2)
1 | Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This setting prevents an automatic upgrade to a later version and requires manual approval of the install plan before the starting CSV can complete the installation. |
2 | Set a specific version of an Operator CSV. |
Create the Subscription object:
$ oc apply -f sub.yaml
Manually approve the pending install plan to complete the Operator installation.
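For example, you can approve the plan from the CLI by looking up the pending install plan and setting its spec.approved field to true; the install plan name is a placeholder that you look up first:
$ oc get installplans -n quay
$ oc patch installplan <install_plan_name> -n quay \
    --type merge --patch '{"spec":{"approved":true}}'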
As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".
In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.
All instances of the Operator you want to install must be the same version across a given cluster.
For more information on this and other limitations, see "Operators in multitenant clusters".
Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. For example, if the tenant’s namespace is team1, you might create a team1-operator namespace:
Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: team1-operator
Create the namespace by running the following command:
$ oc create -f team1-operator.yaml
Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:
Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: team1-operatorgroup
namespace: team1-operator
spec:
targetNamespaces:
- team1 (1)
1 | Define only the tenant’s namespace in the spec.targetNamespaces list. |
Create the Operator group by running the following command:
$ oc create -f team1-operatorgroup.yaml
Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.
After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account is visible or usable by the tenant.
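If you perform the installation with the CLI instead, create the Subscription object in the tenant Operator namespace. The following is a minimal sketch; the subscription name, channel, package name, and catalog are placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: team1-operator
spec:
  channel: <channel_name>
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace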
When installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see "Multitenancy and Operator colocation".
As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.
You have access to the cluster as a user with the cluster-admin role.
Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:
Define a Namespace resource and save the YAML file, for example, global-operators.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: global-operators
Create the namespace by running the following command:
$ oc create -f global-operators.yaml
Create a custom global Operator group, which is an Operator group that watches all namespaces:
Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: global-operatorgroup
namespace: global-operators
Create the Operator group by running the following command:
$ oc create -f global-operatorgroup.yaml
Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.
When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
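A minimal Subscription sketch for this installation; the subscription name, channel, package name, and catalog are placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: global-operators
spec:
  channel: <channel_name>
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace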
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
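For example, to add such a label directly to a node (the label key is illustrative):
$ oc label node <node_name> myoperator=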
If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
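For example, to add such a taint directly to a node:
$ oc adm taint node <node_name> myoperator:NoSchedule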
Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
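A minimal sketch of such a project, assuming the myoperator label and taint from the previous steps and that your cluster honors these annotations (adjust the keys to match your own):
apiVersion: v1
kind: Namespace
metadata:
  name: <project_name>
  annotations:
    openshift.io/node-selector: 'myoperator='
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key": "myoperator"}]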
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
Adding taints and tolerations manually to nodes or with compute machine sets
By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:
If an Operator requires a particular platform, such as amd64 or arm64
If an Operator requires a particular operating system, such as Linux or Windows
If you want Operators that work together scheduled on the same host or on hosts located on the same rack
If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues
You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.
The following examples show how to use node affinity, pod affinity, or pod anti-affinity to control where an instance of the Custom Metrics Autoscaler Operator is installed in the cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-163-94.us-west-2.compute.internal
#...
1 | A node affinity that requires the Operator’s pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal. |
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
- key: kubernetes.io/os
operator: In
values:
- linux
#...
1 | A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels. |
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- test
topologyKey: kubernetes.io/hostname
#...
1 | A pod affinity that places the Operator’s pod on a node that has pods with the app=test label. |
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAntiAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: cpu
operator: In
values:
- high
topologyKey: kubernetes.io/hostname
#...
1 | A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label. |
To control the placement of an Operator pod, complete the following steps:
Install the Operator as usual.
If needed, ensure that your nodes are labeled to properly respond to the affinity.
Edit the Operator Subscription object to add an affinity:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity: (1)
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-185-229.ec2.internal
#...
1 | Add a nodeAffinity , podAffinity , or podAntiAffinity . See the Additional resources section that follows for information about creating the affinity. |
To verify that the pod is deployed on the specific node, run the following command:
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>