A cluster service version (CSV), defined by a ClusterServiceVersion
object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework).
Having a CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or to publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle
subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.
Typically, the generate kustomize manifests
subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle
subcommand. However, the Operator SDK provides the make bundle
command, which automates several tasks, including running the following subcommands in order:
generate kustomize manifests
generate bundle
bundle validate
See Bundling an Operator for a full procedure that includes generating a bundle and CSV.
The make bundle
command creates the following files and directories in your Operator project:
A bundle manifests directory named bundle/manifests
that contains a ClusterServiceVersion
(CSV) object
A bundle metadata directory named bundle/metadata
All custom resource definitions (CRDs) in a config/crd
directory
A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
Role: Defines Operator permissions within a namespace.
ClusterRole: Defines cluster-wide Operator permissions.
Deployment: Defines how an Operand of an Operator is run in pods.
CustomResourceDefinition (CRD): Defines custom resources that your Operator reconciles.
Custom resource examples: Examples of resources adhering to the spec of a particular CRD.
The --version
flag for the generate bundle
subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.
By setting the VERSION
variable in your Makefile
, the --version
flag is automatically invoked using that value when the generate bundle
subcommand is run by the make bundle
command. The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions.
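For example, assuming a hypothetical project named memcached-operator with VERSION set to 0.0.2, the version information in the generated CSV might look like the following sketch. The names and versions are illustrative, and the replaces field is typically set by the author to point at the previous CSV in the upgrade graph:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: memcached-operator.v0.0.2   # CSV name embeds the bundle version
spec:
  version: 0.0.2                    # matches the VERSION value used by make bundle
  replaces: memcached-operator.v0.0.1   # previous CSV in the upgrade graph (illustrative)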
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.
The following tables detail which manually-defined CSV fields are required and which are optional.
| Field | Description |
|---|---|
| metadata.name | A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1. |
| metadata.annotations.capabilities | The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot. |
| spec.displayName | A public name to identify the Operator. |
| spec.description | A short description of the functionality of the Operator. |
| spec.keywords | Keywords describing the Operator. |
| spec.maintainers | Human or organizational entities maintaining the Operator, with a name and email. |
| spec.provider | The provider of the Operator (usually an organization), with a name. |
| spec.labels | Key-value pairs to be used by Operator internals. |
| spec.version | Semantic version of the Operator, for example 0.1.1. |
| spec.customresourcedefinitions | Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in the project. |
| Field | Description |
|---|---|
| spec.replaces | The name of the CSV being replaced by this CSV. |
| spec.links | URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url. |
| spec.selector | Selectors by which the Operator can pair resources in a cluster. |
| spec.icon | A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype. |
| spec.maturity | The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated. |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
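For reference, the following sketch shows where the manually-defined fields described above live in a CSV. All names and values are placeholders for illustration only:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
  annotations:
    capabilities: Basic Install
spec:
  displayName: App Operator
  description: Manages example App workloads on the cluster.
  keywords:
    - app
    - operator
  maintainers:
    - name: Example Maintainer
      email: maintainer@example.com
  provider:
    name: Example, Inc.
  version: 0.1.1
  maturity: alpha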
Operator developers can set certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub or the Red Hat Ecosystem Catalog. Operator metadata annotations are manually defined by setting the metadata.annotations
field in the CSV YAML file.
Annotations in the features.operators.openshift.io
group detail the infrastructure features that an Operator might support, specified by setting a "true"
or "false"
value. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog. These annotations are supported in OpenShift Container Platform 4.10 and later.
| Annotation | Description | Valid values[1] |
|---|---|---|
| features.operators.openshift.io/disconnected | Specify whether an Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. The Operator leverages the relatedImages field of its cluster service version (CSV) so that all images required for mirroring are listed. | "true" or "false" |
| features.operators.openshift.io/fips-compliant | Specify whether an Operator accepts the FIPS-140 configuration of the underlying platform and works on nodes that are booted into FIPS mode. In this mode, the Operator and any workloads it manages (operands) are solely calling the Red Hat Enterprise Linux (RHEL) cryptographic library submitted for FIPS-140 validation. | "true" or "false" |
| features.operators.openshift.io/proxy-aware | Specify whether an Operator supports running on a cluster behind a proxy by accepting the standard HTTP_PROXY and HTTPS_PROXY proxy environment variables. If applicable, the Operator passes this information to the workloads it manages (operands). | "true" or "false" |
| features.operators.openshift.io/tls-profiles | Specify whether an Operator implements well-known tunables to modify the TLS cipher suite used by the Operator and, if applicable, any of the workloads it manages (operands). | "true" or "false" |
| features.operators.openshift.io/token-auth-aws | Specify whether an Operator supports configuration for tokenized authentication with AWS APIs via AWS Secure Token Service (STS) by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/token-auth-azure | Specify whether an Operator supports configuration for tokenized authentication with Azure APIs via Azure Managed Identity by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/token-auth-gcp | Specify whether an Operator supports configuration for tokenized authentication with Google Cloud APIs via GCP Workload Identity Foundation (WIF) by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/cnf | Specify whether an Operator provides a Cloud-Native Network Function (CNF) Kubernetes plugin. | "true" or "false" |
| features.operators.openshift.io/cni | Specify whether an Operator provides a Container Network Interface (CNI) Kubernetes plugin. | "true" or "false" |
| features.operators.openshift.io/csi | Specify whether an Operator provides a Container Storage Interface (CSI) Kubernetes plugin. | "true" or "false" |
Valid values are shown intentionally with double quotes, because Kubernetes annotations must be strings.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
features.operators.openshift.io/disconnected: "true"
features.operators.openshift.io/fips-compliant: "false"
features.operators.openshift.io/proxy-aware: "false"
features.operators.openshift.io/tls-profiles: "false"
features.operators.openshift.io/token-auth-aws: "false"
features.operators.openshift.io/token-auth-azure: "false"
features.operators.openshift.io/token-auth-gcp: "false"
Starting in OpenShift Container Platform 4.14, the operators.openshift.io/infrastructure-features group of annotations is deprecated in favor of the group of annotations in the features.operators.openshift.io namespace. While you are encouraged to use the newer annotations, both groups are currently accepted when used in parallel.
These annotations detail the infrastructure features that an Operator supports. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog.
| Valid annotation values | Description |
|---|---|
| disconnected | Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator. |
| cnf | Operator provides a Cloud-native Network Functions (CNF) Kubernetes plugin. |
| cni | Operator provides a Container Network Interface (CNI) Kubernetes plugin. |
| csi | Operator provides a Container Storage Interface (CSI) Kubernetes plugin. |
| fips | Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode. |
| proxy-aware | Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY. |
Example CSV with disconnected and proxy-aware support
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
The following Operator annotations are optional.
| Annotation | Description |
|---|---|
| alm-examples | Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
| operatorframework.io/initialization-resource | Specify a single required custom resource that the user is prompted to create during Operator installation. The annotation must include a template with a complete YAML definition of the resource. |
| operatorframework.io/suggested-namespace | Set a suggested namespace where the Operator should be deployed. |
| operatorframework.io/suggested-namespace-template | Set a manifest for a Namespace object to be used as the suggested namespace, for example to include a default node selector. |
| operators.openshift.io/valid-subscription | Free-form array for listing any specific subscriptions that are required to use the Operator. For example, '["3Scale Commercial License", "Red Hat Managed Integration"]'. |
| operators.operatorframework.io/internal-objects | Hides CRDs in the UI that are not meant for user manipulation. |
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.
Replace hard-coded image references with environment variables.
In the cluster service version (CSV) of your Operator:
List any related images, or other container images that your Operator might require to perform their functions.
Reference all specified images by a digest (SHA) and not by a tag.
All dependencies of your Operator must also support running in a disconnected mode.
Your Operator must not require any off-cluster resources.
An Operator project with a CSV. The following procedure uses the Memcached Operator as an example for Go-, Ansible-, and Helm-based projects.
Set an environment variable for the additional image references used by the Operator in the config/manager/manager.yaml
file:
Example config/manager/manager.yaml file
...
spec:
...
spec:
...
containers:
- command:
- /manager
...
env:
- name: <related_image_environment_variable> (1)
value: "<related_image_reference_with_tag>" (2)
1 | Define the environment variable, such as RELATED_IMAGE_MEMCACHED . |
2 | Set the related image reference and tag, such as docker.io/memcached:1.4.36-alpine . |
Replace hard-coded image references with environment variables in the relevant file for your Operator project type:
For Go-based Operator projects, add the environment variable to the controllers/memcached_controller.go
file as shown in the following example:
Example controllers/memcached_controller.go file
// deploymentForMemcached returns a memcached Deployment object
...
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
- Image: "memcached:1.4.36-alpine", (1)
+ Image: os.Getenv("<related_image_environment_variable>"), (2)
Name: "memcached",
Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
Ports: []corev1.ContainerPort{{
...
1 | Delete the image reference and tag. |
2 | Use the os.Getenv function to call the <related_image_environment_variable> . |
For Ansible-based Operator projects, add the environment variable to the roles/memcached/tasks/main.yml
file as shown in the following example:
Example roles/memcached/tasks/main.yml file
spec:
containers:
- name: memcached
command:
- memcached
- -m=64
- -o
- modern
- -v
- image: "docker.io/memcached:1.4.36-alpine" (1)
+ image: "{{ lookup('env', '<related_image_environment_variable>') }}" (2)
ports:
- containerPort: 11211
...
1 | Delete the image reference and tag. |
2 | Use the lookup function to call the <related_image_environment_variable> . |
For Helm-based Operator projects, add the overrideValues
field to the watches.yaml
file as shown in the following example:
Example watches.yaml file
...
- group: demo.example.com
version: v1alpha1
kind: Memcached
chart: helm-charts/memcached
overrideValues: (1)
relatedImage: ${<related_image_environment_variable>} (2)
1 | Add the overrideValues field. |
2 | Define the overrideValues field by using the <related_image_environment_variable> , such as RELATED_IMAGE_MEMCACHED . |
Add the value of the overrideValues
field to the helm-charts/memcached/values.yaml
file as shown in the following example:
Example helm-charts/memcached/values.yaml file
...
relatedImage: ""
Edit the chart template in the helm-charts/memcached/templates/deployment.yaml
file as shown in the following example:
Example helm-charts/memcached/templates/deployment.yaml file
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
env: (1)
- name: related_image (2)
value: "{{ .Values.relatedImage }}" (3)
1 | Add the env field. |
2 | Name the environment variable. |
3 | Define the value of the environment variable. |
Add the BUNDLE_GEN_FLAGS
variable definition to your Makefile
with the following changes:
Makefile
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
BUNDLE_GEN_FLAGS += --use-image-digests
endif
...
- $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS) (1)
+ $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS) (2)
...
1 | Delete this line in the Makefile . |
2 | Replace the line above with this line. |
To update your Operator image to use a digest (SHA) and not a tag, run the make bundle
command and set USE_IMAGE_DIGESTS
to true
:
$ make bundle USE_IMAGE_DIGESTS=true
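When image digests are in use, related images can also be listed in the CSV by digest. The following is a minimal, hypothetical sketch of a relatedImages entry; the entry name and digest value are placeholders:
spec:
  relatedImages:
  - name: memcached
    image: docker.io/memcached@sha256:<digest>   # reference by digest, not by tag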
Add the disconnected
annotation, which indicates that the Operator works in a disconnected environment:
metadata:
annotations:
operators.openshift.io/infrastructure-features: '["disconnected"]'
Operators can be filtered in OperatorHub by this infrastructure feature.
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
operatorframework.io/arch.<arch>: supported (1)
operatorframework.io/os.<os>: supported (2)
1 | Set <arch> to a supported string. |
2 | Set <os> to a supported string. |
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API. |
If a CSV does not include an os
label, it is treated as if it has the following Linux support label by default:
labels:
operatorframework.io/os.linux: supported
If a CSV does not include an arch
label, it is treated as if it has the following AMD64 support label by default:
labels:
operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
An Operator project with a CSV.
To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Add a label in the metadata.labels
of your CSV for each supported architecture and operating system that your Operator supports:
labels:
operatorframework.io/arch.s390x: supported
operatorframework.io/os.zos: supported
operatorframework.io/os.linux: supported (1)
operatorframework.io/arch.amd64: supported (1)
1 | After you add a new architecture or operating system, you must also now include the default os.linux and arch.amd64 variants explicitly. |
See the Image Manifest V 2, Schema 2 specification for more information on manifest lists.
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
| Architecture | String |
|---|---|
| AMD64 | amd64 |
| ARM64 | arm64 |
| IBM Power® | ppc64le |
| IBM Z® | s390x |
| Operating system | String |
|---|---|
| Linux | linux |
| z/OS | zos |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.
As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When the Operator is added to a cluster by using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
In your CSV, set the operatorframework.io/suggested-namespace
annotation to your suggested namespace:
metadata:
annotations:
operatorframework.io/suggested-namespace: <namespace> (1)
1 | Set your suggested namespace. |
Some Operators expect to run only on control plane nodes, which the Operator itself can enforce by setting a nodeSelector in the Pod spec of its workloads.
To avoid a duplicated and potentially conflicting cluster-wide default nodeSelector, you can set a default node selector on the namespace where the Operator runs. The default node selector takes precedence over the cluster default, so the cluster default is not applied to pods in the Operator's namespace.
When adding the Operator to a cluster using OperatorHub, the web console auto-populates the suggested namespace for the cluster administrator during the installation process. The suggested namespace is created using the namespace manifest in YAML which is included in the cluster service version (CSV).
In your CSV, set the operatorframework.io/suggested-namespace-template
with a manifest for a Namespace
object. The following sample is a manifest for an example Namespace
with the namespace default node selector specified:
metadata:
annotations:
operatorframework.io/suggested-namespace-template: (1)
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "vertical-pod-autoscaler-suggested-template",
"annotations": {
"openshift.io/node-selector": ""
}
}
}
1 | Set your suggested namespace. |
If both the operatorframework.io/suggested-namespace and operatorframework.io/suggested-namespace-template annotations are set in the CSV, the operatorframework.io/suggested-namespace-template annotation takes precedence. |
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition
custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition
custom resource (CR), the behavior of OLM changes accordingly.
To support Operator conditions, an Operator must be able to read the OperatorCondition
CR created by OLM and have the ability to complete the following tasks:
Get the specific condition.
Set the status of a specific condition.
This can be accomplished by using the operator-lib
library. An Operator author can provide a controller-runtime
client in their Operator for the library to access the OperatorCondition
CR owned by the Operator in the cluster.
The library provides a generic Conditions
interface, which has the following methods to Get
and Set
a conditionType
in the OperatorCondition
CR:
Get
To get the specific condition, the library uses the client.Get
function from controller-runtime
, which requires an ObjectKey
of type types.NamespacedName
present in conditionAccessor
.
Set
To update the status of the specific condition, the library uses the client.Update
function from controller-runtime
. An error occurs if the conditionType
is not present in the CRD.
The Operator is allowed to modify only the status
subresource of the CR. Operators can either delete or update the status.conditions
array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream Condition GoDocs.
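As an illustration only, a condition entry written to the status.conditions array might look like the following sketch. The resource name, namespace, reason, and message are placeholders, and the exact apiVersion of the OperatorCondition CR depends on the OLM version in the cluster:
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: memcached-operator
  namespace: memcached-operator-system
status:
  conditions:
  - type: Upgradeable                 # condition type recognized by OLM
    status: "False"                   # "False" prevents OLM from upgrading the Operator
    reason: "DataMigrationInProgress"
    message: "The Operator is migrating data and cannot be upgraded yet."
    lastTransitionTime: "2024-01-01T00:00:00Z"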
Operator SDK 1.36.1 supports operator-lib v0.11.0. |
An Operator project generated using the Operator SDK.
To enable Operator conditions in your Operator project:
In the go.mod
file of your Operator project, add operator-framework/operator-lib
as a required library:
module github.com/example-inc/memcached-operator
go 1.19
require (
k8s.io/apimachinery v0.26.0
k8s.io/client-go v0.26.0
sigs.k8s.io/controller-runtime v0.14.1
github.com/operator-framework/operator-lib v0.11.0
)
Write your own constructor in your Operator logic that will result in the following outcomes:
Accepts a controller-runtime
client.
Accepts a conditionType
.
Returns a Condition
interface to update or add conditions.
Because OLM currently supports the Upgradeable
condition, you can create an interface that has methods to access the Upgradeable
condition. For example:
import (
...
apiv1 "github.com/operator-framework/api/pkg/operators/v1"
)
func NewUpgradeable(cl client.Client) (Condition, error) {
	return NewCondition(cl, apiv1.OperatorUpgradeable)
}
cond, err := NewUpgradeable(cl)
In this example, the NewUpgradeable
constructor is further used to create a variable cond
of type Condition
. The cond
variable would in turn have Get
and Set
methods, which can be used for handling the OLM Upgradeable
condition.
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a webhookdefinitions
section to define the following types of webhooks:
Admission webhooks (validating and mutating)
Conversion webhooks
Add a webhookdefinitions
section to the spec
section of the CSV of your Operator and include any webhook definitions using a type
of ValidatingAdmissionWebhook
, MutatingAdmissionWebhook
, or ConversionWebhook
. The following example contains all three types of webhooks:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
name: webhook-operator.v0.0.1
spec:
customresourcedefinitions:
owned:
- kind: WebhookTest
name: webhooktests.webhook.operators.coreos.io (1)
version: v1
install:
spec:
deployments:
- name: webhook-operator-webhook
...
...
...
strategy: deployment
installModes:
- supported: false
type: OwnNamespace
- supported: false
type: SingleNamespace
- supported: false
type: MultiNamespace
- supported: true
type: AllNamespaces
webhookdefinitions:
- type: ValidatingAdmissionWebhook (2)
admissionReviewVersions:
- v1beta1
- v1
containerPort: 443
targetPort: 4343
deploymentName: webhook-operator-webhook
failurePolicy: Fail
generateName: vwebhooktest.kb.io
rules:
- apiGroups:
- webhook.operators.coreos.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- webhooktests
sideEffects: None
webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
- type: MutatingAdmissionWebhook (3)
admissionReviewVersions:
- v1beta1
- v1
containerPort: 443
targetPort: 4343
deploymentName: webhook-operator-webhook
failurePolicy: Fail
generateName: mwebhooktest.kb.io
rules:
- apiGroups:
- webhook.operators.coreos.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- webhooktests
sideEffects: None
webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
- type: ConversionWebhook (4)
admissionReviewVersions:
- v1beta1
- v1
containerPort: 443
targetPort: 4343
deploymentName: webhook-operator-webhook
generateName: cwebhooktest.kb.io
sideEffects: None
webhookPath: /convert
conversionCRDs:
- webhooktests.webhook.operators.coreos.io (5)
...
1 | The CRDs targeted by the conversion webhook must exist here. |
2 | A validating admission webhook. |
3 | A mutating admission webhook. |
4 | A conversion webhook. |
5 | The spec.preserveUnknownFields property of each CRD must be set to false or nil. |
When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:
The type
field must be set to either ValidatingAdmissionWebhook
, MutatingAdmissionWebhook
, or ConversionWebhook
, or the CSV will be placed in a failed phase.
The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName
field of the webhookdefinition
.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:
The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt
.
The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key
.
To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:
Requests that target all groups
Requests that target the operators.coreos.com
group
Requests that target the ValidatingWebhookConfigurations
or MutatingWebhookConfigurations
resources
OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:
CSVs featuring a conversion webhook can only support the AllNamespaces
install mode.
The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil, as shown in the sketch after this list.
The conversion webhook defined in the CSV must target an owned CRD.
There can only be one conversion webhook on the entire cluster for a given CRD.
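The following is a minimal sketch of the relevant portion of a CRD targeted by the conversion webhook from the earlier example; only the fields related to the preserveUnknownFields constraint are shown:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webhooktests.webhook.operators.coreos.io
spec:
  group: webhook.operators.coreos.io
  preserveUnknownFields: false
...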
There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
| Field | Description | Required/optional |
|---|---|---|
| Name | The full name of your CRD. | Required |
| Version | The version of that object API. | Required |
| Kind | The machine readable name of your CRD. | Required |
| DisplayName | A human readable version of your CRD name, for example MongoDB Standalone. | Required |
| Description | A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
| Group | The API group that this CRD belongs to, for example mongodb.com. | Optional |
| Resources | Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| SpecDescriptors, StatusDescriptors, and ActionDescriptors | These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors: spec descriptors, status descriptors, and action descriptors, which reference fields in the spec block, fields in the status block, and actions that can be performed on an object, respectively. All descriptors accept the following fields: DisplayName, Description, Path, and X-Descriptors. Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone
CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:
- displayName: MongoDB Standalone
group: mongodb.com
kind: MongoDbStandalone
name: mongodbstandalones.mongodb.com
resources:
- kind: Service
name: ''
version: v1
- kind: StatefulSet
name: ''
version: v1beta2
- kind: Pod
name: ''
version: v1
- kind: ConfigMap
name: ''
version: v1
specDescriptors:
- description: Credentials for Ops Manager or Cloud Manager.
displayName: Credentials
path: credentials
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
- description: Project this deployment belongs to.
displayName: Project
path: project
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
- description: MongoDB version to be installed.
displayName: Version
path: version
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:label'
statusDescriptors:
- description: The status of each of the pods for the MongoDB cluster.
displayName: Pod Status
path: pods
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
version: v1
description: >-
MongoDB Deployment consisting of only one host. No replication of
data.
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account created for each Operator to create, watch, and modify the Kubernetes resources required.
| Field | Description | Required/optional |
|---|---|---|
| Name | The full name of the CRD you require. | Required |
| Version | The version of that object API. | Required |
| Kind | The Kubernetes object kind. | Required |
| DisplayName | A human readable version of the CRD. | Required |
| Description | A summary of how the component fits in your larger architecture. | Required |
required:
- name: etcdclusters.etcd.database.coreos.com
version: v1beta2
kind: EtcdCluster
displayName: etcd Cluster
description: Represents a cluster of etcd nodes.
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
All existing serving versions in the current CRD are present in the new CRD.
All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
To add a new version of a CRD to your Operator:
Add a new entry in the CRD resource under the versions
section of your CSV.
For example, if the current CRD has a version v1alpha1
and you want to add a new version v1beta1
and mark it as the new storage version, add a new entry for v1beta1
:
versions:
- name: v1alpha1
served: true
storage: false
- name: v1beta1 (1)
served: true
storage: true
1 | New entry. |
Ensure the referencing version of the CRD in the owned
section of your CSV is updated if the CSV intends to use the new version:
customresourcedefinitions:
owned:
- name: cluster.example.com
version: v1beta1 (1)
kind: cluster
displayName: Cluster
1 | Update the version . |
Push the updated CRD and CSV to your bundle.
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served
field in the CRD to false
. Then, the non-serving version can be removed on the subsequent CRD upgrade.
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:
versions:
- name: v1alpha1
served: false (1)
storage: true
1 | Set to false . |
Switch the storage
version to a serving version if the version to be deprecated is currently the storage
version. For example:
versions:
- name: v1alpha1
served: false
storage: false (1)
- name: v1beta1
served: true
storage: true (1)
1 | Update the storage fields accordingly. |
To remove a specific version that is or was the storage version from a CRD, that version must first be removed from the storedVersions field in the status of the CRD. |
Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:
versions:
- name: v1beta1
served: true
storage: true
Ensure the referencing CRD version in the owned
section of your CSV is updated accordingly if that version is removed from the CRD.
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples
. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata
and spec
of the Kubernetes object.
The following full example provides templates for EtcdCluster
, EtcdBackup
and EtcdRestore
:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"<operator_namespace>"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication
CRD that is created whenever a user creates a Database object with replication: true
.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects
annotation to the cluster service version (CSV) of your Operator.
Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec
block of your CR, if applicable to your Operator.
Add the operators.operatorframework.io/internal-objects
annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
name: my-operator-v1.2.3
annotations:
operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' (1)
...
1 | Set any internal CRDs as an array of strings. |
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding the operatorframework.io/initialization-resource annotation
to the cluster service version (CSV) during Operator installation. Users are then prompted to create the custom resource through a template that is provided in the CSV.
The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Add the operatorframework.io/initialization-resource
annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster
resource and provides a full YAML definition:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
name: my-operator-v1.2.3
annotations:
operatorframework.io/initialization-resource: |-
{
"apiVersion": "ocs.openshift.io/v1",
"kind": "StorageCluster",
"metadata": {
"name": "example-storagecluster"
},
"spec": {
"manageNodes": false,
"monPVCTemplate": {
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "10Gi"
}
},
"storageClassName": "gp2"
}
},
"storageDeviceSets": [
{
"count": 3,
"dataPVCTemplate": {
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Ti"
}
},
"storageClassName": "gp2",
"volumeMode": "Block"
}
},
"name": "example-deviceset",
"placement": {},
"portable": true,
"resources": {}
}
]
}
}
...
As with CRDs, there are two types of API services that your Operator may use: owned and required.
When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server
that backs it and the group/version/kind (GVK) it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
| Field | Description | Required/optional |
|---|---|---|
| Group | Group that the API service provides, for example database.example.com. | Required |
| Version | Version of the API service, for example v1alpha1. | Required |
| Kind | A kind that the API service is expected to provide. | Required |
| Name | The plural name for the API service provided. | Required |
| DeploymentName | Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the install strategy of the CSV for a Deployment spec with a matching name and, if not found, does not transition the CSV to the install ready phase. | Required |
| DisplayName | A human readable version of your API service name, for example MongoDB Standalone. | Required |
| Description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
| Resources | Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| SpecDescriptors, StatusDescriptors, and ActionDescriptors | Essentially the same as for owned CRDs. | Optional |
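As a reference, the following sketch shows how an owned API service might be declared in a CSV. The group, kind, and deployment names are hypothetical placeholders, not values taken from a real project:
spec:
  apiservicedefinitions:
    owned:
    - group: packages.example.com
      version: v1alpha1
      kind: PackageManifest
      name: packagemanifests
      deploymentName: example-apiserver   # must match a deployment defined in the CSV
      displayName: Package Manifest
      description: Aggregated API that serves package metadata.
      resources:
      - kind: Service
        name: ''
        version: v1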
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
Service pod selectors are copied from the CSV deployment matching the DeploymentName
field of the API service description.
A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service
resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.
The certificate is stored as a type kubernetes.io/tls
secret in the deployment namespace, and a volume named apiservice-cert
is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName
field of the API service description.
If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates
and any existing volume mounts with the same path are replaced.
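If a custom certificate path is needed, a deployment in the CSV can define the volume mount itself by using the expected name. The following sketch is illustrative; the deployment and container names are placeholders:
spec:
  install:
    spec:
      deployments:
      - name: example-apiserver
        spec:
          template:
            spec:
              containers:
              - name: example-apiserver
                volumeMounts:
                - name: apiservice-cert              # expected volume name appended by OLM
                  mountPath: /custom/path/certificates   # custom path instead of the default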
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
| Field | Description | Required/optional |
|---|---|---|
| Group | Group that the API service provides, for example database.example.com. | Required |
| Version | Version of the API service, for example v1alpha1. | Required |
| Kind | A kind that the API service is expected to provide. | Required |
| DisplayName | A human readable version of your API service name, for example MongoDB Standalone. | Required |
| Description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
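For reference, a required API service might be declared in a CSV as in the following hypothetical sketch; the group, kind, and description are placeholders:
spec:
  apiservicedefinitions:
    required:
    - group: metrics.example.com
      version: v1beta1
      kind: MetricValue
      displayName: Metric Value
      description: Metrics API that this Operator consumes for scaling decisions.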