A ClusterServiceVersion (CSV) is a YAML manifest created from Operator metadata that assists the Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information like its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which Custom Resources (CRs) it manages or depends on.
The Operator SDK includes the generate csv subcommand to generate a CSV for the current Operator project, customized using information contained in manually defined YAML manifests and Operator source files.
A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is designed so that its update system can be easily extended to handle new CSV features going forward.
The CSV version is the same as the Operator’s, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version flag to have their Operator’s state encapsulated in a CSV with the supplied semantic version:
$ operator-sdk generate csv --csv-version <version>
This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name.
An Operator project’s deploy/ directory is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/ to write a CSV. The following command:
$ operator-sdk generate csv --csv-version <version>
writes a CSV YAML file to the deploy/olm-catalog/ directory by default.
Exactly three types of manifests are required to generate a CSV:
operator.yaml
*_{crd,cr}.yaml
RBAC role files, for example role.yaml
Operator authors may have different versioning requirements for these files and can configure which specific files are included in the deploy/olm-catalog/csv-config.yaml file.
Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the generate csv subcommand either:
Creates a new CSV, with the same location and naming convention as exists currently, using available data in YAML manifests and source files.
The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.
The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
After the search completes, every cache field populated is written back to a CSV YAML file.
or:
Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.
The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are marshaled into a ClusterServiceVersion cache.
The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
After the search completes, every cache field populated is written back to a CSV YAML file.
Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.
Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml file:
Field | Description |
---|---|
operator-path | The Operator resource manifest file path. Defaults to deploy/operator.yaml. |
crd-cr-paths | A list of CRD and CR manifest file paths. Defaults to [deploy/crds/*_{crd,cr}.yaml]. |
role-paths | A list of RBAC role manifest file paths. Defaults to [deploy/role.yaml]. |
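For reference, a csv-config.yaml that overrides these defaults might look like the following sketch; the manifest paths shown are assumptions for a typical project layout, not requirements:
operator-path: deploy/app-operator.yaml
crd-cr-paths:
- deploy/crds/app_v1alpha1_appservice_crd.yaml
- deploy/crds/app_v1alpha1_appservice_cr.yaml
role-paths:
- deploy/role.yaml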
Many CSV fields cannot be populated using generated, non-SDK-specific manifests. These fields are mostly human-written, English metadata about the Operator and various Custom Resource Definitions (CRDs).
Operator authors must directly modify their CSV YAML file, adding personalized data to the following required fields. The Operator SDK emits a warning during CSV generation when it detects that any of the required fields lack data.
Field | Description |
---|---|
metadata.name | A unique name for this CSV. The Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1. |
metadata.capabilities | The Operator’s capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot. |
spec.displayName | A public name to identify the Operator. |
spec.description | A short description of the Operator’s functionality. |
spec.keywords | Keywords describing the Operator. |
spec.maintainers | Human or organizational entities maintaining the Operator, with a name and email. |
spec.provider | The Operator’s provider (usually an organization), with a name. |
spec.labels | Key-value pairs to be used by Operator internals. |
spec.version | Semantic version of the Operator, for example 0.1.1. |
spec.customresourcedefinitions | Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. |
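For illustration, the required fields of a minimal CSV might be populated as follows. This sketch assumes a hypothetical app-operator project; substitute your own names and values:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1 # includes the Operator version for uniqueness
spec:
  displayName: App Operator
  description: Manages the lifecycle of the App application.
  keywords:
  - app
  - operator
  maintainers:
  - name: Example Maintainers # hypothetical maintainer entity
    email: maintainers@example.com
  provider:
    name: Example, Inc. # hypothetical provider
  version: 0.1.1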
The following fields are optional:
Field | Description |
---|---|
spec.replaces | The name of the CSV being replaced by this CSV. |
spec.links | URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url. |
spec.selector | Selectors by which the Operator can pair resources in a cluster. |
spec.icon | A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype. |
spec.maturity | The level of maturity the software has achieved at this version. Options include planning, prealpha, alpha, beta, stable, mature, deprecated, and inactive. |
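Similarly, the optional fields might be set as shown in this sketch, which assumes a hypothetical previous version app-operator.v0.1.0 and placeholder icon data:
spec:
  replaces: app-operator.v0.1.0 # hypothetical previous CSV name
  links:
  - name: Documentation
    url: https://example.com/docs # placeholder URL
  icon:
  - base64data: <base64-encoded-image>
    mediatype: image/png
  maturity: alpha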
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code; such Operator SDK functionality will be addressed in a future design document.
An Operator project generated using the Operator SDK
In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.
Generate the CSV:
$ operator-sdk generate csv --csv-version <version>
In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually defined fields are set appropriately.
As an Operator author, your CSV must meet the following additional requirements for your Operator to run properly in a restricted network environment:
List any related images, or other container images that your Operator might require to perform its functions.
Reference all specified images by a digest (SHA) and not by a tag.
You must use SHA references to related images in two places in the Operator’s CSV:
in spec.relatedImages:
...
spec:
relatedImages: (1)
- name: etcd-operator (2)
image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
- name: etcd-image
image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
...
1 | Create a relatedImages section and set the list of related images. |
2 | Specify a unique identifier for the image. |
3 | Specify each image by a digest (SHA), not by an image tag. |
in the env section of the Operator’s Deployments when declaring environment variables that inject the image that the Operator should use:
spec:
install:
spec:
deployments:
- name: etcd-operator-v3.1.1
spec:
replicas: 1
selector:
matchLabels:
name: etcd-operator
strategy:
type: Recreate
template:
metadata:
labels:
name: etcd-operator
spec:
containers:
- args:
- /opt/etcd/bin/etcd_operator_run.sh
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.annotations['olm.targetNamespaces']
- name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE (1)
value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 (2)
- name: ETCD_LOG_LEVEL
value: INFO
image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /healthy
port: 8080
initialDelaySeconds: 10
periodSeconds: 30
name: etcd-operator
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 10
periodSeconds: 30
resources: {}
serviceAccountName: etcd-operator
strategy: deployment
1 | Inject the images referenced by the Operator via environment variables. |
2 | Specify each image by a digest (SHA), not by an image tag. |
3 | Also reference the Operator container image by a digest (SHA), not by an image tag. |
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to your CSV to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
operatorframework.io/arch.<arch>: supported (1)
operatorframework.io/os.<os>: supported (2)
1 | Set <arch> to a supported string. |
2 | Set <os> to a supported string. |
Only the labels on the channel head of the default channel are considered for filtering PackageManifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:
labels:
operatorframework.io/os.linux: supported
If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:
labels:
operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
An Operator project with a CSV.
To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Add a label in your CSV’s metadata.labels for each architecture and operating system that your Operator supports:
labels:
operatorframework.io/arch.s390x: supported
operatorframework.io/os.zos: supported
operatorframework.io/os.linux: supported (1)
operatorframework.io/arch.amd64: supported (1)
1 | After you add a new architecture or operating system, you must also explicitly include the default os.linux and arch.amd64 variants. |
See the Image Manifest V2, Schema 2 specification for more information on manifest lists.
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
Architecture | String |
---|---|
AMD64 | amd64 |
64-bit PowerPC little-endian | ppc64le |
IBM Z | s390x |
Operating system | String |
---|---|
Linux | linux |
z/OS | zos |
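For example, labeled Operators can be filtered in the PackageManifest API with a standard label selector. The following command is a sketch using the IBM Z architecture label shown earlier:
$ oc get packagemanifests -l operatorframework.io/arch.s390x=supported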
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, in order to work properly. If resolved from a Subscription, OLM defaults the namespaced resources of an Operator to the namespace of its Subscription.
As an Operator author, you can instead express a desired target namespace as part of your CSV to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:
metadata:
annotations:
operatorframework.io/suggested-namespace: <namespace> (1)
1 | Set your suggested namespace. |
There are two types of Custom Resource Definitions (CRDs) that your Operator may use: ones that are owned by it and ones that it depends on, which are required.
The CRDs owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of ReplicaSets in another. Each one should be listed out in the CSV file.
Field | Description | Required/Optional |
---|---|---|
Name | The full name of your CRD. | Required |
Version | The version of that object API. | Required |
Kind | The machine readable name of your CRD. | Required |
DisplayName | A human readable version of your CRD name, for example MongoDB Standalone. | Required |
Description | A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
Group | The API group that this CRD belongs to, for example database.example.com. | Optional |
Resources | Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here. | Optional |
Descriptors | Descriptors are a way to hint UIs about certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a Secret or ConfigMap that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors: SpecDescriptors, StatusDescriptors, and ActionDescriptors, which reference fields in the spec block, fields in the status block, and actions that can be performed on an object, respectively. All descriptors accept the fields DisplayName, Description, Path, and X-Descriptors. Also see the openshift/console project for more information on descriptors in general. | Optional |
The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a Secret and ConfigMap, and orchestrates Services, StatefulSets, Pods, and ConfigMaps:
- displayName: MongoDB Standalone
group: mongodb.com
kind: MongoDbStandalone
name: mongodbstandalones.mongodb.com
resources:
- kind: Service
name: ''
version: v1
- kind: StatefulSet
name: ''
version: v1beta2
- kind: Pod
name: ''
version: v1
- kind: ConfigMap
name: ''
version: v1
specDescriptors:
- description: Credentials for Ops Manager or Cloud Manager.
displayName: Credentials
path: credentials
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
- description: Project this deployment belongs to.
displayName: Project
path: project
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
- description: MongoDB version to be installed.
displayName: Version
path: version
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:label'
statusDescriptors:
- description: The status of each of the pods for the MongoDB cluster.
displayName: Pod Status
path: pods
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
version: v1
description: >-
MongoDB Deployment consisting of only one host. No replication of
data.
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
The Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account is created for each Operator so that it can create, watch, and modify the Kubernetes resources required.
Field | Description | Required/Optional |
---|---|---|
Name | The full name of the CRD you require. | Required |
Version | The version of that object API. | Required |
Kind | The Kubernetes object kind. | Required |
DisplayName | A human readable version of the CRD. | Required |
Description | A summary of how the component fits in your larger architecture. | Required |
required:
- name: etcdclusters.etcd.database.coreos.com
version: v1beta2
kind: EtcdCluster
displayName: etcd Cluster
description: Represents a cluster of etcd nodes.
Users of your Operator will need to be aware of which options are required versus optional. You can provide templates for each of your Custom Resource Definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata and spec of the Kubernetes object.
The following full example provides templates for EtcdCluster, EtcdBackup, and EtcdRestore:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
It is common practice for Operators to use Custom Resource Definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.
If any CRDs are not meant for manipulation by users, they can be hidden in the user interface using the operators.operatorframework.io/internal-objects annotation in the Operator’s ClusterServiceVersion (CSV):
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
name: my-operator-v1.2.3
annotations:
operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' (1)
...
1 | Set any internal CRDs as an array of strings. |
Before marking one of your CRDs as internal, make sure that any debugging information or configuration that might be required to manage the application is reflected on the CR’s status or spec block, if applicable to your Operator.
As with CRDs, there are two types of APIServices that your Operator may use: owned and required.
When a CSV owns an APIService, it is responsible for describing the deployment of the extension api-server that backs it and the group-version-kinds it provides.
An APIService is uniquely identified by the group-version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
Field | Description | Required/Optional |
---|---|---|
Group | Group that the APIService provides, for example database.example.com. | Required |
Version | Version of the APIService, for example v1alpha1. | Required |
Kind | A kind that the APIService is expected to provide. | Required |
Name | The plural name for the APIService provided. | Required |
DeploymentName | Name of the deployment defined by your CSV that corresponds to your APIService (required for owned APIServices). During the CSV pending phase, the OLM Operator searches your CSV’s InstallStrategy for a deployment spec with a matching name, and if not found, does not transition the CSV to the install ready phase. | Required |
DisplayName | A human readable version of your APIService name, for example MongoDB Standalone. | Required |
Description | A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService. | Required |
Resources | Your APIServices own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here. | Optional |
Descriptors | Essentially the same as for owned CRDs. | Optional |
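For reference, an owned APIService entry in a CSV might look like the following sketch, which assumes a hypothetical ExampleDB kind backed by a deployment named example-apiserver defined elsewhere in the CSV:
spec:
  apiservicedefinitions:
    owned:
    - group: database.example.com # hypothetical API group
      version: v1alpha1
      kind: ExampleDB # hypothetical kind
      name: exampledbs
      deploymentName: example-apiserver # must match a deployment in the InstallStrategy
      displayName: Example DB
      description: A database kind served by a hypothetical extension api-server.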
The Operator Lifecycle Manager (OLM) is responsible for creating or replacing the Service and APIService resources for each unique owned APIService:
Service Pod selectors are copied from the CSV deployment matching the APIServiceDescription’s DeploymentName.
A new CA key/cert pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective APIService resource.
The OLM handles generating a serving key/cert pair whenever an owned APIService is being installed. The serving certificate has a CN containing the host name of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding APIService resource.
The cert is stored as a type kubernetes.io/tls Secret in the deployment namespace, and a Volume named apiservice-cert is automatically appended to the Volumes section of the deployment in the CSV matching the APIServiceDescription’s DeploymentName field.
If one does not already exist, a VolumeMount with a matching name is also appended to all containers of that deployment. This allows users to define a VolumeMount with the expected name to accommodate any custom path requirements. The generated VolumeMount’s path defaults to /apiserver.local.config/certificates and any existing VolumeMounts with the same path are replaced.
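For example, to mount the serving certificate at a custom path, you can pre-define a VolumeMount named apiservice-cert in the deployment of your CSV. The following sketch uses a hypothetical deployment, container name, and mount path chosen for illustration:
spec:
  install:
    spec:
      deployments:
      - name: example-apiserver # hypothetical deployment name
        spec:
          template:
            spec:
              containers:
              - name: example-apiserver # hypothetical container name
                volumeMounts:
                - name: apiservice-cert # expected name; OLM reuses this mount
                  mountPath: /etc/example/certs # custom path instead of the default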
The OLM ensures all required CSVs have an APIService that is available and all expected group-version-kinds are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by APIServices it does not own.
Field | Description | Required/Optional |
---|---|---|
Group | Group that the APIService provides, for example database.example.com. | Required |
Version | Version of the APIService, for example v1alpha1. | Required |
Kind | A kind that the APIService is expected to provide. | Required |
DisplayName | A human readable version of your APIService name, for example MongoDB Standalone. | Required |
Description | A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService. | Required |