The `operator-sdk` CLI can generate, or scaffold, a number of packages and files for each Operator project.
The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.17 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework).
Ansible-based Operator projects generated by using the `operator-sdk init --plugins ansible` command contain the following directories and files (example commands and file contents are shown after the table):
| File or directory | Purpose |
| --- | --- |
| `Dockerfile` | Dockerfile for building the container image for the Operator. |
| `Makefile` | Targets for building, publishing, and deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). See the example after this table. |
| `PROJECT` | YAML file containing metadata information for the Operator. |
| `config/crd` | Base CRD files and the `kustomization.yaml` settings. |
| `config/default` | Collects all Operator manifests for deployment. Used by the `make deploy` command. |
| `config/manager` | Controller manager deployment. |
| `config/prometheus` | `ServiceMonitor` resource for monitoring the Operator. |
| `config/rbac` | Role and role binding for leader election and authentication proxy. |
| `config/samples` | Sample resources created for the CRDs. |
| `config/testing` | Sample configurations for testing. |
| `playbooks/` | A subdirectory for the playbooks to run. |
| `roles/` | Subdirectory for the roles tree to run. |
| `watches.yaml` | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the `create api` command. See the example after this table. |
| `requirements.yml` | YAML file containing the Ansible collections and role dependencies to install during a build. See the example after this table. |
| `molecule/` | Molecule scenarios for end-to-end testing of your role and Operator. |
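As a reference for the scaffolding command mentioned above, the following is a minimal sketch of generating this layout. The domain, group, version, and kind values are placeholders:

```console
$ operator-sdk init --plugins ansible --domain example.com
$ operator-sdk create api --group cache --version v1alpha1 --kind Memcached --generate-role
```

The `create api` command scaffolds an Ansible role under `roles/` and appends a matching entry to `watches.yaml`; the `--generate-playbook` flag can be used to scaffold a playbook under `playbooks/` as well.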
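The `Makefile` targets described in the table are commonly invoked as follows. This is an illustrative sketch; the image reference is a placeholder and the exact targets depend on the `Makefile` scaffolded in your project:

```console
$ make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1
$ make install      # install the CRDs into the cluster
$ make deploy IMG=quay.io/example/memcached-operator:v0.0.1
$ make undeploy     # remove the Operator deployment
$ make uninstall    # remove the CRDs
```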
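The `watches.yaml` file maps each watched GVK to the Ansible role or playbook that reconciles it. A minimal illustrative entry, assuming the example group and kind used above:

```yaml
# watches.yaml: map a watched GVK to the Ansible content that reconciles it
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: memcached   # alternatively, reference a playbook, for example: playbook: playbooks/memcached.yml
```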
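The `requirements.yml` file lists the Ansible collections and role dependencies that are installed into the Operator image during the build. The collection names below are examples commonly used with Ansible-based Operators, and the version pins are illustrative; the versions in a scaffolded project vary by release:

```yaml
# requirements.yml: Ansible content installed at image build time
collections:
  - name: operator_sdk.util
    version: "0.5.0"
  - name: kubernetes.core
    version: "2.4.0"
```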