You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The zero touch provisioning (ZTP) pipeline performs the cluster installations. ZTP can be used in a disconnected environment.
GitOps zero touch provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
By default, GitOps ZTP creates all policies with a remediation action of inform. These policies cause RHACM to report on the compliance status of the clusters relevant to the policies, but they do not apply the desired configuration. During the ZTP process, after OpenShift Container Platform installation, the TALM steps through the created inform policies and enforces them on the target managed clusters. This applies the configuration to the managed clusters. Outside of the ZTP phase of the cluster lifecycle, this design allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using the TALM.
To automate the initial configuration of newly deployed clusters, the TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics:
The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace.
The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR.
The cluster selector includes only the cluster associated with that ManagedCluster CR.
The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade CR is created.
Pre-caching is disabled.
The timeout is set to 4 hours (240 minutes).
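The characteristics above can be sketched as a ClusterGroupUpgrade CR. This is an illustrative reconstruction, not TALM output: the cluster name, policy names, and the target namespace shown here are placeholder assumptions.

```yaml
# Hypothetical sketch of a TALM auto-created ClusterGroupUpgrade CR.
# example-sno, the policy names, and the namespace are placeholder assumptions.
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-sno              # same name as the ManagedCluster CR
  namespace: ztp-install         # assumed target namespace
spec:
  clusters:
  - example-sno                  # selector includes only this cluster
  enable: true                   # created in the enabled state
  managedPolicies:               # all policies RHACM bound to the cluster
  - example-common-policy
  - example-group-du-sno-policy
  preCaching: false              # pre-caching is disabled
  remediationStrategy:
    timeout: 240                 # 4 hours, expressed in minutes
```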
The automatic creation of an enabled ClusterGroupUpgrade CR ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster.
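For example, assuming the auto-created CR carries the cluster's name (example-sno and the namespace below are placeholders), a failed installation can be restarted by deleting it so that the TALM regenerates it:

```shell
# Delete the auto-created ClusterGroupUpgrade CR so that the TALM recreates
# it and restarts the ZTP configuration phase for this cluster.
# The CR name and namespace shown are illustrative assumptions.
oc delete clustergroupupgrade example-sno -n ztp-install
```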
Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR that is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR and is not used for anything other than the auto-generated ClusterGroupUpgrade CR. All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation.
The TALM applies the configuration policies in the order specified by the wave annotations and waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for that Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account.
Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves.
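For example, a source CR carries its wave in a metadata annotation. The CR and the wave value below are illustrative assumptions, not the shipped defaults:

```yaml
# Illustrative ztp-deploy-wave annotation on a source CR.
# The wave value "2" is an example, not a documented default.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
spec:
  sourceType: grpc
  image: registry.example.com/example-catalog-index:latest
```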
To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image:
$ grep -r "ztp-deploy-wave" out/source-crs
The automatically created ClusterGroupUpgrade CR includes directives that annotate the ManagedCluster CR with labels at the start and end of the ZTP process.
When the ZTP post-installation configuration begins, the ManagedCluster CR has the ztp-running label applied. When all policies are remediated on the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label.
For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for the deployment of applications, including all reconciliation and the resulting effects of the ZTP-applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by the TALM. Do not manipulate this label after the initial ZTP installation of the cluster.
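You can check where a cluster is in this flow by inspecting the labels on its ManagedCluster CR. The cluster name below is a placeholder:

```shell
# Show the labels on a managed cluster to see whether ztp-running or
# ztp-done is currently applied. "example-sno" is a placeholder name.
oc get managedcluster example-sno --show-labels
```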
The automatically created ClusterGroupUpgrade CR has its owner reference set to the ManagedCluster CR from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the ClusterGroupUpgrade CR to be deleted along with any supporting resources.
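A sketch of that owner reference, with placeholder name and UID, might look like the following. Kubernetes garbage collection is what deletes the ClusterGroupUpgrade CR when its owner is removed:

```yaml
# Hypothetical owner reference on the auto-created ClusterGroupUpgrade CR.
# The name and uid values are placeholders.
metadata:
  name: example-sno
  ownerReferences:
  - apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    name: example-sno
    uid: 00000000-0000-0000-0000-000000000000
```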
Red Hat Advanced Cluster Management (RHACM) uses zero touch provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes:
Installing the host operating system (RHCOS) on a blank server
Deploying OpenShift Container Platform
Creating cluster policies and site subscriptions
Making the necessary network configurations to the server operating system
Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV
After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically:
A Discovery image ISO file is generated and booted on the target host.
When the ISO file successfully boots on the target host, it reports the host hardware information to RHACM.
After all hosts are discovered, OpenShift Container Platform is installed.
When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster.
The requested add-on services are installed on the target cluster.
The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster.
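To follow these automatic steps, you can watch the Agent CRs in the cluster namespace on the hub. The namespace below is a placeholder for your cluster name:

```shell
# Watch host discovery and installation progress on the hub cluster.
# "example-sno" is a placeholder cluster namespace.
oc get agent -n example-sno --watch
```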
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads.
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the ZTP pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. The secrets are referenced from the SiteConfig CR by name.
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift Container Platform and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno (1)
data: (2)
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno (3)
data:
  .dockerconfigjson: <pull_secret> (4)
type: kubernetes.io/dockerconfigjson
(1) Must match the namespace configured in the related SiteConfig CR.
(2) Base64-encoded values for username and password.
(3) Must match the namespace configured in the related SiteConfig CR.
(4) Base64-encoded pull secret.
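The base64-encoded values can be produced with the base64 utility. The credentials below are placeholders; note that echo -n prevents a trailing newline from being encoded into the secret:

```shell
# Encode placeholder BMC credentials for the Secret CR data fields.
# "root" and "calvin" are example values only, not real credentials.
echo -n "root" | base64     # username value
echo -n "calvin" | base64   # password value
```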
Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
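Assuming the secret file sits next to kustomization.yaml, the entry might look like the following. The resources field placement is an assumption about this layout, and the file name follows this procedure's example:

```yaml
# Example kustomization.yaml entry referencing the secrets file.
# The use of the resources field here is an assumption about your layout.
resources:
- example-sno-secret.yaml
```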
Use the following procedure to create a SiteConfig custom resource (CR) and related files and to initiate the zero touch provisioning (ZTP) cluster deployment.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You configured the hub cluster for generating the required installation and policy CRs.
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information.
When you create the source repository, ensure that you patch the ArgoCD application with the
To be ready for provisioning managed clusters, you require the following for each bare-metal host:
Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host.
ZTP uses the BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repository. You create individual BMCSecret CRs for each host manually.
Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name that matches the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml, the cluster name and namespace is example-sno.
Export the cluster namespace by running the following command:
$ export CLUSTERNS=example-sno
Create the namespace:
$ oc create namespace $CLUSTERNS
Create the pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. The secrets are referenced from the SiteConfig CR by name.
Create a SiteConfig CR for your cluster in your local clone of the Git repository:
Choose the appropriate example for your CR from the out/argocd/example/siteconfig folder. The folder includes example files for single-node, three-node, and standard clusters:
Change the cluster and host details in the example file to match the type of cluster you want. For example:
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "<site_name>"
  namespace: "<site_name>"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret" (1)
  clusterImageSetNameRef: "openshift-4.11" (2)
  sshPublicKey: "ssh-rsa AAAA..." (3)
  clusters:
  - clusterName: "<site_name>"
    networkType: "OVNKubernetes"
    clusterLabels: (4)
      common: true
      group-du-sno: ""
      sites: "<site_name>"
    clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
    machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
    serviceNetwork:
    - 1001:2::/112
    additionalNTPSources:
    - 1111:2222:3333:4444::2
    #crTemplates:
    #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" (5)
    nodes:
    - hostName: "example-node.example.com" (6)
      role: "master"
      bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ (7)
      bmcCredentialsName:
        name: "bmh-secret" (8)
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      bootMode: "UEFI" (9)
      rootDeviceHints:
        wwn: "0x11111000000asd123"
      cpuset: "