You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, use the
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in "Recommended cluster configuration for vDU application workloads".
Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resources (CRs) for a cluster based on SiteConfig and PolicyGenTemplate CRs.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
Create an output folder by running the following command:
$ mkdir -p ./out
Export the argocd directory from the ztp-site-generate container image:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.11.1 extract /home/ztp --tar | tar x -C ./out
The ./out directory contains the reference PolicyGenTemplate and SiteConfig CRs in the out/argocd/example/ folder.
out
└── argocd
└── example
├── policygentemplates
│ ├── common-ranGen.yaml
│ ├── example-sno-site.yaml
│ ├── group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ ├── kustomization.yaml
│ └── ns.yaml
└── siteconfig
├── example-sno.yaml
├── KlusterletAddonConfigOverride.yaml
└── kustomization.yaml
Create an output folder for the site installation CRs:
$ mkdir -p ./site-install
Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and bare-metal host that you want to install, for example:
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "<site_name>"
  namespace: "<site_name>"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret" (1)
  clusterImageSetNameRef: "openshift-4.11" (2)
  sshPublicKey: "ssh-rsa AAAA..." (3)
  clusters:
    - clusterName: "<site_name>"
      networkType: "OVNKubernetes"
      clusterLabels: (4)
        common: true
        group-du-sno: ""
        sites: "<site_name>"
      clusterNetwork:
        - cidr: 1001:1::/48
          hostPrefix: 64
      machineNetwork:
        - cidr: 1111:2222:3333:4444::/64
      serviceNetwork:
        - 1001:2::/112
      additionalNTPSources:
        - 1111:2222:3333:4444::2
      #crTemplates:
      #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" (5)
      nodes:
        - hostName: "example-node.example.com" (6)
          role: "master"
          bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ (7)
          bmcCredentialsName:
            name: "bmh-secret" (8)
          bootMACAddress: "AA:BB:CC:DD:EE:11"
          bootMode: "UEFI" (9)
          rootDeviceHints:
            wwn: "0x11111000000asd123"
          cpuset: "0-1,52-53" (10)
          nodeNetwork: (11)
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:11"
            config:
              interfaces:
                - name: eno1
                  type: ethernet
                  state: up
                  macAddress: "AA:BB:CC:DD:EE:11"
                  ipv4:
                    enabled: false
                  ipv6: (12)
                    enabled: true
                    address:
                      - ip: 1111:2222:3333:4444::aaaa:1
                        prefix-length: 64
              dns-resolver:
                config:
                  search:
                    - example.com
                  server:
                    - 1111:2222:3333:4444::2
              routes:
                config:
                  - destination: ::/0
                    next-hop-interface: eno1
                    next-hop-address: 1111:2222:3333:4444::1
                    table-id: 254
1. Create the assisted-deployment-pull-secret CR with the same namespace as the SiteConfig CR.
2. clusterImageSetNameRef defines an image set available on the hub cluster. To see the list of supported versions on your hub cluster, run oc get clusterimagesets.
3. Configure the SSH public key used to access the cluster.
4. Cluster labels must correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, and policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set.
5. Optional. The CR specified under KlusterletAddonConfig is used to override the default KlusterletAddonConfig that is created for the cluster.
6. For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts with role: worker.
7. BMC address that you use to access the host. Applies to all cluster types. ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols.
8. Name of the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host.
9. Configures the boot mode for the host. The default value is UEFI. Use UEFISecureBoot to enable secure boot on the host.
10. cpuset must match the value set in the cluster PerformanceProfile CR spec.cpu.reserved field for workload partitioning.
11. Specifies the network settings for the node.
12. Configures the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same.
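The copy step above can be sketched in shell. The site name site-1-sno and the credentials are hypothetical, and the stub reference file created below only stands in for the CR extracted by the earlier podman step so that the sketch is self-contained:

```shell
# Stand-in for the extracted reference CR; in practice this file comes
# from the podman extract step, not from this echo.
mkdir -p ./out/argocd/example/siteconfig
echo 'clusterName: "<site_name>"' > ./out/argocd/example/siteconfig/example-sno.yaml

# Copy the reference SiteConfig. Keep the copy in the same directory:
# the generator command mounts out/argocd/example/siteconfig as /resources
# and resolves site-1-sno.yaml relative to that mount.
cp ./out/argocd/example/siteconfig/example-sno.yaml \
   ./out/argocd/example/siteconfig/site-1-sno.yaml

# Fill in site-specific details; substituting the placeholder site name
# is shown here as one example edit.
sed -i 's/<site_name>/site-1-sno/g' ./out/argocd/example/siteconfig/site-1-sno.yaml
```

Keeping the modified CR alongside example-sno.yaml matters because the generator container only sees files under the mounted /resources directory.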
Generate the day-0 installation CRs by processing the modified SiteConfig CR site-1-sno.yaml with the following command:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.11.1 generator install site-1-sno.yaml /output
site-install
└── site-1-sno
├── site-1_agentclusterinstall_example-sno.yaml
├── site-1-sno_baremetalhost_example-node1.example.com.yaml
├── site-1-sno_clusterdeployment_example-sno.yaml
├── site-1-sno_configmap_example-sno.yaml
├── site-1-sno_infraenv_example-sno.yaml
├── site-1-sno_klusterletaddonconfig_example-sno.yaml
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
├── site-1-sno_managedcluster_example-sno.yaml
├── site-1-sno_namespace_example-sno.yaml
└── site-1-sno_nmstateconfig_example-node1.example.com.yaml
Optional: Generate just the day-0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option. For example, run the following commands:
Create an output folder for the MachineConfig CRs:
$ mkdir -p ./site-machineconfig
Generate the MachineConfig installation CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.11.1 generator install -E site-1-sno.yaml /output
site-machineconfig
└── site-1-sno
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
└── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
Generate and export the day-2 configuration CRs using the reference PolicyGenTemplate CRs from the previous step. Run the following commands:
Create an output folder for the day-2 CRs:
$ mkdir -p ./ref
Generate and export the day-2 configuration CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.11.1 generator config -N . /output
The command generates example group and site-specific PolicyGenTemplate CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder.
ref
└── customResource
├── common
├── example-multinode-site
├── example-sno
├── group-du-3node
├── group-du-3node-validator
│ └── Multiple-validatorCRs
├── group-du-sno
├── group-du-sno-validator
├── group-du-standard
└── group-du-standard-validator
└── Multiple-validatorCRs
Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete.
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the ZTP pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the SiteConfig CR by name.
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno (1)
data: (2)
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
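The values under data must be base64-encoded. A minimal sketch for producing them, using hypothetical BMC credentials:

```shell
# Base64-encode hypothetical BMC credentials for the Secret data fields.
# printf avoids the trailing newline that echo would fold into the encoding.
username_b64=$(printf '%s' 'root' | base64)
password_b64=$(printf '%s' 'supersecret' | base64)
echo "username: ${username_b64}"
echo "password: ${password_b64}"
```

Alternatively, a Secret can use the stringData field with plain-text values and let the API server handle the encoding.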