Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer.
The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements.
See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn more about the configurations that are available with the Agent-based Installer.
Before you begin, ensure that you have reviewed details about the OpenShift Container Platform installation and update processes.
Use this procedure to download the Agent-based Installer and the CLI needed for your installation.
Log in to the OpenShift Container Platform web console using your login credentials.
Navigate to Datacenter.
Click Run Agent-based Installer locally.
Select the operating system and architecture for the OpenShift Installer and Command line interface.
Click Download Installer to download and extract the install program.
Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH.
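For example, on a Linux host you might extract the installer archive and verify the binary as follows. The archive name openshift-install-linux.tar.gz and the target directory /usr/local/bin are illustrative; adjust them for your download and environment:
$ tar -xvf openshift-install-linux.tar.gz
$ sudo mv openshift-install /usr/local/bin/
$ openshift-install version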
Use this procedure to create the preferred configuration inputs used to create the PXE files.
Install the nmstate dependency by running the following command:
$ sudo dnf install /usr/bin/nmstatectl -y
Place the openshift-install binary in a directory that is on your PATH.
Create a directory to store the install configuration by running the following command:
$ mkdir ~/<directory_name>
This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional.
Create the install-config.yaml file by running the following command:
$ cat << EOF > ./<directory_name>/install-config.yaml
apiVersion: v1
baseDomain: test.example.com
compute:
- architecture: amd64 (1)
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: sno-cluster (2)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/16
  networkType: OVNKubernetes (3)
  serviceNetwork:
  - 172.30.0.0/16
platform: (4)
  none: {}
pullSecret: '<pull_secret>' (5)
sshKey: '<ssh_pub_key>' (6)
EOF
1 | Specify the system architecture. Valid values are amd64, arm64, ppc64le, and s390x. If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64, amd64, s390x, and ppc64le.
2 | Required. Specify your cluster name.
3 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
4 | Specify your platform.
5 | Specify your pull secret.
6 | Specify your SSH public key.
If you set the platform to vSphere or baremetal, you can configure IP address endpoints for cluster nodes in three ways: IPv4, IPv6, or IPv4 and IPv6 in parallel (dual-stack). IPv6 is supported only on bare metal platforms.
Example of dual-stack networking:
networking:
  clusterNetwork:
  - cidr: 172.21.0.0/16
    hostPrefix: 23
  - cidr: fd02::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.11.0/16
  - cidr: 2001:DB8::/32
  serviceNetwork:
  - 172.22.0.0/16
  - fd03::/112
  networkType: OVNKubernetes
platform:
  baremetal:
    apiVIPs:
    - 192.168.11.3
    - 2001:DB8::4
    ingressVIPs:
    - 192.168.11.4
    - 2001:DB8::5
When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file.
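For example, a minimal sketch of what that field might look like in the install-config.yaml file; the certificate content is a placeholder that you replace with the trust certificate for your mirror registry:
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror_registry_certificate_contents>
  -----END CERTIFICATE-----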
Create the agent-config.yaml file by running the following command:
$ cat > agent-config.yaml << EOF
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80 (1)
hosts: (2)
  - hostname: master-0 (3)
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a5
    rootDeviceHints: (4)
      deviceName: /dev/sdb
    networkConfig: (5)
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          mac-address: 00:ef:44:21:e6:a5
          ipv4:
            enabled: true
            address:
              - ip: 192.168.111.80
                prefix-length: 23
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.111.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.111.2
            next-hop-interface: eno1
            table-id: 254
EOF
1 | This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig.
2 | Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters.
3 | Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
4 | Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value.
5 | Optional: Configures the network interface of a host in NMState format.
Optional: To create an iPXE script, add the bootArtifactsBaseURL parameter to the agent-config.yaml file:
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80
bootArtifactsBaseURL: <asset_server_URL>
Where <asset_server_URL> is the URL of the server that you will upload the PXE assets to.
See Configuring a three-node cluster to deploy three-node clusters in bare metal environments.
Use the following procedure to create the assets and optional script to implement in your PXE infrastructure.
Create the PXE assets by running the following command:
$ openshift-install agent create pxe-files
The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory.
boot-artifacts
├─ agent.x86_64-initrd.img
├─ agent.x86_64.ipxe
├─ agent.x86_64-rootfs.img
└─ agent.x86_64-vmlinuz
The contents of the boot-artifacts directory vary depending on the specified architecture.
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration.
Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process.
If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL that you added to the agent-config.yaml file.
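For example, one way to stage the assets is to copy the boot-artifacts directory to the web root of an HTTP server that serves the bootArtifactsBaseURL; the server address and web root shown here are placeholders:
$ rsync -av boot-artifacts/ <user>@<asset_server>:/var/www/html/
You can then point iPXE clients at http://<asset_server>/agent.x86_64.ipxe, for example by chainloading that URL from your existing PXE or DHCP configuration.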
After creating the PXE assets, you can add IBM Z® agents. Only use this procedure for IBM Z® clusters.
Depending on your IBM Z® environment, you can choose from the following options:
Adding IBM Z® agents with z/VM
Adding IBM Z® agents with RHEL KVM
Adding IBM Z® agents with Logical Partition (LPAR)
Currently, ISO boot support on IBM Z® (s390x) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported.
Use the following procedure to manually add IBM Z® agents with z/VM. Only use this procedure for IBM Z® clusters with z/VM.
Create a parameter file for the z/VM guest:
rd.neednet=1 \
console=ttysclp0 \
coreos.live.rootfs_url=<rootfs_url> \ (1)
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ (2)
zfcp.allow_lun_scan=0 \ (3)
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.dasd=0.0.4411 \ (4)
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ (5)
random.trust_cpu=on rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8 \
coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8"
1 | For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported.
2 | For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z® and IBM® LinuxONE".
3 | The default is 1. Omit this entry when using an OSA network adapter.
4 | For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks.
5 | For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks.
Leave all other parameters unchanged.
Punch the kernel.img, generic.parm, and initrd.img files to the virtual reader of the z/VM guest virtual machine.
For more information, see PUNCH (IBM Documentation).
You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines.
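For example, if you transfer the files from a Linux guest with the vmur utility from the s390-tools package, the commands might look similar to the following. The guest ID is a placeholder and the exact options can vary by environment, so check the vmur man page before running them:
$ vmur punch -r -u <zvm_guest_id> -N kernel.img kernel.img
$ vmur punch -r -u <zvm_guest_id> -N generic.parm generic.parm
$ vmur punch -r -u <zvm_guest_id> -N initrd.img initrd.img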
Log in to the conversational monitor system (CMS) on the bootstrap machine.
IPL the bootstrap machine from the reader by running the following command:
$ ipl c
For more information, see IPL (IBM Documentation).
Use the following procedure to manually add IBM Z® agents with RHEL KVM. Only use this procedure for IBM Z® clusters with RHEL KVM.
Boot your RHEL KVM machine.
To deploy the virtual server, run the virt-install command with the following parameters:
$ virt-install \
--name <vm_name> \
--autostart \
--ram=16384 \
--cpu host \
--vcpus=8 \
--location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \(1)
--disk <qcow_image_path> \
--network network:macvtap,mac=<mac_address> \
--graphics none \
--noautoconsole \
--wait=-1 \
--extra-args "rd.neednet=1 nameserver=<nameserver>" \
--extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \
--extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \
--extra-args "random.trust_cpu=on rd.luks.options=discard" \
--extra-args "ignition.firstboot ignition.platform.id=metal" \
--extra-args "console=tty1 console=ttyS1,115200n8" \
--extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \
--osinfo detect=on,require=off
1 | For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server.
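Because the virt-install command runs with --noautoconsole, you can optionally confirm from another terminal that the guest started and watch the agent boot. The VM name is whatever you passed to the --name parameter:
$ virsh list --all
$ virsh console <vm_name>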
Use the following procedure to manually add IBM Z® agents to your cluster that runs in an LPAR environment. Use this procedure only for IBM Z® clusters running in an LPAR.
You have Python 3 installed.
Create a boot parameter file for the agents.
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \(1)
coreos.inst.persistent-kargs=console=ttysclp0 \
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \(2)
rd.znet=qeth,<network_adaptor_range>,layer2=1 \
rd.<disk_type>=<adapter> \(3)
zfcp.allow_lun_scan=0 \
ai.ip_cfg_override=1 \(4)
random.trust_cpu=on rd.luks.options=discard
1 | For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
2 | For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
3 | For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
4 | Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
Generate the .ins and initrd.img.addrsize files by running the following shell script:
The .ins file is a special file that includes installation data and is present on the FTP server. It can be accessed from the HMC system. This file contains details such as the mapping of the location of the installation data on the disk or FTP server and the memory locations where the data is to be copied.
The .ins and initrd.img.addrsize files are not automatically generated as part of the boot artifacts from the installation program. You must generate these files manually.
Save the following script to a file, such as generate-files.sh:
#!/bin/bash
# The following commands retrieve the size of the `kernel` and `initrd`:
KERNEL_IMG_PATH='./kernel.img'
INITRD_IMG_PATH='./initrd.img'
CMDLINE_PATH='./generic.prm'
kernel_size=$(stat -c%s $KERNEL_IMG_PATH)
initrd_size=$(stat -c%s $INITRD_IMG_PATH)
# The following command rounds the `kernel` size up to the next megabytes (MB) boundary.
# This value is the starting address of `initrd.img`.
offset=$(( (kernel_size + 1048575) / 1048576 * 1048576 ))
INITRD_IMG_NAME=$(echo $INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev)
# The following commands create the kernel binary patch file that contains the `initrd` address and size:
KERNEL_OFFSET=0x00000000
KERNEL_CMDLINE_OFFSET=0x00010480
INITRD_ADDR_SIZE_OFFSET=0x00010408
OFFSET_HEX=$(printf '0x%08x\n' $offset)
# The following commands convert the address and size to binary format:
printf "$(printf '%016x\n' $offset)" | xxd -r -p > temp_address.bin
printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
# The following command concatenates the address and size binaries:
cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
# The following command deletes the temporary files:
rm -rf temp_address.bin temp_size.bin
# The following commands create the `.ins` file, named `generic.ins` as referenced later in this procedure.
# The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files
# and the memory locations where the data is to be copied.
cat > generic.ins << EOF
$KERNEL_IMG_PATH $KERNEL_OFFSET
$INITRD_IMG_PATH $OFFSET_HEX
$INITRD_IMG_NAME.addrsize $INITRD_ADDR_SIZE_OFFSET
$CMDLINE_PATH $KERNEL_CMDLINE_OFFSET
EOF
Execute the script by running the following command:
$ bash <file_name>.sh
Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize parameter files to the file server. For more information, see Booting Linux in LPAR mode (IBM documentation).
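For example, if the file server accepts SCP and serves files from /var/ftp/pub, both of which are assumptions about your environment, the transfer might look like this:
$ scp kernel.img initrd.img generic.ins initrd.img.addrsize <user>@<file_server>:/var/ftp/pub/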
Start the machine.
Repeat the procedure for all other machines in the cluster.