Installer-provisioned installation of OpenShift Container Platform requires:

  1. One provisioner node with RHEL 8.1 installed.

  2. Three control plane nodes.

  3. Baseboard Management Controller (BMC) access to each node.

  4. At least one network:

    1. One required routable network

    2. One optional provisioning network

    3. One optional management network

Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements.

Node requirements

Installer-provisioned installation involves a number of hardware node requirements:

  • CPU architecture: All nodes must use x86_64 CPU architecture.

  • Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory and storage configuration.

  • Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol.

  • Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and RHCOS 8 for the control plane and worker nodes.

  • Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.

  • Provisioner node: Installer-provisioned installation requires one provisioner node.

  • Control plane: Installer-provisioned installation requires three control plane nodes for high availability.

  • Worker nodes: While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource-efficient for administrators and developers during development and testing.

  • Network interfaces: Each node must have at least one 10 Gb network interface for the routable baremetal network. When using the provisioning network for deployment, which is the default configuration, each node must also have one 10 Gb network interface for the provisioning network. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as eth0 or eno1, must be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.

  • Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, the UEFI device PXE settings must be set to use the IPv6 protocol on the provisioning network NIC. Omitting the provisioning network removes this requirement.

Network requirements

By default, installer-provisioned installation of OpenShift Container Platform has several network requirements. First, it uses a non-routable provisioning network for provisioning the operating system on each bare metal node, and a routable baremetal network. Because installer-provisioned installation deploys ironic-dnsmasq, no other DHCP servers should run on the same broadcast domain. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster.
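
For illustration, both networks later appear in the install-config.yaml configuration file. The following is a minimal sketch only; the CIDRs and virtual IPs are placeholder values, and the exact set of available fields varies by OpenShift Container Platform release:

networking:
  machineNetwork:
  - cidr: 192.168.1.0/24                     # routable baremetal network (placeholder)
platform:
  baremetal:
    provisioningNetworkCIDR: 172.22.0.0/24   # non-routable provisioning network (placeholder)
    apiVIP: 192.168.1.5                      # reserved virtual IP for the API endpoint (placeholder)
    ingressVIP: 192.168.1.6                  # reserved virtual IP for wildcard ingress (placeholder)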

Network Time Protocol (NTP)

Each OpenShift Container Platform node in the cluster must have access to an NTP server.
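
If the cluster nodes cannot reach a public NTP pool, point them at an internal NTP server. As a minimal illustration, assuming chrony as the time client on each node (the server name is a placeholder):

# /etc/chrony.conf (excerpt)
server ntp.example.com iburst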

Configuring NICs

OpenShift Container Platform deploys with two networks:

  • provisioning: The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. When deploying using the provisioning network, the first NIC on each node, such as eth0 or eno1, must interface with the provisioning network.

  • baremetal: The baremetal network is a routable network. When deploying using the provisioning network, the second NIC on each node, such as eth1 or eno2, must interface with the baremetal network. When deploying without a provisioning network, you can use any NIC on each node to interface with the baremetal network.

Each NIC should be on a separate VLAN corresponding to the appropriate network.

Configuring the DNS server

Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.

<cluster-name>.<domain-name>

For example:

test-cluster.example.com
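
In BIND-style zone file notation, the API and wildcard ingress records for this cluster might look like the following sketch; the addresses are placeholders:

api.test-cluster.example.com.     IN A <ip>
*.apps.test-cluster.example.com.  IN A <ip>
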
Reserving IP addresses for nodes with the DHCP server

For the baremetal network, a network administrator must reserve a number of IP addresses, including:

  1. Two virtual IP addresses.

    • One IP address for the API endpoint

    • One IP address for the wildcard ingress endpoint

  2. One IP address for the provisioner node.

  3. One IP address for each control plane (master) node.

  4. One IP address for each worker node, if applicable.

The following table provides an example of hostnames for each node in the OpenShift Container Platform cluster.

Usage               Hostname                                      IP

API                 api.<cluster-name>.<domain>                   <ip>
Ingress LB (apps)   *.apps.<cluster-name>.<domain>                <ip>
Provisioner node    provisioner.<cluster-name>.<domain>           <ip>
Master-0            openshift-master-0.<cluster-name>.<domain>    <ip>
Master-1            openshift-master-1.<cluster-name>.<domain>    <ip>
Master-2            openshift-master-2.<cluster-name>.<domain>    <ip>
Worker-0            openshift-worker-0.<cluster-name>.<domain>    <ip>
Worker-1            openshift-worker-1.<cluster-name>.<domain>    <ip>
Worker-n            openshift-worker-n.<cluster-name>.<domain>    <ip>
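
One way to implement the reservations is with static dhcp-host entries, assuming a dnsmasq-based DHCP server on the baremetal network; the MAC addresses and IP addresses below are placeholders:

# Static DHCP reservations on the baremetal network (placeholder values)
dhcp-host=52:54:00:aa:bb:01,provisioner,192.168.1.10
dhcp-host=52:54:00:aa:bb:02,openshift-master-0,192.168.1.20
dhcp-host=52:54:00:aa:bb:03,openshift-master-1,192.168.1.21
dhcp-host=52:54:00:aa:bb:04,openshift-master-2,192.168.1.22
dhcp-host=52:54:00:aa:bb:05,openshift-worker-0,192.168.1.30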

Additional requirements with no provisioning network

All installer-provisioned installations require a baremetal network. The baremetal network is a routable network used for external network access. In addition to the IP addresses supplied to each OpenShift Container Platform cluster node, installations without a provisioning network require the following:

  • Setting the bootstrapProvisioningIP configuration setting in the install-config.yaml configuration file to an available IP address from the baremetal network.

  • Setting the provisioningHostIP configuration setting in the install-config.yaml configuration file to an available IP address from the baremetal network.

  • Deploying the OpenShift Container Platform cluster using Redfish virtual media, such as iDRAC virtual media.

Configuring additional IP addresses for bootstrapProvisioningIP and provisioningHostIP is not required when using a provisioning network.
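
The following install-config.yaml excerpt sketches these settings; the IP addresses are placeholders, and field names such as provisioningNetwork vary by OpenShift Container Platform release, so verify them against the release you are installing:

platform:
  baremetal:
    provisioningNetwork: "Disabled"       # deploy without a provisioning network
    bootstrapProvisioningIP: 192.168.1.8  # available IP from the baremetal network (placeholder)
    provisioningHostIP: 192.168.1.9       # available IP from the baremetal network (placeholder)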

Configuring nodes

Configuring nodes when using the provisioning network

Each node in the cluster requires the following configuration for proper installation.

A mismatch between nodes will cause an installation failure.

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:

NIC    Network        VLAN

NIC1   provisioning   <provisioning-vlan>
NIC2   baremetal      <baremetal-vlan>

NIC1 is a non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster.

The RHEL 8.x installation process on the provisioner node might vary. To install RHEL 8.x using a local Satellite server or a PXE server, PXE-enable NIC2.

PXE                                                Boot order

NIC1 PXE-enabled (provisioning network)            1
NIC2 baremetal network (PXE-enabled is optional)   2

Ensure PXE is disabled on all other NICs.

Configure the control plane and worker nodes as follows:

PXE                                       Boot order

NIC1 PXE-enabled (provisioning network)   1

Configuring nodes without the provisioning network

The installation process requires one NIC:

NIC    Network     VLAN

NICx   baremetal   <baremetal-vlan>

NICx connects to the routable baremetal network, which is used for the installation of the OpenShift Container Platform cluster and is routable to the internet.

Out-of-band management

Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.

Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation.

The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network is a valid option.
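
The BMCs are later referenced as access URLs in the installer configuration. For illustration, assuming IPMI or Redfish support on the hardware (the addresses are placeholders, and the Redfish system path varies by vendor):

ipmi://192.168.2.10
redfish://192.168.2.10/redfish/v1/Systems/1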

Required data for installation

Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes:

  • Out-of-band management IP

    • Examples

      • Dell (iDRAC) IP

      • HP (iLO) IP

When using the provisioning network

  • NIC1 (provisioning) MAC address

  • NIC2 (baremetal) MAC address

When omitting the provisioning network

  • NICx (baremetal) MAC address
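
The gathered values ultimately populate the hosts section of the install-config.yaml configuration file. The following is a minimal sketch for one control plane node when using the provisioning network; the name, credentials, MAC address, and BMC address are placeholders:

platform:
  baremetal:
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://192.168.2.11      # out-of-band management IP (placeholder)
        username: admin                   # placeholder credential
        password: password                # placeholder credential
      bootMACAddress: 52:54:00:aa:bb:02   # NIC1 (provisioning) MAC; use the baremetal NIC MAC when omitting the provisioning network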

Validation checklist for nodes

When using the provisioning network

  • NIC1 VLAN is configured for the provisioning network.

  • NIC2 VLAN is configured for the baremetal network.

  • NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes.

  • PXE has been disabled on all other NICs.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created (optional).

  • All required data for installation has been gathered.

When omitting the provisioning network

  • NICx VLAN is configured for the baremetal network.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created (optional).

  • All required data for installation has been gathered.