Installing OpenShift Container Platform with Installer Provisioned Infrastructure (IPI) requires:
One provisioner node with Red Hat Enterprise Linux (RHEL) 8.1 installed.
Three control plane or master nodes.
At least two worker nodes.
IPMI access to each node.
At least two networks:
One network for provisioning nodes
One routable network
One optional management network.
Before installing OpenShift Container Platform with IPI, ensure the hardware environment meets the following requirements.
IPI installation involves a number of hardware node requirements:
CPU architecture: All nodes must use the x86_64 CPU architecture.
Unified Extensible Firmware Interface (UEFI): UEFI boot is required on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI device PXE settings must be set to use the IPv6 protocol on the provisioning network NIC.
Similar nodes: Nodes should have an identical configuration per role. That is, control plane nodes should be the same brand and model with the same CPU, RAM and storage configuration. Worker nodes should be identical.
Intelligent Platform Management Interface (IPMI): IPI installation requires IPMI enabled on each node.
Latest generation: Nodes should be of the most recent generation. IPI installation relies on IPMI, which should be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and RHCOS 8 for the worker nodes.
Network interfaces: Each node must have at least two 10 GB network interfaces (NICs): one for the provisioning network and one for the routable baremetal network. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as eth0 or eno1, should be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.
Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.
Provisioner node: IPI installation requires one provisioner node.
Control plane: IPI installation requires three Control Plane or master nodes for high availability.
Worker nodes: A typical production cluster will have many worker nodes. IPI installation in a high availability environment requires at least two worker nodes in an initial cluster.
IPI installation involves several network requirements. First, IPI installation involves a non-routable provisioning network for provisioning the OS on each bare metal node and a routable baremetal network. Since IPI installation deploys ironic-dnsmasq, the networks should have no other DHCP servers running on the same broadcast domain. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster.
Each OpenShift Container Platform node in the cluster must have access to an NTP server.
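On RHEL 8, chronyd is the default NTP client. The following is a minimal sketch of the relevant /etc/chrony.conf line, assuming a hypothetical time source ntp.example.com that is reachable from every node:

```
# /etc/chrony.conf fragment -- ntp.example.com is a placeholder for a
# time source reachable from every cluster node
server ntp.example.com iburst
```

Sync status can then be checked on each node with `chronyc sources`.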
OpenShift Container Platform deploys with two networks:
provisioning: The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The first NIC on each node, such as eth0 or eno1, must interface with the provisioning network.
baremetal: The baremetal network is a routable network. The second NIC on each node, such as eth1 or eno2, must interface with the baremetal network.
Each NIC should be on a separate VLAN corresponding to the appropriate network.
Clients access the OpenShift Container Platform cluster nodes over the baremetal network.
A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
<cluster-name>.<domain-name>
For example:
test-cluster.example.com
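The naming convention above can be sketched in shell, assuming the example cluster name and base domain from the text:

```shell
# Build the cluster's canonical endpoint names from the cluster name
# and base domain (values taken from the example above).
cluster_name="test-cluster"
domain="example.com"

api_fqdn="api.${cluster_name}.${domain}"
apps_fqdn="*.apps.${cluster_name}.${domain}"

echo "${api_fqdn}"   # API endpoint record
echo "${apps_fqdn}"  # wildcard Ingress record
```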
For the baremetal network, a network administrator must reserve a number of IP addresses, including:
Three virtual IP addresses:
One IP address for the API endpoint
One IP address for the wildcard Ingress endpoint
One IP address for the name server
One IP address for the provisioner node.
One IP address for each Control Plane (master) node.
One IP address for each worker node.
The following table provides example hostnames for each node in the OpenShift Container Platform cluster.
Usage | Hostname | IP |
---|---|---|
API | api.<cluster-name>.<domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster-name>.<domain> | <ip> |
Nameserver | ns1.<cluster-name>.<domain> | <ip> |
Provisioner node | provisioner.<cluster-name>.<domain> | <ip> |
Master-0 | openshift-master-0.<cluster-name>.<domain> | <ip> |
Master-1 | openshift-master-1.<cluster-name>.<domain> | <ip> |
Master-2 | openshift-master-2.<cluster-name>.<domain> | <ip> |
Worker-0 | openshift-worker-0.<cluster-name>.<domain> | <ip> |
Worker-1 | openshift-worker-1.<cluster-name>.<domain> | <ip> |
Worker-n | openshift-worker-n.<cluster-name>.<domain> | <ip> |
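As a sketch, the records above could be expressed in a BIND forward zone for the example cluster test-cluster.example.com; all addresses below are placeholders from the 192.0.2.0/24 documentation range:

```
; test-cluster.example.com zone fragment (example addresses only)
api                 IN  A  192.0.2.10
*.apps              IN  A  192.0.2.11
ns1                 IN  A  192.0.2.12
provisioner         IN  A  192.0.2.20
openshift-master-0  IN  A  192.0.2.21
openshift-master-1  IN  A  192.0.2.22
openshift-master-2  IN  A  192.0.2.23
openshift-worker-0  IN  A  192.0.2.30
openshift-worker-1  IN  A  192.0.2.31
```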
Each node in the cluster requires the following configuration for proper installation.
A mismatch between nodes will cause an installation failure.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:
NIC | Network | VLAN |
---|---|---|
NIC1 | provisioning | <provisioning-vlan> |
NIC2 | baremetal | <baremetal-vlan> |
NIC1 is a non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster.
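If the VLANs are tagged on the host rather than handled as access ports on the switch, a RHEL 8 ifcfg sketch for NIC2 on the baremetal VLAN might look as follows; the NIC name eth1, VLAN ID 20, and address are example values, not requirements from this document:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1.20 (sketch; eth1, VLAN 20,
# and the address are example values for the baremetal VLAN)
DEVICE=eth1.20
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.50
PREFIX=24
```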
The Red Hat Enterprise Linux (RHEL) 8.1 installation process on the provisioner node may vary. To install RHEL 8.1 using a local Satellite server or a PXE server, you may PXE-enable NIC2.
PXE | Boot order |
---|---|
NIC1 PXE-enabled | 1 |
NIC2 | 2 |
Ensure PXE is disabled on all other NICs.
Configure the Control Plane (master) and worker nodes as follows:
PXE | Boot order |
---|---|
NIC1 PXE-enabled (provisioning network) | 1 |
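Boot order is typically set through each node's BMC. The sketch below prints, rather than runs, the ipmitool commands that would set PXE as the next boot device; the BMC addresses and credentials are placeholders:

```shell
# Print the ipmitool commands that would set PXE as the next boot device
# on each node's BMC. BMC IPs, user, and password are placeholders --
# remove the leading "echo" to actually run the commands.
bmc_user="admin"
bmc_pass="password"

for bmc_ip in 192.0.2.101 192.0.2.102 192.0.2.103; do
  echo ipmitool -I lanplus -H "$bmc_ip" -U "$bmc_user" -P "$bmc_pass" \
    chassis bootdev pxe
done
```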
Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.
Each node must be accessible via out-of-band management. The provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation.
The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network is a valid option.
Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes:
Out-of-band management IP, for example:
Dell (iDRAC) IP
HP (iLO) IP
NIC1 (provisioning) MAC address
NIC2 (baremetal) MAC address
NIC1 VLAN is configured for the provisioning network.
NIC2 VLAN is configured for the baremetal network.
NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes.
NIC2 is PXE-enabled when using a local PXE or Satellite server to install OS images.
PXE has been disabled on all other NICs.
Control Plane (master) and worker nodes are configured.
All nodes accessible via out-of-band management.
A separate management network has been created. (optional)
All required installation data has been gathered.
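NIC MAC addresses can be read from sysfs on each Linux node. A minimal sketch; NIC names such as eth0 or eno1 vary per system and are passed as arguments:

```shell
# Print the MAC address of a NIC by reading sysfs (Linux).
get_mac() {
  cat "/sys/class/net/$1/address"
}

# Example: the loopback device always exists on Linux. On real nodes you
# would pass the NIC1/NIC2 names, such as eth0 or eno1, instead.
get_mac lo
```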