For a cluster that contains user-provisioned infrastructure, you must deploy all
of the required machines.
This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.
Required machines for cluster installation
The smallest OpenShift Container Platform clusters require the following hosts:
Table 1. Minimum required hosts

| Hosts | Description |
| One temporary bootstrap machine | The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. |
| Three control plane machines | The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. |
| At least two compute machines, which are also known as worker machines | The workloads requested by OpenShift Container Platform users run on the compute machines. |
To maintain high availability of your cluster, use separate physical hosts for these cluster machines.
The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Table 2. Minimum resource requirements

| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
| Bootstrap | RHCOS | 2 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 2 | 16 GB | 100 GB | 300 |
| Compute | RHCOS | 2 | 8 GB | 100 GB | 300 |
1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, so faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
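The vCPU formula in footnote 1 can be worked through with a short calculation. The values below are illustrative, not a sizing recommendation:

```shell
# Sketch of the vCPU formula: (threads per core x cores) x sockets = vCPUs.
# Example values are placeholders for a small SMT-enabled host.
threads_per_core=2   # SMT/hyperthreading enabled
cores=4
sockets=1
vcpus=$(( threads_per_core * cores * sockets ))
echo "$vcpus vCPUs"   # 8 vCPUs with these example values
```

With SMT disabled, `threads_per_core` is 1 and the vCPU count equals the physical core count.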
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
Minimum IBM Power requirements
You can install OpenShift Container Platform version 4.11 on the following IBM hardware:
Hardware requirements
Operating system requirements
On your IBM Power instance, set up:
-
Three guest virtual machines for OpenShift Container Platform control plane machines
-
Two guest virtual machines for OpenShift Container Platform compute machines
-
One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine
Disk storage for the IBM Power guest virtual machines
Network for the PowerVM guest virtual machines
-
Dedicated physical adapter, or SR-IOV virtual function
-
Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-
Virtualized by the Virtual I/O Server using IBM vNIC
Storage / main memory
-
100 GB / 16 GB for OpenShift Container Platform control plane machines
-
100 GB / 8 GB for OpenShift Container Platform compute machines
-
100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine
Recommended IBM Power system requirements
Hardware requirements
Operating system requirements
On your IBM Power instance, set up:
-
Three guest virtual machines for OpenShift Container Platform control plane machines
-
Two guest virtual machines for OpenShift Container Platform compute machines
-
One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine
Disk storage for the IBM Power guest virtual machines
Network for the PowerVM guest virtual machines
-
Dedicated physical adapter, or SR-IOV virtual function
-
Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-
Virtualized by the Virtual I/O Server using IBM vNIC
Storage / main memory
-
120 GB / 32 GB for OpenShift Container Platform control plane machines
-
120 GB / 32 GB for OpenShift Container Platform compute machines
-
120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine
Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
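As one possible manual approach, pending CSRs can be reviewed and approved with the `oc` CLI. The CSR name below is illustrative; inspect each request before approving it:

```shell
# List pending certificate signing requests on the cluster.
oc get csr

# Approve a single CSR by name (the name shown is a placeholder).
oc adm certificate approve csr-abc12

# Or approve all currently pending CSRs in one pass. Review the list
# first: approving a serving-certificate CSR implies you have verified
# that the requesting machine is legitimate.
oc get csr -o name | xargs oc adm certificate approve
```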
Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.
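As an illustration of such boot arguments, a static IP configuration for an ISO-based RHCOS install can be supplied on the kernel command line using the dracut `ip=` syntax. All addresses, hostnames, and the interface name below are placeholders:

```text
ip=192.0.2.10::192.0.2.1:255.255.255.0:master0.example.com:enp1s0:none
nameserver=192.0.2.53
```

The fields of `ip=` are, in order: IP address, empty peer field, gateway, netmask, hostname, interface, and autoconfiguration method (`none` for static addressing).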
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully qualified domain names in both the node objects and all DNS requests.
Setting the cluster node hostnames through DHCP
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or provided by another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
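If you run dnsmasq as your DHCP server, reserving a persistent address and hostname per machine might look like the following fragment. The MAC addresses, IP addresses, hostnames, and domain are placeholders:

```text
# Example dnsmasq configuration: pin a persistent IP address and
# hostname to each cluster machine by its MAC address.
dhcp-host=52:54:00:aa:bb:01,192.0.2.10,master0
dhcp-host=52:54:00:aa:bb:02,192.0.2.11,master1
dhcp-option=option:domain-search,example.com
```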
Network connectivity requirements
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster
components to communicate. Each machine must be able to resolve the hostnames
of all other machines in the cluster.
This section provides details about the ports that are required.
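The resolution requirement above can be checked from each machine with a small helper. It uses `getent` so the lookup honors /etc/hosts as well as DNS; the node names in the comments are placeholders for your own hostnames:

```shell
# Minimal sketch: report whether a hostname resolves from this machine.
check_node() {
  if getent hosts "$1" > /dev/null; then
    echo "OK: $1"
  else
    echo "FAIL: $1"
  fi
}

# Run this from every machine against every other machine, for example:
check_node localhost
# check_node master0.example.com
# check_node worker0.example.com
```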
In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 3. Ports used for all-machine to all-machine communications

| Protocol | Port | Description |