In OpenShift Container Platform version 4.10, you can install a cluster on IBM Z and LinuxONE infrastructure that you provision in a restricted network.
While this document refers to only IBM Z, all information in it also applies to LinuxONE.
Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster.
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. A sketch of this stanza appears after this list.
Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process.
Ensure that installation steps are done from a machine with access to the installation media.
You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Be sure to also review this site list if you are configuring a proxy.
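For reference, the imageContentSources stanza that you later add to the install-config.yaml file typically looks like the following minimal sketch. The mirror registry host name, port, and repository path are placeholders for your environment; the source repositories shown are the standard OpenShift Container Platform release image repositories.

imageContentSources:
- mirrors:
  - <mirror_registry_host>:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry_host>:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev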
In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.
Clusters in restricted networks have the following additional limitations and restrictions:
The ClusterVersion status includes an Unable to retrieve available updates error.
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment in which you install your cluster will not require internet access. Before you update the cluster, you update the content of the mirror registry.
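For example, one common way to populate the mirror registry is with the oc adm release mirror command. The following sketch uses placeholders for the pull secret path, the mirror registry host and repository, and the release version; adjust them for your environment.

# Mirror the release payload for s390x into the local registry.
oc adm release mirror -a <path_to_pull_secret> \
  --from=quay.io/openshift-release-dev/ocp-release:<release_version>-s390x \
  --to=<mirror_registry_host>:5000/ocp4/openshift4 \
  --to-release-image=<mirror_registry_host>:5000/ocp4/openshift4:<release_version>-s390x

The command output includes the imageContentSources snippet that you record for the install-config.yaml file.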
For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.
This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.
The smallest OpenShift Container Platform clusters require the following hosts:
Hosts | Description
---|---
One temporary bootstrap machine | The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
Three control plane machines | The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines | The workloads requested by OpenShift Container Platform users run on the compute machines.
To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines.
The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS
---|---|---|---|---|---
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | N/A
Control plane | RHCOS | 4 | 16 GB | 100 GB | N/A
Compute | RHCOS | 2 | 8 GB | 100 GB | N/A
[1] One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
You can install OpenShift Container Platform version 4.10 on the following IBM hardware:
IBM z16 (all models), IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s
LinuxONE, any version
The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.
You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster.
Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
One instance of z/VM 7.1 or later
On your z/VM instance, set up:
Three guest virtual machines for OpenShift Container Platform control plane machines
Two guest virtual machines for OpenShift Container Platform compute machines
One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine
To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
A direct-attached OSA or RoCE network adapter
A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
Disk storage for the z/VM guest virtual machines:
FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance.
FCP attached disk storage
Memory for the OpenShift Container Platform guest virtual machines:
16 GB for OpenShift Container Platform control plane machines
8 GB for OpenShift Container Platform compute machines
16 GB for the temporary OpenShift Container Platform bootstrap machine
Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster.
Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.
HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network.
2 or 3 instances of z/VM 7.1 or later for high availability
On your z/VM instances, set up:
3 guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance.
At least 6 guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances.
1 guest virtual machine for the temporary OpenShift Container Platform bootstrap machine.
To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE. Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation.
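For example, assuming the control plane guests use the hypothetical user IDs LNXCP1, LNXCP2, and LNXCP3, you might raise their relative share above the default of 100 with CP commands similar to the following sketch; choose values that fit your own capacity plan.

SET SHARE LNXCP1 RELATIVE 300
SET SHARE LNXCP2 RELATIVE 300
SET SHARE LNXCP3 RELATIVE 300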
To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
A direct-attached OSA or RoCE network adapter
A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
Disk storage for the z/VM guest virtual machines:
FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance.
FCP attached disk storage
Memory for the OpenShift Container Platform guest virtual machines:
16 GB for OpenShift Container Platform control plane machines
8 GB for OpenShift Container Platform compute machines
16 GB for the temporary OpenShift Container Platform bootstrap machine
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
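For example, after the machines join the cluster, you can list pending CSRs and approve the requests that you have verified by using commands similar to the following sketch.

# List all CSRs and their current condition.
oc get csr

# Approve a single CSR that you have verified.
oc adm certificate approve <csr_name>

# Approve every CSR that is still pending; run this only after you verify the requests.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve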
See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM Documentation.
See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization.
See Topics in LPAR performance for LPAR weight management and entitlements.
Recommended host practices for IBM Z & LinuxONE environments
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
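Before you boot the machines, it can help to confirm that the Ignition config files are reachable from the network that the nodes use. The following check is a sketch that assumes an HTTP server address, port, and file name; substitute your own values and expect an HTTP 200 response.

curl -s -o /dev/null -w '%{http_code}\n' http://<http_server_ip>:8080/bootstrap.ign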
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.
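As an illustration, a static configuration is passed on the RHCOS kernel command line with the ip= and nameserver= arguments. The following sketch reuses the sample control plane address and DNS server from the examples later in this document; the gateway address and the network device name (encbdd0 is only an example for IBM Z) are assumptions that depend on your environment.

ip=192.168.1.97::192.168.1.1:255.255.255.0:master0.ocp4.example.com:encbdd0:none nameserver=192.168.1.5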
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or set by another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
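For illustration, with an ISC DHCP server you can pin the IP address and hostname of a node by using a host declaration similar to the following sketch. The MAC address is a placeholder, and the IP address and hostname reuse the sample values from the DNS examples in this document.

host master0 {
    hardware ethernet <mac_address>;
    fixed-address 192.168.1.97;
    option host-name "master0.ocp4.example.com";
}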
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
This section provides details about the ports that are required.
Ports used for all-machine to all-machine communications:
Protocol | Port | Description
---|---|---
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099
TCP | 10250-10259 | The default ports that Kubernetes reserves
TCP | 10256 | openshift-sdn
UDP | 4789 | VXLAN
UDP | 6081 | Geneve
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
TCP/UDP | 30000-32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)
Ports used for all-machine to control plane communications:
Protocol | Port | Description
---|---|---
TCP | 6443 | Kubernetes API
Ports used for control plane machine to control plane machine communications:
Protocol | Port | Description
---|---|---
TCP | 2379-2380 | etcd server and peer ports
OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.
If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
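As a sketch, the chrony configuration that you distribute to the nodes, typically through a MachineConfig as described in Configuring chrony time service, might contain content similar to the following; ntp.example.com is a placeholder for your NTP server.

server ntp.example.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony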
In OpenShift Container Platform deployments, DNS name resolution is required for the following components:
The Kubernetes API
The OpenShift Container Platform application wildcard
The bootstrap, control plane, and compute machines
Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description
---|---|---
Kubernetes API | api.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
Kubernetes API | api-int.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.
Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console.
Bootstrap machine | bootstrap.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.
Control plane machines | <master><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster.
Compute machines | <worker><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.
You can use the dig command to verify name and reverse name resolution.
This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 (1)
api-int.ocp4.example.com. IN A 192.168.1.5 (2)
;
*.apps.ocp4.example.com. IN A 192.168.1.5 (3)
;
bootstrap.ocp4.example.com. IN A 192.168.1.96 (4)
;
master0.ocp4.example.com. IN A 192.168.1.97 (5)
master1.ocp4.example.com. IN A 192.168.1.98 (5)
master2.ocp4.example.com. IN A 192.168.1.99 (5)
;
worker0.ocp4.example.com. IN A 192.168.1.11 (6)
worker1.ocp4.example.com. IN A 192.168.1.7 (6)
;
;EOF
1 | Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
2 | Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
3 | Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
4 | Provides name resolution for the bootstrap machine.
5 | Provides name resolution for the control plane machines.
6 | Provides name resolution for the compute machines.
The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster.
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. (1)
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. (2)
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. (3)
;
97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. (4)
98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. (4)
99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. (4)
;
11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. (5)
7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. (5)
;
;EOF
1 | Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
2 | Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
3 | Provides reverse DNS resolution for the bootstrap machine.
4 | Provides reverse DNS resolution for the control plane machines.
5 | Provides reverse DNS resolution for the compute machines.
A PTR record is not required for the OpenShift Container Platform application wildcard.
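To spot-check the records from the preceding examples, you can query the DNS server directly with dig. In the following sketch, <nameserver_ip> is the address of the DNS server that serves the zones, and random is an arbitrary name that exercises the application wildcard.

# Forward lookups for the API, the internal API, the wildcard, and a control plane node.
dig +noall +answer @<nameserver_ip> api.ocp4.example.com
dig +noall +answer @<nameserver_ip> api-int.ocp4.example.com
dig +noall +answer @<nameserver_ip> random.apps.ocp4.example.com
dig +noall +answer @<nameserver_ip> master0.ocp4.example.com

# Reverse lookups for the API load balancer and the bootstrap machine.
dig +noall +answer @<nameserver_ip> -x 192.168.1.5
dig +noall +answer @<nameserver_ip> -x 192.168.1.96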
Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.