
In OpenShift Container Platform version 4.11, you can install a cluster on IBM Z and LinuxONE infrastructure that you provision in a restricted network.

While this document refers to only IBM Z, all information in it also applies to LinuxONE.

Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster.

Prerequisites

About installations in restricted networks

In OpenShift Container Platform 4.11, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like the Amazon Web Services Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
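
For example, from a mirror host that can reach both the internet and your registry, you can populate the mirror registry with the oc adm release mirror command. The following is a minimal sketch only: the release version, architecture tag, registry host name, repository, and pull secret path are placeholder values, and the complete procedure, including alternative tooling, is described in the documentation for mirroring images for a disconnected installation.

    oc adm release mirror -a /path/to/pull-secret.json \
      --from=quay.io/openshift-release-dev/ocp-release:4.11.0-s390x \
      --to=mirror.example.com:5000/ocp4/openshift4 \
      --to-release-image=mirror.example.com:5000/ocp4/openshift4:4.11.0-s390x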

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

  • The ClusterVersion status includes an Unable to retrieve available updates error. You can check this condition with the example command that follows this list.

  • By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
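
For example, you can confirm the first limitation in the list above by querying the ClusterVersion resource after installation. This is an illustrative check that assumes a working oc client and cluster administrator credentials:

    oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="RetrievedUpdates")].message}'

In a restricted network, the message typically states that available updates cannot be retrieved until you configure an alternative update source.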

Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.11, you require access to the internet to obtain the images that are necessary to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.

This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.

Required machines for cluster installation

The smallest OpenShift Container Platform clusters require the following hosts:

Table 1. Minimum required hosts

One temporary bootstrap machine
    The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.

Three control plane machines
    The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.

At least two compute machines, which are also known as worker machines
    The workloads requested by OpenShift Container Platform users run on the compute machines.

To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines.

The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.

Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Machine         Operating System   vCPU [1]   Virtual RAM   Storage   IOPS
Bootstrap       RHCOS              4          16 GB         100 GB    N/A
Control plane   RHCOS              4          16 GB         100 GB    N/A
Compute         RHCOS              2          8 GB          100 GB    N/A

  1. One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.

Minimum IBM Z system environment

You can install OpenShift Container Platform version 4.11 on the following IBM hardware:

  • IBM z16 (all models), IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s

  • LinuxONE, any version

Hardware requirements

  • The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.

  • At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.

You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster.

Because the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.

Operating system requirements

  • One instance of z/VM 7.1 or later

On your z/VM instance, set up:

  • Three guest virtual machines for OpenShift Container Platform control plane machines

  • Two guest virtual machines for OpenShift Container Platform compute machines

  • One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine

IBM Z network connectivity requirements

To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:

  • A direct-attached OSA or RoCE network adapter

  • A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
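
The following is a minimal sketch of a possible VSwitch setup in layer 2 (ETHERNET) mode. The VSwitch name, real device number, and guest user ID are placeholders, and the definitions are usually made persistent in the SYSTEM CONFIG file and the user directory; refer to the z/VM documentation for the authoritative syntax.

    DEFINE VSWITCH OCPVSW1 RDEV 1000 ETHERNET
    SET VSWITCH OCPVSW1 GRANT OCPCTRL1

    * In each guest's user directory, couple a virtual NIC to the VSwitch:
    NICDEF 1000 TYPE QDIO LAN SYSTEM OCPVSW1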

Disk storage for the z/VM guest virtual machines
  • FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance.

  • FCP attached disk storage

Storage / Main Memory
  • 16 GB for OpenShift Container Platform control plane machines

  • 8 GB for OpenShift Container Platform compute machines

  • 16 GB for the temporary OpenShift Container Platform bootstrap machine

Preferred IBM Z system environment

Hardware requirements

  • 3 LPARs that each have the equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.

  • Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster.

  • HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network.

Operating system requirements

  • 2 or 3 instances of z/VM 7.1 or later for high availability

On your z/VM instances, set up:

  • 3 guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance.

  • At least 6 guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances.

  • 1 guest virtual machine for the temporary OpenShift Container Platform bootstrap machine.

  • To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE. Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation.
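
As an illustration only, the guest user IDs and share values below are hypothetical; see SET SHARE in IBM Documentation for the authoritative syntax and choose values that fit your environment. A higher relative share gives the control plane guests priority over guests that keep the default of RELATIVE 100 when the hypervisor is overcommitted.

    CP SET SHARE OCPCTRL1 RELATIVE 300
    CP SET SHARE OCPCTRL2 RELATIVE 300
    CP SET SHARE OCPCTRL3 RELATIVE 300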

IBM Z network connectivity requirements

To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:

  • A direct-attached OSA or RoCE network adapter

  • A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.

Disk storage for the z/VM guest virtual machines
  • FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance.

  • FCP attached disk storage

Storage / Main Memory
  • 16 GB for OpenShift Container Platform control plane machines

  • 8 GB for OpenShift Container Platform compute machines

  • 16 GB for the temporary OpenShift Container Platform bootstrap machine

Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
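
For example, after installation you can list the pending CSRs and approve the ones that you have verified; the commands below assume the oc client and cluster administrator credentials, and <csr_name> is a placeholder for a name returned by the first command.

    oc get csr
    oc adm certificate approve <csr_name>

Approve a kubelet serving CSR only after you confirm that it was requested by a node that you recognize as part of the cluster.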

Additional resources

Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.

During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
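
For example, with the ISC DHCP server (dhcpd), a per-host stanza similar to the following sketch can provide a persistent IP address, hostname, and DNS server to a cluster node. The MAC address, IP addresses, and names are placeholders; adapt them to your own DHCP implementation and addressing plan.

    host control-plane-0 {
        hardware ethernet 52:54:00:aa:bb:01;                    # placeholder MAC address of the node NIC
        fixed-address 192.168.100.10;                           # persistent IP address for the node
        option host-name "control-plane-0.ocp4.example.com";    # hostname handed to the node
        option domain-name-servers 192.168.100.1;               # DNS server for the node
    }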

If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.
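
As a sketch of the static alternative, the boot arguments follow the dracut ip= syntax; the addresses, hostname, and interface name below are placeholders, and the RHCOS installation section referenced above documents the full set of supported parameters.

    ip=192.168.100.10::192.168.100.1:255.255.255.0:control-plane-0.ocp4.example.com:enc1:none nameserver=192.168.100.1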

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

Setting the cluster node hostnames through DHCP

On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or set by another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this delay by using DHCP to provide the hostname for each cluster node.

Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.

Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.

This section provides details about the ports that are required.

Table 2. Ports used for all-machine to all-machine communications
Protocol   Port          Description
ICMP       N/A           Network reachability tests
TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn
UDP        4789          VXLAN
           6081          Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
           500           IPsec IKE packets
           4500          IPsec NAT-T packets
TCP/UDP    30000-32767   Kubernetes node port
ESP        N/A           IPsec Encapsulating Security Payload (ESP)

Table 3. Ports used for all-machine to control plane communications
Protocol   Port   Description
TCP        6443   Kubernetes API

Table 4. Ports used for control plane machine to control plane machine communications
Protocol   Port        Description
TCP        2379-2380   etcd server and peer ports
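
As an optional sanity check that is not part of the documented procedure, you can probe a required TCP port from another machine with a generic tool such as nc; the host name below is a placeholder.

    nc -zv control-plane-0.ocp4.example.com 6443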

NTP configuration for user-provisioned infrastructure

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can synchronize the clock with the NTP servers.
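
For illustration, a minimal chrony configuration that points the nodes at a local time server might contain directives such as the following; the server name is a placeholder, and on RHCOS you deliver this content through a MachineConfig as described in Configuring chrony time service rather than editing the file directly.

    server ntp1.example.com iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync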

Additional resources

User-provisioned DNS requirements

In OpenShift Container Platform deployments, DNS name resolution is required for the following components:

  • The Kubernetes API

  • The OpenShift Container Platform application wildcard

  • The bootstrap, control plane, and compute machines

Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.

DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.

The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..
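
For example, for a hypothetical cluster named ocp4 with the base domain example.com, the API, internal API, wildcard application, and bootstrap records resolve names such as the following; the IP addresses are placeholders for your load balancer and bootstrap machine addresses.

    api.ocp4.example.com.        IN  A  192.168.100.5
    api-int.ocp4.example.com.    IN  A  192.168.100.5
    *.apps.ocp4.example.com.     IN  A  192.168.100.6
    bootstrap.ocp4.example.com.  IN  A  192.168.100.20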

Table 5. Required DNS records
Compone