
System Requirements

The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.

Red Hat Subscriptions

You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.

OpenShift Container Platform 3.6 requires Docker 1.12.

Minimum Hardware Requirements

The system requirements vary per host type:

Masters

  • Physical or virtual system, or an instance running on a public or private IaaS.

  • Base OS: RHEL 7.3 or 7.4 with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.3.6 or later.

  • Minimum 4 vCPU (additional are strongly recommended).

  • Minimum 16 GB RAM (additional memory is strongly recommended, especially if etcd is co-located on masters).

  • Minimum 40 GB hard disk space for the file system containing /var/. [1]

  • Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.

  • Minimum 1 GB hard disk space for the file system containing the system’s temporary directory. [2]

  • Masters with co-located etcd require a minimum of 4 cores; 2-core systems do not work.

Nodes

  • Physical or virtual system, or an instance running on a public or private IaaS.

  • Base OS: RHEL 7.3 or 7.4 with "Minimal" installation option, or RHEL Atomic Host 7.3.6 or later.

  • NetworkManager 1.0 or later.

  • 1 vCPU.

  • Minimum 8 GB RAM.

  • Minimum 15 GB hard disk space for the file system containing /var/. [1]

  • Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.

  • Minimum 1 GB hard disk space for the file system containing the system’s temporary directory. [2]

  • An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage.
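
Reserving this unallocated space typically means pointing docker-storage-setup at a spare block device before the docker service first starts. The following fragment is an illustrative sketch only; the device name and volume group are hypothetical, and the authoritative procedure is in Configuring Docker Storage:

# /etc/sysconfig/docker-storage-setup (illustrative values)
DEVS=/dev/vdb   # spare block device with at least 15 GB unallocated
VG=docker-vg    # volume group created for Docker’s storage back end

The storage is provisioned when docker-storage-setup runs, before any containers are started on the node.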

Separate etcd Nodes

  • Minimum 20 GB hard disk space for etcd data.

  • Consult Hardware Recommendations to properly size your etcd nodes.

  • Currently, OpenShift Container Platform stores image, build, and deployment metadata in etcd. You must periodically prune old resources. If you are planning to leverage a large number of images/builds/deployments, place etcd on machines with large amounts of memory and fast SSD drives.
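
The periodic pruning mentioned above can be performed with the oc adm prune commands, which remove old builds, deployments, and image data. The retention flags below are illustrative and should be tuned to your environment:

$ oc adm prune builds --orphans --keep-younger-than=60m --confirm
$ oc adm prune deployments --orphans --keep-younger-than=60m --confirm
$ oc adm prune images --keep-younger-than=60m --confirm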

[1] Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage with Docker-formatted Containers for instructions on configuring this during or after installation.

[2] The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.

OpenShift Container Platform only supports servers with the x86_64 architecture.

Production Level Hardware Requirements

Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:

Master Hosts

In a highly available OpenShift Container Platform cluster with a separate etcd cluster, a master host should have, in addition to the minimum requirements listed above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirement of 4 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, for a total of 6 CPU cores and 19 GB of RAM.

When planning an environment with multiple masters, a minimum of three etcd hosts and a load-balancer between the master hosts are required.

The OpenShift Container Platform master caches deserialized versions of resources aggressively to ease CPU load. However, in smaller clusters of less than 1000 pods, this cache can waste a lot of memory for negligible CPU load reduction. The default cache size is 50,000 entries, which, depending on the size of your resources, can grow to occupy 1 to 2 GB of memory. The cache size can be reduced using the following setting in /etc/origin/master/master-config.yaml:

kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "1000"

Node Hosts

The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
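
As a hypothetical sizing illustration (the pod count and per-pod figures are arbitrary, not recommendations), applying the 10 percent overhead works out as follows:

200 pods x 100 millicores = 20,000 millicores, plus 10% overhead ≈ 22 cores
200 pods x 250 MiB        = 50,000 MiB,        plus 10% overhead ≈ 54 GiB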

Use the above with the following table to plan the maximum loads for nodes and pods:

Host Sizing Recommendation

  Maximum nodes per cluster    2000
  Maximum pods per cluster     120000
  Maximum pods per node        250
  Maximum pods per core        10

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Configuring Core Usage

By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable.

For example, run the following before starting the server to make OpenShift Container Platform only run on one core:

# export GOMAXPROCS=1
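
On RPM-based installations where the master and node run as systemd services, you can instead set the variable in a drop-in unit so that it persists across restarts. The drop-in below is a sketch that assumes the node runs as the atomic-openshift-node service; adjust the unit name for your installation:

# /etc/systemd/system/atomic-openshift-node.service.d/gomaxprocs.conf
[Service]
Environment=GOMAXPROCS=1

Reload systemd and restart the service for the change to take effect.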

SELinux

Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform or the installer will fail. Also, configure SELINUXTYPE=targeted in the /etc/selinux/config file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
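
Before running the installer, you can confirm the SELinux state on each host. The output should show the enforcing mode and the targeted policy:

# getenforce
# sestatus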

Using OverlayFS

OverlayFS is a union file system that allows you to overlay one file system on top of another.

As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older overlay driver. However, Red Hat recommends using overlay2 instead of overlay, because of its speed and simple implementation.

Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.

See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to enable the overlay2 graph driver for the Docker service.
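
As a minimal sketch of the driver selection on RHEL (assuming Docker storage is managed through docker-storage-setup; see the linked documentation for the complete procedure):

# /etc/sysconfig/docker-storage-setup (fragment)
STORAGE_DRIVER=overlay2

Changing the graph driver on a host that already stores images and containers requires rebuilding Docker storage, so plan the driver choice before installing OpenShift Container Platform.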

NTP

You must enable Network Time Protocol (NTP) to prevent masters and nodes in the cluster from going out of sync. Set openshift_clock_enabled to true in the Ansible playbook to enable NTP on masters and nodes in the cluster during Ansible installation.

# openshift_clock_enabled=true
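
After installation, you can confirm that each host is keeping its clock synchronized. On RHEL 7 with chronyd or ntpd active, timedatectl reports the NTP state; both the enabled and synchronized values should be yes:

# timedatectl | grep -i ntp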

Security Warning

OpenShift Container Platform runs containers on your hosts, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access your host’s Docker daemon and perform docker build and docker push operations. As such, you should be aware of the inherent security risks associated with performing docker run operations on arbitrary images as they effectively have root access.

To address these risks, OpenShift Container Platform uses security context constraints that control the actions that pods can perform and what they can access.

Environment Requirements

The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.

DNS

OpenShift Container Platform requires a fully functional DNS server in the environment. Ideally, this is a separate host running DNS software that can provide name resolution to the hosts and containers running on the platform.

Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.

Key components of OpenShift Container Platform run inside containers and use the following process for name resolution:

  1. By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.

  2. OpenShift Container Platform then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq.

  3. If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.
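
For reference, dnsIP is a top-level field in node-config.yaml. The following fragment is illustrative; the address shown reuses the example node IP from the DNS records below:

# /etc/origin/node/node-config.yaml (fragment)
dnsIP: 10.64.33.101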

As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.

NetworkManager is required on the nodes in order to populate dnsmasq with the DNS IP addresses. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.

The following is an example set of DNS records for the Single Master and Multiple Nodes scenario:

master    A   10.64.33.100
node1     A   10.64.33.101
node2     A   10.64.33.102

If you do not have a properly functioning DNS environment, you could experience failure with:

  • Product installation via the reference Ansible-based scripts

  • Deployment of the infrastructure containers (registry, routers)

  • Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone

Configuring Hosts to Use DNS

Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:

  • Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager, as shown in the sketch after this list.

  • Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can set a value for dnsIP in the node-config.yaml file, which is prepended to the pod’s resolv.conf file; the second nameserver is then the host’s first nameserver. By default, this is the IP address of the node host.

    For most configurations, do not set the openshift_dns_ip option during the advanced installation of OpenShift Container Platform (using Ansible), because this option overrides the default IP address set by dnsIP.

    Instead, allow the installer to configure each node to use dnsmasq and forward requests to SkyDNS or the external DNS provider. If you do set the openshift_dns_ip option, then it should be set either with a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
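
For the DHCP-disabled case above, the static interface and its DNS servers can be configured through NetworkManager with nmcli. The connection name, addresses, and gateway below are illustrative assumptions; substitute your own values:

# nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 10.64.33.101/24 ipv4.gateway 10.64.33.254 \
    ipv4.dns 10.64.33.1 ipv4.dns-search example.com
# nmcli con up eth0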

To verify that hosts can be resolved by your DNS server:

  1. Check the contents of /etc/resolv.conf:

    $ cat /etc/resolv.conf
    # Generated by NetworkManager
    search example.com
    nameserver 10.64.33.1
    # nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh

    In this example, 10.64.33.1 is the address of our DNS server.

  2. Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:

    $ dig <node_hostname> @<IP_address> +short

    For example:

    $ dig master.example.com @10.64.33.1 +short
    10.64.33.100
    $ dig node1.example.com @10.64.33.1 +short
    10.64.33.101

Configuring a DNS Wildcard

Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.

A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.

For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:

*.cloudapps.example.com. 300 IN  A 192.168.133.2
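
You can verify the wildcard record against the DNS server from the earlier example. Any otherwise undefined name under the wildcard domain (app is arbitrary here) should return the router host’s IP address:

$ dig app.cloudapps.example.com @10.64.33.1 +short
192.168.133.2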

In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.

In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.

Network Access

A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure it using an FQDN, the FQDN should resolve on all nodes.

NetworkManager

NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.

Configuring firewalld as the firewall

While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file.

[OSEv3:vars]
os_firewall_use_firewalld=True

Setting this variable to true opens the required ports and adds rules to the default zone, which ensures that firewalld is configured correctly.

The default firewalld configuration offers limited configuration options, and these cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
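
After installation, you can confirm that firewalld is active and inspect the services and ports that were added to the default zone. This is a generic check, not an exhaustive validation:

# firewall-cmd --get-default-zone
# firewall-cmd --list-all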

Required Ports

The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.

Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.

Table 1. Node to Node

  • 4789 (UDP): Required for SDN communication between pods on separate hosts.

Table 2. Nodes to Master

  • 53 or 8053 (TCP/UDP): Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.

  • 4789 (UDP): Required for SDN communication between pods on separate hosts.

  • 443 or 8443 (TCP): Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.

Table 3. Master to Node

  • 4789 (UDP): Required for SDN communication between pods on separate hosts.

  • 10250 (TCP): The master proxies to node hosts via the Kubelet for oc commands.

In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself.

In a single-master cluster:

  • Ports marked with (L) must be open.

  • Ports not marked with (L) need not be open.

In a multiple-master cluster, all the listed ports must be open.

Table 4. Master to Master

  • 53 (L) or 8053 (L) (TCP/UDP): Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.

  • 2049 (L) (TCP/UDP): Required when provisioning an NFS host as part of the installer.

  • 2379 (TCP): Used for standalone etcd (clustered) to accept changes in state.

  • 2380 (TCP): etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).

  • 4001 (L) (TCP): Used for embedded etcd (non-clustered) to accept changes in state.

  • 4789 (L) (UDP): Required for SDN communication between pods on separate hosts.

Table 5. External to Load Balancer

  • 9000 (TCP): If you choose the native HA method, optional to allow access to the HAProxy statistics page.

Table 6. External to Master

  • 443 or 8443 (TCP): Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.

  • 8444 (TCP): Port that the atomic-openshift-master-controllers service listens on. Required to be open for the /metrics and /healthz endpoints.

Table 7. IaaS Deployments

  • 22 (TCP): Required for SSH by the installer or system administrator.

  • 53 or 8053 (TCP/UDP): Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.

  • 80 or 443 (TCP): For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.

  • 1936 (TCP): (Optional) Required to be open when running the template router to access statistics. Can be open externally or internally, depending on whether you want the statistics exposed publicly. Can require extra configuration to open. See the Notes section below for more information.

  • 4001 (TCP): For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections.

  • 2379 and 2380 (TCP): For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.

  • 4789 (UDP): For VxLAN use (OpenShift SDN). Required only internally on node hosts.

  • 8443 (TCP): For use by the OpenShift Container Platform web console, shared with the API server.

  • 10250 (TCP): For use by the Kubelet. Required to be externally open on nodes.

Notes

  • In the above examples, port 4789 is used for User Datagram Protocol (UDP).

  • When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.

  • OpenShift Container Platform internal DNS cannot be received over the SDN. The address used is the computed value of openshift_ip, which depends on the detected openshift_facts values or on the openshift_ip and openshift_public_ip values if they are overridden. For non-cloud deployments, this defaults to the IP address associated with the default route on the master host. For cloud deployments, it defaults to the IP address associated with the first internal interface as defined by the cloud metadata.

  • The master host uses port 10250 to reach the nodes; this traffic does not go over the SDN. The destination depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.

  • Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:

    # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
        --dport 1936 -j ACCEPT

Table 8. Aggregated Logging

  • 9200 (TCP): For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.

  • 9300 (TCP): For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.
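
As a sketch of the route mentioned for port 9200, assuming the default aggregated logging deployment (the logging project and logging-es service names may differ in your environment) and reusing the wildcard domain from the DNS example:

$ oc expose svc/logging-es -n logging --hostname=es.cloudapps.example.com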

Persistent Storage

The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.

The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
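
Once persistent volumes are available, users request storage with a PersistentVolumeClaim. The claim below is a minimal illustrative example; the name, access mode, and size are arbitrary:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi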

Cloud Provider Considerations

There are certain aspects to take into consideration when installing OpenShift Container Platform on a cloud provider.

Overriding Detected IP Addresses and Host Names

Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:

# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift_facts.yml

For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.

Now, verify the detected common settings. If they are not what you expect them to be, you can override them.

The Advanced Installation topic discusses the available Ansible variables in greater detail.

Variable Usage

hostname

  • Should resolve to the internal IP from the instances themselves.

  • openshift_hostname overrides.

ip

  • Should be the internal IP of the instance.

  • openshift_ip overrides.

public_hostname

  • Should resolve to the external IP from hosts outside of the cloud.

  • openshift_public_hostname overrides.

public_ip

  • Should be the externally accessible IP associated with the instance.

  • openshift_public_ip overrides.

use_openshift_sdn

  • Should be true unless the cloud is GCE.

  • openshift_use_openshift_sdn overrides.
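
When overrides are needed, they are typically set per host in the Ansible inventory. The values below are illustrative only (203.0.113.0/24 is a documentation address range) and must match your environment:

[nodes]
node1.example.com openshift_ip=10.64.33.101 openshift_public_ip=203.0.113.101 openshift_public_hostname=node1.public.example.com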

If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.

Post-Installation Configuration for Cloud Providers

Following the installation process, you can configure OpenShift Container Platform for AWS, OpenStack, or GCE.

Containerized GlusterFS Considerations

If you choose to configure containerized GlusterFS persistent storage for your cluster, or if you choose to configure a containerized GlusterFS-backed OpenShift Container Registry, you must consider the following prerequisites.

Storage Nodes

To use containerized GlusterFS persistent storage:

  • A minimum of 3 storage nodes is required.

  • Each storage node must have at least 1 raw block device with at least 100 GB available.

To run a containerized GlusterFS-backed OpenShift Container Registry:

  • A minimum of 3 storage nodes is required.

  • Each storage node must have at least 1 raw block device with at least 10 GB of free storage.

While containerized GlusterFS persistent storage can be configured and deployed on the same OpenShift Container Platform cluster as a containerized GlusterFS-backed registry, their storage must be kept separate, which requires additional storage nodes. For example, if both are configured, a total of 6 storage nodes would be needed: 3 for the registry and 3 for persistent storage. This limitation is imposed to avoid potential performance impacts on I/O and volume creation.

Required Software Components

For any RHEL (non-Atomic) storage nodes, the following RPM repository must be enabled:

# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms

The mount.glusterfs command must be available on all nodes that will host pods that will use GlusterFS volumes. For RPM-based systems, the glusterfs-fuse package must be installed:

# yum install glusterfs-fuse

If GlusterFS is already installed on the nodes, ensure the latest version is installed:

# yum update glusterfs-fuse