
This document details the tested cluster maximums for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, along with information about the test environment and configuration used to test the maximums. For ROSA with HCP clusters, the control plane is fully managed in the service AWS account and scales automatically with the cluster.

ROSA with HCP cluster maximums

Consider the following tested object maximums when you plan a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster installation. The table specifies the maximum limits for each tested type in a ROSA with HCP cluster.

These guidelines are based on a cluster of 500 compute (also known as worker) nodes. For smaller clusters, the maximums are lower.

Customers running ROSA with HCP 4.14.x and 4.15.x clusters require a minimum z-stream version of 4.14.28 or 4.15.15, respectively, to scale to 500 worker nodes. On earlier z-stream versions, the maximum is 90 worker nodes.
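The cluster's current z-stream version can be read from its ClusterVersion resource. The following is a minimal sketch, not an official ROSA tool, that compares the reported version against the minimums above; it assumes the kubernetes Python client and a kubeconfig with read access to the cluster.

# Sketch: check whether a 4.14.x/4.15.x cluster meets the z-stream minimum
# required to scale to 500 worker nodes. Assumes the `kubernetes` package
# and a kubeconfig with read access; thresholds are taken from the text above.
from kubernetes import client, config

MINIMUMS = {"4.14": (4, 14, 28), "4.15": (4, 15, 15)}

def parse(version: str) -> tuple:
    # "4.14.28" -> (4, 14, 28); ignore any build or pre-release suffix
    return tuple(int(p) for p in version.split("-")[0].split("+")[0].split("."))

def main() -> None:
    config.load_kube_config()
    cv = client.CustomObjectsApi().get_cluster_custom_object(
        group="config.openshift.io", version="v1",
        plural="clusterversions", name="version",
    )
    current = cv["status"]["desired"]["version"]
    major_minor = ".".join(current.split(".")[:2])
    required = MINIMUMS.get(major_minor)
    if required and parse(current) < required:
        print(f"{current} is below {'.'.join(map(str, required))}; "
              "limited to 90 worker nodes until upgraded")
    else:
        print(f"{current} meets the z-stream requirement for 500 worker nodes")

if __name__ == "__main__":
    main()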

Table 1. Tested cluster maximums

Maximum type                              | 4.x tested maximum
Number of pods [1]                        | 25,000
Number of pods per node                   | 250
Number of pods per core                   | There is no default value
Number of namespaces [2]                  | 5,000
Number of pods per namespace [3]          | 25,000
Number of services [4]                    | 10,000
Number of services per namespace          | 5,000
Number of back ends per service           | 5,000
Number of deployments per namespace [3]   | 2,000

  1. The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application; see the sketch after these notes for one way to tally current counts on a running cluster.

  2. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.

  3. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing of the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.

  4. Each service port and each service back end has a corresponding entry in iptables. The number of back ends of a given service affects the size of the Endpoints objects, which in turn affects the size of data that is sent throughout the system.
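As a rough way to compare a running cluster against several of the counts in Table 1, the following sketch tallies pods, namespaces, services, pods per namespace, and back ends per service. It is an illustration only, not part of the ROSA service: it assumes the kubernetes Python client, a kubeconfig with cluster-wide read access, and the threshold values copied from the table above.

# Sketch: tally selected object counts and compare them with the tested maximums.
# Listing all objects without pagination can be slow on very large clusters.
from collections import Counter
from kubernetes import client, config

TESTED_MAXIMUMS = {
    "pods": 25000,
    "namespaces": 5000,
    "services": 10000,
    "pods per namespace": 25000,
    "back ends per service": 5000,
}

def main() -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pods = v1.list_pod_for_all_namespaces().items
    namespaces = v1.list_namespace().items
    services = v1.list_service_for_all_namespaces().items
    endpoints = v1.list_endpoints_for_all_namespaces().items

    pods_per_ns = Counter(p.metadata.namespace for p in pods)
    # Count addresses across all subsets of each Endpoints object as a
    # rough proxy for "back ends per service".
    backends = {
        f"{e.metadata.namespace}/{e.metadata.name}": sum(
            len(s.addresses or []) for s in (e.subsets or [])
        )
        for e in endpoints
    }

    observed = {
        "pods": len(pods),
        "namespaces": len(namespaces),
        "services": len(services),
        "pods per namespace": max(pods_per_ns.values(), default=0),
        "back ends per service": max(backends.values(), default=0),
    }
    for key, value in observed.items():
        limit = TESTED_MAXIMUMS[key]
        status = "OK" if value <= limit else "ABOVE TESTED MAXIMUM"
        print(f"{key}: {value} (tested maximum {limit}) {status}")

if __name__ == "__main__":
    main()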