
This documentation outlines the service definition for the Red Hat OpenShift Service on AWS (ROSA) managed service.

Account management

This section provides information about the service definition for Red Hat OpenShift Service on AWS account management.

Billing

Red Hat OpenShift Service on AWS is billed through Amazon Web Services (AWS) based on the usage of the AWS components consumed by the service, such as load balancers, storage, EC2 instances, and other components, as well as the Red Hat subscriptions for the OpenShift service.

Any additional Red Hat software must be purchased separately.

Cluster self-service

Customers can perform self-service operations on their clusters, including, but not limited to:

  • Create a cluster

  • Delete a cluster

  • Add or remove an identity provider

  • Add or remove a user from an elevated group

  • Configure cluster privacy

  • Add or remove machine pools and configure autoscaling

  • Define upgrade policies

You can perform these self-service tasks using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.
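For example, the following rosa commands cover several of these self-service tasks. The cluster and machine pool names are placeholders, and exact flags can vary between rosa versions:

$ rosa create cluster --cluster-name=my-cluster --sts --mode=auto
$ rosa create idp --cluster=my-cluster --type=github --interactive
$ rosa create machinepool --cluster=my-cluster --name=compute-pool --replicas=3 --instance-type=m5.xlarge
$ rosa delete cluster --cluster=my-cluster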

Instance types

Single availability zone clusters require a minimum of 3 control plane nodes, 2 infrastructure nodes, and 2 worker nodes deployed to a single availability zone.

Multiple availability zone clusters require a minimum of 3 control plane nodes, 3 infrastructure nodes, and 3 worker nodes. Additional nodes must be purchased in multiples of three to maintain proper node distribution.

All Red Hat OpenShift Service on AWS clusters support a maximum of 180 worker nodes.

Control plane and infrastructure nodes are deployed and managed by Red Hat. Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss. There are at least 3 control plane nodes that handle etcd- and API-related workloads. There are at least 2 infrastructure nodes that handle metrics, routing, the web console, and other workloads. You must not run any workloads on the control plane and infrastructure nodes. Any workloads you intend to run must be deployed on worker nodes. See the Red Hat Operator support section below for more information about Red Hat workloads that must be deployed on worker nodes.

Approximately one vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This reservation of resources is necessary to run processes required by the underlying platform, including system daemons such as udev, kubelet, and the container runtime. The reserved resources also account for kernel reservations.
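You can see the effect of this reservation by comparing a worker node's capacity and allocatable resources. The node name below is a placeholder:

$ oc describe node <worker-node-name> | grep -A 6 -E "^Capacity|^Allocatable"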

OpenShift Container Platform core systems such as audit log aggregation, metrics collection, DNS, image registry, SDN, and others might consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed might vary based on usage.

For additional information, see the Kubernetes documentation.

As of Red Hat OpenShift Service on AWS 4.11, the default per-pod PID limit is 4096. If you want to enable this PID limit, you must upgrade your Red Hat OpenShift Service on AWS clusters to this version or later. Red Hat OpenShift Service on AWS clusters running on earlier versions use a default PID limit of 1024.

You can configure the per-pod PID limit on a Red Hat OpenShift Service on AWS cluster by using the ROSA CLI. For more information, see "Configuring PID limits".
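As a brief sketch, a cluster-wide PID limit can be set and inspected with the rosa kubeletconfig commands. The cluster name and limit value are placeholders, and flag names can differ between rosa versions:

$ rosa create kubeletconfig --cluster=my-cluster --pod-pids-limit=16384
$ rosa describe kubeletconfig --cluster=my-cluster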

AWS instance types

Red Hat OpenShift Service on AWS offers the following worker node instance types and sizes:

General purpose
  • m5.metal (96† vCPU, 384 GiB)

  • m5.xlarge (4 vCPU, 16 GiB)

  • m5.2xlarge (8 vCPU, 32 GiB)

  • m5.4xlarge (16 vCPU, 64 GiB)

  • m5.8xlarge (32 vCPU, 128 GiB)

  • m5.12xlarge (48 vCPU, 192 GiB)

  • m5.16xlarge (64 vCPU, 256 GiB)

  • m5.24xlarge (96 vCPU, 384 GiB)

  • m5a.xlarge (4 vCPU, 16 GiB)

  • m5a.2xlarge (8 vCPU, 32 GiB)

  • m5a.4xlarge (16 vCPU, 64 GiB)

  • m5a.8xlarge (32 vCPU, 128 GiB)

  • m5a.12xlarge (48 vCPU, 192 GiB)

  • m5a.16xlarge (64 vCPU, 256 GiB)

  • m5a.24xlarge (96 vCPU, 384 GiB)

  • m5dn.metal (96 vCPU, 384 GiB)

  • m5zn.metal (48 vCPU, 192 GiB)

  • m5d.metal (96† vCPU, 384 GiB)

  • m5n.metal (96 vCPU, 384 GiB)

  • m6a.metal (192 vCPU, 768 GiB)

  • m6a.xlarge (4 vCPU, 16 GiB)

  • m6a.2xlarge (8 vCPU, 32 GiB)

  • m6a.4xlarge (16 vCPU, 64 GiB)

  • m6a.8xlarge (32 vCPU, 128 GiB)

  • m6a.12xlarge (48 vCPU, 192 GiB)

  • m6a.16xlarge (64 vCPU, 256 GiB)

  • m6a.24xlarge (96 vCPU, 384 GiB)

  • m6a.32xlarge (128 vCPU, 512 GiB)

  • m6a.48xlarge (192 vCPU, 768 GiB)

  • m6i.metal (128 vCPU, 512 GiB)

  • m6i.xlarge (4 vCPU, 16 GiB)

  • m6i.2xlarge (8 vCPU, 32 GiB)

  • m6i.4xlarge (16 vCPU, 64 GiB)

  • m6i.8xlarge (32 vCPU, 128 GiB)

  • m6i.12xlarge (48 vCPU, 192 GiB)

  • m6i.16xlarge (64 vCPU, 256 GiB)

  • m6i.24xlarge (96 vCPU, 384 GiB)

  • m6i.32xlarge (128 vCPU, 512 GiB)

  • m6id.xlarge (4 vCPU, 16 GiB)

  • m6id.2xlarge (8 vCPU, 32 GiB)

  • m6id.4xlarge (16 vCPU, 64 GiB)

  • m6id.8xlarge (32 vCPU, 128 GiB)

  • m6id.12xlarge (48 vCPU, 192 GiB)

  • m6id.16xlarge (64 vCPU, 256 GiB)

  • m6id.24xlarge (96 vCPU, 384 GiB)

  • m6id.32xlarge (128 vCPU, 512 GiB)

  • m6id.metal (128 vCPU, 512 GiB)

  • m6idn.xlarge (4 vCPU, 16 GiB)

  • m6idn.2xlarge (8 vCPU, 32 GiB)

  • m6idn.4xlarge (16 vCPU, 64 GiB)

  • m6idn.8xlarge (32 vCPU, 128 GiB)

  • m6idn.12xlarge (48 vCPU, 192 GiB)

  • m6idn.16xlarge (64 vCPU, 256 GiB)

  • m6idn.24xlarge (96 vCPU, 384 GiB)

  • m6idn.32xlarge (128 vCPU, 512 GiB)

  • m6in.xlarge (4 vCPU, 16 GiB)

  • m6in.2xlarge (8 vCPU, 32 GiB)

  • m6in.4xlarge (16 vCPU, 64 GiB)

  • m6in.8xlarge (32 vCPU, 128 GiB)

  • m6in.12xlarge (48 vCPU, 192 GiB)

  • m6in.16xlarge (64 vCPU, 256 GiB)

  • m6in.24xlarge (96 vCPU, 384 GiB)

  • m6in.32xlarge (128 vCPU, 512 GiB)

  • m7a.xlarge (4 vCPU, 16 GiB)

  • m7a.2xlarge (8 vCPU, 32 GiB)

  • m7a.4xlarge (16 vCPU, 64 GiB)

  • m7a.8xlarge (32 vCPU, 128 GiB)

  • m7a.12xlarge (48 vCPU, 192 GiB)

  • m7a.16xlarge (64 vCPU, 256 GiB)

  • m7a.24xlarge (96 vCPU, 384 GiB)

  • m7a.32xlarge (128 vCPU, 512 GiB)

  • m7a.48xlarge (192 vCPU, 768 GiB)

  • m7a.metal-48xl (192 vCPU, 768 GiB)

  • m7i-flex.2xlarge (8 vCPU, 32 GiB)

  • m7i-flex.4xlarge (16 vCPU, 64 GiB)

  • m7i-flex.8xlarge (32 vCPU, 128 GiB)

  • m7i-flex.xlarge (4 vCPU, 16 GiB)

  • m7i.xlarge (4 vCPU, 16 GiB)

  • m7i.2xlarge (8 vCPU, 32 GiB)

  • m7i.4xlarge (16 vCPU, 64 GiB)

  • m7i.8xlarge (32 vCPU, 128 GiB)

  • m7i.12xlarge (48 vCPU, 192 GiB)

  • m7i.16xlarge (64 vCPU, 256 GiB)

  • m7i.24xlarge (96 vCPU, 384 GiB)

  • m7i.48xlarge (192 vCPU, 768 GiB)

  • m7i.metal-24xl (96 vCPU, 384 GiB)

  • m7i.metal-48xl (192 vCPU, 768 GiB)

† These instance types offer 96 logical processors on 48 physical cores. They run on single servers with two physical Intel sockets.

Burstable general purpose
  • t3.xlarge (4 vCPU, 16 GiB)

  • t3.2xlarge (8 vCPU, 32 GiB)

  • t3a.xlarge (4 vCPU, 16 GiB)

  • t3a.2xlarge (8 vCPU, 32 GiB)

Memory intensive
  • x1.16xlarge (64 vCPU, 976 GiB)

  • x1.32xlarge (128 vCPU, 1,952 GiB)

  • x1e.xlarge (4 vCPU, 122 GiB)

  • x1e.2xlarge (8 vCPU, 244 GiB)

  • x1e.4xlarge (16 vCPU, 488 GiB)

  • x1e.8xlarge (32 vCPU, 976 GiB)

  • x1e.16xlarge (64 vCPU, 1,952 GiB)

  • x1e.32xlarge (128 vCPU, 3,904 GiB)

  • x2idn.16xlarge (64 vCPU, 1,024 GiB)

  • x2idn.24xlarge (96 vCPU, 1,536 GiB)

  • x2idn.32xlarge (128 vCPU, 2,048 GiB)

  • x2iedn.xlarge (4 vCPU, 128 GiB)

  • x2iedn.2xlarge (8 vCPU, 256 GiB)

  • x2iedn.4xlarge (16 vCPU, 512 GiB)

  • x2iedn.8xlarge (32 vCPU, 1,024 GiB)

  • x2iedn.16xlarge (64 vCPU, 2,048 GiB)

  • x2iedn.24xlarge (96 vCPU, 3,072 GiB)

  • x2iedn.32xlarge (128 vCPU, 4,096 GiB)

  • x2iezn.metal (48 vCPU, 1,536 GiB)

  • x2iezn.2xlarge (8 vCPU, 256 GiB)

  • x2iezn.4xlarge (16 vCPU, 512 GiB)

  • x2iezn.6xlarge (24 vCPU, 768 GiB)

  • x2iezn.8xlarge (32 vCPU, 1,024 GiB)

  • x2iezn.12xlarge (48 vCPU, 1,536 GiB)

  • x2idn.metal (128 vCPU, 2,048 GiB)

  • x2iedn.metal (128 vCPU, 4,096 GiB)

Memory optimized
  • r4.xlarge (4 vCPU, 30.5 GiB)

  • r4.2xlarge (8 vCPU, 61 GiB)

  • r4.4xlarge (16 vCPU, 122 GiB)

  • r4.8xlarge (32 vCPU, 244 GiB)

  • r4.16xlarge (64 vCPU, 488 GiB)

  • r5.metal (96† vCPU, 768 GiB)

  • r5.xlarge (4 vCPU, 32 GiB)

  • r5.2xlarge (8 vCPU, 64 GiB)

  • r5.4xlarge (16 vCPU, 128 GiB)

  • r5.8xlarge (32 vCPU, 256 GiB)

  • r5.12xlarge (48 vCPU, 384 GiB)

  • r5.16xlarge (64 vCPU, 512 GiB)

  • r5.24xlarge (96 vCPU, 768 GiB)

  • r5a.xlarge (4 vCPU, 32 GiB)

  • r5a.2xlarge (8 vCPU, 64 GiB)

  • r5a.4xlarge (16 vCPU, 128 GiB)

  • r5a.8xlarge (32 vCPU, 256 GiB)

  • r5a.12xlarge (48 vCPU, 384 GiB)

  • r5a.16xlarge (64 vCPU, 512 GiB)

  • r5a.24xlarge (96 vCPU, 768 GiB)

  • r5ad.xlarge (4 vCPU, 32 GiB)

  • r5ad.2xlarge (8 vCPU, 64 GiB)

  • r5ad.4xlarge (16 vCPU, 128 GiB)

  • r5ad.8xlarge (32 vCPU, 256 GiB)

  • r5ad.12xlarge (48 vCPU, 384 GiB)

  • r5ad.16xlarge (64 vCPU, 512 GiB)

  • r5ad.24xlarge (96 vCPU, 768 GiB)

  • r5b.metal (96 vCPU, 768 GiB)

  • r5b.xlarge (4 vCPU, 32 GiB)

  • r5b.2xlarge (8 vCPU, 64 GiB)

  • r5b.4xlarge (16 vCPU, 128 GiB)

  • r5b.8xlarge (32 vCPU, 256 GiB)

  • r5b.12xlarge (48 vCPU, 384 GiB)

  • r5b.16xlarge (64 vCPU, 512 GiB)

  • r5b.24xlarge (96 vCPU, 768 GiB)

  • r5d.metal (96† vCPU, 768 GiB)

  • r5d.xlarge (4 vCPU, 32 GiB)

  • r5d.2xlarge (8 vCPU, 64 GiB)

  • r5d.4xlarge (16 vCPU, 128 GiB)

  • r5d.8xlarge (32 vCPU, 256 GiB)

  • r5d.12xlarge (48 vCPU, 384 GiB)

  • r5d.16xlarge (64 vCPU, 512 GiB)

  • r5d.24xlarge (96 vCPU, 768 GiB)

  • r5n.metal (96 vCPU, 768 GiB)

  • r5n.xlarge (4 vCPU, 32 GiB)

  • r5n.2xlarge (8 vCPU, 64 GiB)

  • r5n.4xlarge (16 vCPU, 128 GiB)

  • r5n.8xlarge (32 vCPU, 256 GiB)

  • r5n.12xlarge (48 vCPU, 384 GiB)

  • r5n.16xlarge (64 vCPU, 512 GiB)

  • r5n.24xlarge (96 vCPU, 768 GiB)

  • r5dn.metal (96 vCPU, 768 GiB)

  • r5dn.xlarge (4 vCPU, 32 GiB)

  • r5dn.2xlarge (8 vCPU, 64 GiB)

  • r5dn.4xlarge (16 vCPU, 128 GiB)

  • r5dn.8xlarge (32 vCPU, 256 GiB)

  • r5dn.12xlarge (48 vCPU, 384 GiB)

  • r5dn.16xlarge (64 vCPU, 512 GiB)

  • r5dn.24xlarge (96 vCPU, 768 GiB)

  • r6a.xlarge (4 vCPU, 32 GiB)

  • r6a.2xlarge (8 vCPU, 64 GiB)

  • r6a.4xlarge (16 vCPU, 128 GiB)

  • r6a.8xlarge (32 vCPU, 256 GiB)

  • r6a.12xlarge (48 vCPU, 384 GiB)

  • r6a.16xlarge (64 vCPU, 512 GiB)

  • r6a.24xlarge (96 vCPU, 768 GiB)

  • r6a.32xlarge (128 vCPU, 1,024 GiB)

  • r6a.48xlarge (192 vCPU, 1,536 GiB)

  • r6i.metal (128 vCPU, 1,024 GiB)

  • r6i.xlarge (4 vCPU, 32 GiB)

  • r6i.2xlarge (8 vCPU, 64 GiB)

  • r6i.4xlarge (16 vCPU, 128 GiB)

  • r6i.8xlarge (32 vCPU, 256 GiB)

  • r6i.12xlarge (48 vCPU, 384 GiB)

  • r6i.16xlarge (64 vCPU, 512 GiB)

  • r6i.24xlarge (96 vCPU, 768 GiB)

  • r6i.32xlarge (128 vCPU, 1,024 GiB)

  • r6id.metal (128 vCPU, 1,024 GiB)

  • r6id.xlarge (4 vCPU, 32 GiB)

  • r6id.2xlarge (8 vCPU, 64 GiB)

  • r6id.4xlarge (16 vCPU, 128 GiB)

  • r6id.8xlarge (32 vCPU, 256 GiB)

  • r6id.12xlarge (48 vCPU, 384 GiB)

  • r6id.16xlarge (64 vCPU, 512 GiB)

  • r6id.24xlarge (96 vCPU, 768 GiB)

  • r6id.32xlarge (128 vCPU, 1,024 GiB)

  • r6idn.12xlarge (48 vCPU, 384 GiB)

  • r6idn.16xlarge (64 vCPU, 512 GiB)

  • r6idn.24xlarge (96 vCPU, 768 GiB)

  • r6idn.2xlarge (8 vCPU, 64 GiB)

  • r6idn.32xlarge (128 vCPU, 1,024 GiB)

  • r6idn.4xlarge (16 vCPU, 128 GiB)

  • r6idn.8xlarge (32 vCPU, 256 GiB)

  • r6idn.xlarge (4 vCPU, 32 GiB)

  • r6in.12xlarge (48 vCPU, 384 GiB)

  • r6in.16xlarge (64 vCPU, 512 GiB)

  • r6in.24xlarge (96 vCPU, 768 GiB)

  • r6in.2xlarge (8 vCPU, 64 GiB)

  • r6in.32xlarge (128 vCPU, 1,024 GiB)

  • r6in.4xlarge (16 vCPU, 128 GiB)

  • r6in.8xlarge (32 vCPU, 256 GiB)

  • r6in.xlarge (4 vCPU, 32 GiB)

  • r7iz.xlarge (4 vCPU, 32 GiB)

  • r7iz.2xlarge (8 vCPU, 64 GiB)

  • r7iz.4xlarge (16 vCPU, 128 GiB)

  • r7iz.8xlarge (32 vCPU, 256 GiB)

  • r7iz.12xlarge (48 vCPU, 384 GiB)

  • r7iz.16xlarge (64 vCPU, 512 GiB)

  • r7iz.32xlarge (128 vCPU, 1,024 GiB)

  • r7iz.metal-16xl (64 vCPU, 512 GiB)

  • r7iz.metal-32xl (128 vCPU, 1,024 GiB)

  • z1d.metal (48‡ vCPU, 384 GiB)

  • z1d.xlarge (4 vCPU, 32 GiB)

  • z1d.2xlarge (8 vCPU, 64 GiB)

  • z1d.3xlarge (12 vCPU, 96 GiB)

  • z1d.6xlarge (24 vCPU, 192 GiB)

  • z1d.12xlarge (48 vCPU, 384 GiB)

† These instance types offer 96 logical processors on 48 physical cores. They run on single servers with two physical Intel sockets.

‡ This instance type offers 48 logical processors on 24 physical cores.

Accelerated computing
  • p3.2xlarge (8 vCPU, 61 GiB)

  • p3.8xlarge (32 vCPU, 244 GiB)

  • p3.16xlarge (64 vCPU, 488 GiB)

  • p3dn.24xlarge (96 vCPU, 768 GiB)

  • p4d.24xlarge (96 vCPU, 1,152 GiB)

  • p4de.24xlarge (96 vCPU, 1,152 GiB)

  • p5.48xlarge (192 vCPU, 2,048 GiB)

  • g4dn.xlarge (4 vCPU, 16 GiB)

  • g4dn.2xlarge (8 vCPU, 32 GiB)

  • g4dn.4xlarge (16 vCPU, 64 GiB)

  • g4dn.8xlarge (32 vCPU, 128 GiB)

  • g4dn.12xlarge (48 vCPU, 192 GiB)

  • g4dn.16xlarge (64 vCPU, 256 GiB)

  • g4dn.metal (96 vCPU, 384 GiB)

  • g5.xlarge (4 vCPU, 16 GiB)

  • g5.2xlarge (8 vCPU, 32 GiB)

  • g5.4xlarge (16 vCPU, 64 GiB)

  • g5.8xlarge (32 vCPU, 128 GiB)

  • g5.16xlarge (64 vCPU, 256 GiB)

  • g5.12xlarge (48 vCPU, 192 GiB)

  • g5.24xlarge (96 vCPU, 384 GiB)

  • g5.48xlarge (192 vCPU, 768 GiB)

  • dl1.24xlarge (96 vCPU, 768 GiB)†

† Intel specific; not covered by Nvidia

Support for the GPU instance type software stack is provided by AWS. Ensure that your AWS service quotas can accommodate the desired GPU instance types.
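For example, you can check the relevant EC2 quota with the AWS CLI before adding GPU machine pools. The JMESPath filter below assumes the quota name contains "G and VT"; quota names can change, so verify them in the AWS Service Quotas console:

$ aws service-quotas list-service-quotas --service-code ec2 \
    --query "Quotas[?contains(QuotaName, 'G and VT')].[QuotaName,Value]" --output table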

Compute optimized
  • c5.metal (96 vCPU, 192 GiB)

  • c5.xlarge (4 vCPU, 8 GiB)

  • c5.2xlarge (8 vCPU, 16 GiB)

  • c5.4xlarge (16 vCPU, 32 GiB)

  • c5.9xlarge (36 vCPU, 72 GiB)

  • c5.12xlarge (48 vCPU, 96 GiB)

  • c5.18xlarge (72 vCPU, 144 GiB)

  • c5.24xlarge (96 vCPU, 192 GiB)

  • c5d.metal (96 vCPU, 192 GiB)

  • c5d.xlarge (4 vCPU, 8 GiB)

  • c5d.2xlarge (8 vCPU, 16 GiB)

  • c5d.4xlarge (16 vCPU, 32 GiB)

  • c5d.9xlarge (36 vCPU, 72 GiB)

  • c5d.12xlarge (48 vCPU, 96 GiB)

  • c5d.18xlarge (72 vCPU, 144 GiB)

  • c5d.24xlarge (96 vCPU, 192 GiB)

  • c5a.xlarge (4 vCPU, 8 GiB)

  • c5a.2xlarge (8 vCPU, 16 GiB)

  • c5a.4xlarge (16 vCPU, 32 GiB)

  • c5a.8xlarge (32 vCPU, 64 GiB)

  • c5a.12xlarge (48 vCPU, 96 GiB)

  • c5a.16xlarge (64 vCPU, 128 GiB)

  • c5a.24xlarge (96 vCPU, 192 GiB)

  • c5ad.xlarge (4 vCPU, 8 GiB)

  • c5ad.2xlarge (8 vCPU, 16 GiB)

  • c5ad.4xlarge (16 vCPU, 32 GiB)

  • c5ad.8xlarge (32 vCPU, 64 GiB)

  • c5ad.12xlarge (48 vCPU, 96 GiB)

  • c5ad.16xlarge (64 vCPU, 128 GiB)

  • c5ad.24xlarge (96 vCPU, 192 GiB)

  • c5n.metal (72 vCPU, 192 GiB)

  • c5n.xlarge (4 vCPU, 10.5 GiB)

  • c5n.2xlarge (8 vCPU, 21 GiB)

  • c5n.4xlarge (16 vCPU, 42 GiB)

  • c5n.9xlarge (36 vCPU, 96 GiB)

  • c5n.18xlarge (72 vCPU, 192 GiB)

  • c6a.xlarge (4 vCPU, 8 GiB)

  • c6a.2xlarge (8 vCPU, 16 GiB)

  • c6a.4xlarge (16 vCPU, 32 GiB)

  • c6a.8xlarge (32 vCPU, 64 GiB)

  • c6a.12xlarge (48 vCPU, 96 GiB)

  • c6a.16xlarge (64 vCPU, 128 GiB)

  • c6a.24xlarge (96 vCPU, 192 GiB)

  • c6a.32xlarge (128 vCPU, 256 GiB)

  • c6a.48xlarge (192 vCPU, 384 GiB)

  • c6i.metal (128 vCPU, 256 GiB)

  • c6i.xlarge (4 vCPU, 8 GiB)

  • c6i.2xlarge (8 vCPU, 16 GiB)

  • c6i.4xlarge (16 vCPU, 32 GiB)

  • c6i.8xlarge (32 vCPU, 64 GiB)

  • c6i.12xlarge (48 vCPU, 96 GiB)

  • c6i.16xlarge (64 vCPU, 128 GiB)

  • c6i.24xlarge (96 vCPU, 192 GiB)

  • c6i.32xlarge (128 vCPU, 256 GiB)

  • c6id.metal (128 vCPU, 256 GiB)

  • c6id.xlarge (4 vCPU, 8 GiB)

  • c6id.2xlarge (8 vCPU, 16 GiB)

  • c6id.4xlarge (16 vCPU, 32 GiB)

  • c6id.8xlarge (32 vCPU, 64 GiB)

  • c6id.12xlarge (48 vCPU, 96 GiB)

  • c6id.16xlarge (64 vCPU, 128 GiB)

  • c6id.24xlarge (96 vCPU, 192 GiB)

  • c6id.32xlarge (128 vCPU, 256 GiB)

  • c6in.12xlarge (48 vCPU, 96 GiB)

  • c6in.16xlarge (64 vCPU, 128 GiB)

  • c6in.24xlarge (96 vCPU, 192 GiB)

  • c6in.2xlarge (8 vCPU, 16 GiB)

  • c6in.32xlarge (128 vCPU, 256 GiB)

  • c6in.4xlarge (16 vCPU, 32 GiB)

  • c6in.8xlarge (32 vCPU, 64 GiB)

  • c6in.xlarge (4 vCPU, 8 GiB)

  • m5zn.12xlarge (48 vCPU, 192 GiB)

  • m5zn.2xlarge (8 vCPU, 32 GiB)

  • m5zn.3xlarge (16 vCPU, 48 GiB)

  • m5zn.6xlarge (32 vCPU, 96 GiB)

  • m5zn.xlarge (4 vCPU, 16 GiB)

Storage optimized
  • c5ad.12xlarge (48 vCPU, 96 GiB)

  • c5ad.16xlarge (64 vCPU, 128 GiB)

  • c5ad.24xlarge (96 vCPU, 192 GiB)

  • c5ad.2xlarge (8 vCPU, 16 GiB)

  • c5ad.4xlarge (16 vCPU, 32 GiB)

  • c5ad.8xlarge (32 vCPU, 64 GiB)

  • c5ad.xlarge (4 vCPU, 8 GiB)

  • i3.metal (72† vCPU, 512 GiB)

  • i3.xlarge (4 vCPU, 30.5 GiB)

  • i3.2xlarge (8 vCPU, 61 GiB)

  • i3.4xlarge (16 vCPU, 122 GiB)

  • i3.8xlarge (32 vCPU, 244 GiB)

  • i3.16xlarge (64 vCPU, 488 GiB)

  • i3en.metal (96 vCPU, 768 GiB)

  • i3en.xlarge (4 vCPU, 32 GiB)

  • i3en.2xlarge (8 vCPU, 64 GiB)

  • i3en.3xlarge (12 vCPU, 96 GiB)

  • i3en.6xlarge (24 vCPU, 192 GiB)

  • i3en.12xlarge (48 vCPU, 384 GiB)

  • i3en.24xlarge (96 vCPU, 768 GiB)

  • i4i.xlarge (4 vCPU, 32 GiB)

  • i4i.2xlarge (8 vCPU, 64 GiB)

  • i4i.4xlarge (16 vCPU, 128 GiB)

  • i4i.8xlarge (32 vCPU, 256 GiB)

  • i4i.12xlarge (48 vCPU, 384 GiB)

  • i4i.16xlarge (64 vCPU, 512 GiB)

  • i4i.24xlarge (96 vCPU, 768 GiB)

  • i4i.32xlarge (128 vCPU, 1,024 GiB)

  • i4i.metal (128 vCPU, 1,024 GiB)

  • m5ad.xlarge (4 vCPU, 16 GiB)

  • m5ad.2xlarge (8 vCPU, 32 GiB)

  • m5ad.4xlarge (16 vCPU, 64 GiB)

  • m5ad.8xlarge (32 vCPU, 128 GiB)

  • m5ad.12xlarge (48 vCPU, 192 GiB)

  • m5ad.16xlarge (64 vCPU, 256 GiB)

  • m5ad.24xlarge (96 vCPU, 384 GiB)

  • m5d.xlarge (4 vCPU, 16 GiB)

  • m5d.2xlarge (8 vCPU, 32 GiB)

  • m5d.4xlarge (16 vCPU, 64 GiB)

  • m5d.8xlarge (32 vCPU, 128 GiB)

  • m5d.12xlarge (48 vCPU, 192 GiB)

  • m5d.16xlarge (64 vCPU, 256 GiB)

  • m5d.24xlarge (96 vCPU, 384 GiB)

† This instance type offers 72 logical processors on 36 physical cores.

Virtual instance types initialize faster than ".metal" instance types.

High memory
  • u-3tb1.56xlarge (224 vCPU, 3,072 GiB)

  • u-6tb1.56xlarge (224 vCPU, 6,144 GiB)

  • u-6tb1.112xlarge (448 vCPU, 6,144 GiB)

  • u-6tb1.metal (448 vCPU, 6,144 GiB)

  • u-9tb1.112xlarge (448 vCPU, 9,216 GiB)

  • u-9tb1.metal (448 vCPU, 9,216 GiB)

  • u-12tb1.112xlarge (448 vCPU, 12,288 GiB)

  • u-12tb1.metal (448 vCPU, 12,288 GiB)

  • u-18tb1.metal (448 vCPU, 18,432 GiB)

  • u-24tb1.metal (448 vCPU, 24,576 GiB)

  • u-24tb1.112xlarge (448 vCPU, 24,576 GiB)

Network optimized
  • c5n.xlarge (4 vCPU, 10.5 GiB)

  • c5n.2xlarge (8 vCPU, 21 GiB)

  • c5n.4xlarge (16 vCPU, 42 GiB)

  • c5n.9xlarge (36 vCPU, 96 GiB)

  • c5n.18xlarge (72 vCPU, 192 GiB)

  • m5dn.xlarge (4 vCPU, 16 GiB)

  • m5dn.2xlarge (8 vCPU, 32 GiB)

  • m5dn.4xlarge (16 vCPU, 64 GiB)

  • m5dn.8xlarge (32 vCPU, 128 GiB)

  • m5dn.12xlarge (48 vCPU, 192 GiB)

  • m5dn.16xlarge (64 vCPU, 256 GiB)

  • m5dn.24xlarge (96 vCPU, 384 GiB)

  • m5n.12xlarge (48 vCPU, 192 GiB)

  • m5n.16xlarge (64 vCPU, 256 GiB)

  • m5n.24xlarge (96 vCPU, 384 GiB)

  • m5n.xlarge (4 vCPU, 16 GiB)

  • m5n.2xlarge (8 vCPU, 32 GiB)

  • m5n.4xlarge (16 vCPU, 64 GiB)

  • m5n.8xlarge (32 vCPU, 128 GiB)

Additional resources

Regions and availability zones

The following AWS regions are currently available for Red Hat OpenShift 4 and are supported for Red Hat OpenShift Service on AWS.

Regions in China are not supported, regardless of their support on OpenShift 4.

For GovCloud (US) regions, you must submit an Access request for Red Hat OpenShift Service on AWS (ROSA) FedRAMP.

GovCloud (US) regions are only supported on ROSA Classic clusters.

AWS Regions
  • us-east-1 (N. Virginia)

  • us-east-2 (Ohio)

  • us-west-1 (N. California)

  • us-west-2 (Oregon)

  • af-south-1 (Cape Town, AWS opt-in required)

  • ap-east-1 (Hong Kong, AWS opt-in required)

  • ap-south-2 (Hyderabad, AWS opt-in required)

  • ap-southeast-3 (Jakarta, AWS opt-in required)

  • ap-southeast-4 (Melbourne, AWS opt-in required)

  • ap-south-1 (Mumbai)

  • ap-northeast-3 (Osaka)

  • ap-northeast-2 (Seoul)

  • ap-southeast-1 (Singapore)

  • ap-southeast-2 (Sydney)

  • ap-northeast-1 (Tokyo)

  • ca-central-1 (Central Canada)

  • eu-central-1 (Frankfurt)

  • eu-west-1 (Ireland)

  • eu-west-2 (London)

  • eu-south-1 (Milan, AWS opt-in required)

  • eu-west-3 (Paris)

  • me-south-1 (Bahrain, AWS opt-in required)

  • me-central-1 (UAE, AWS opt-in required)

  • sa-east-1 (São Paulo)

  • us-gov-east-1 (AWS GovCloud - US-East)

  • us-gov-west-1 (AWS GovCloud - US-West)

Multiple availability zone clusters can only be deployed in regions with at least 3 availability zones. For more information, see the Regions and Availability Zones section in the AWS documentation.

Each new Red Hat OpenShift Service on AWS cluster is installed within an installer-created or preexisting Virtual Private Cloud (VPC) in a single region, with the option to deploy into a single availability zone (Single-AZ) or across multiple availability zones (Multi-AZ). This provides cluster-level network and resource isolation, and enables cloud-provider VPC settings, such as VPN connections and VPC peering. Persistent volumes (PVs) are backed by Amazon Elastic Block Store (Amazon EBS), and are specific to the availability zone in which they are provisioned. Persistent volume claims (PVCs) do not bind to a volume until the associated pod resource is assigned to a specific availability zone, which prevents unschedulable pods. Availability zone-specific resources are only usable by resources in the same availability zone.

The region and the choice of single or multiple availability zones cannot be changed after a cluster has been deployed.

Local Zones

Red Hat OpenShift Service on AWS supports the use of AWS Local Zones, which are metropolis-centralized availability zones where customers can place latency-sensitive application workloads. Local Zones are extensions of AWS Regions that have their own internet connection. For more information about AWS Local Zones, see the AWS documentation How Local Zones work.

For steps to enable AWS Local Zones and to add a Local Zone to a machine pool, see Configuring Local Zones for machine pools.

Service Level Agreement (SLA)

Any SLAs for the service itself are defined in the Red Hat Enterprise Agreement Appendix 4 (Online Subscription Services).

Limited support status

When a cluster transitions to a Limited Support status, Red Hat no longer proactively monitors the cluster, the SLA is no longer applicable, and credits requested against the SLA are denied. It does not mean that you no longer have product support. In some cases, the cluster can return to a fully-supported status if you remediate the violating factors. However, in other cases, you might have to delete and recreate the cluster.

A cluster might move to a Limited Support status for many reasons, including the following scenarios:

If you do not upgrade a cluster to a supported version before the end-of-life date

Red Hat does not make any runtime or SLA guarantees for versions after their end-of-life date. To receive continued support, upgrade the cluster to a supported version prior to the end-of-life date. If you do not upgrade the cluster prior to the end-of-life date, the cluster transitions to a Limited Support status until it is upgraded to a supported version.

Red Hat provides commercially reasonable support to upgrade from an unsupported version to a supported version. However, if a supported upgrade path is no longer available, you might have to create a new cluster and migrate your workloads.

If you remove or replace any native Red Hat OpenShift Service on AWS components or any other component that is installed and managed by Red Hat

If cluster administrator permissions were used, Red Hat is not responsible for any of your or your authorized users’ actions, including those that affect infrastructure services, service availability, or data loss. If Red Hat detects any such actions, the cluster might transition to a Limited Support status. Red Hat notifies you of the status change and you should either revert the action or create a support case to explore remediation steps that might require you to delete and recreate the cluster.

If you have questions about a specific action that might cause a cluster to move to a Limited Support status or need further assistance, open a support ticket.

Support

Red Hat OpenShift Service on AWS includes Red Hat Premium Support, which can be accessed by using the Red Hat Customer Portal.

See Red Hat OpenShift Service on AWS SLAs for support response times.

AWS support is subject to a customer’s existing support contract with AWS.

Logging

Red Hat OpenShift Service on AWS provides optional integrated log forwarding to Amazon Web Services (AWS) CloudWatch.

Cluster audit logging

Cluster audit logs are available through AWS CloudWatch, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a support case.

Application logging

Application logs sent to STDOUT are collected by Fluentd and forwarded to AWS CloudWatch through the cluster logging stack, if it is installed.
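If the cluster logging stack is installed, forwarding to CloudWatch is typically configured with a ClusterLogForwarder resource. The following is a minimal sketch only; the exact API version, credential handling (here an assumed secret named cloudwatch-credentials), and field names depend on the logging version you install:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType
      region: us-east-1
    secret:
      name: cloudwatch-credentials
  pipelines:
  - name: to-cloudwatch
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - cw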

Monitoring

This section provides information about the service definition for Red Hat OpenShift Service on AWS monitoring.

Cluster metrics

Red Hat OpenShift Service on AWS clusters come with an integrated Prometheus stack for cluster monitoring, including CPU, memory, and network-based metrics. This is accessible through the web console. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics provided by a Red Hat OpenShift Service on AWS user.

Cluster status notification

Red Hat communicates the health and status of Red Hat OpenShift Service on AWS clusters through a combination of a cluster dashboard available in OpenShift Cluster Manager and email notifications sent to the email address of the contact that originally deployed the cluster, as well as to any additional contacts specified by the customer.

Networking

This section provides information about the service definition for Red Hat OpenShift Service on AWS networking.

Custom domains for applications

Starting with Red Hat OpenShift Service on AWS 4.14, the Custom Domain Operator is deprecated. To manage Ingress in Red Hat OpenShift Service on AWS 4.14 or later, use the Ingress Operator. The functionality is unchanged for Red Hat OpenShift Service on AWS 4.13 and earlier versions.

To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the Route Details page after a route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster’s router.
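As an illustration, assuming a route named hello for a service named hello-svc and a custom hostname of www.example.com, the route is created as shown below. You would then create a CNAME record at your DNS provider mapping www.example.com to the canonical router hostname shown on the Route Details page, for example router-default.apps.my-cluster.abcd.p1.openshiftapps.com (illustrative value only):

$ oc create route edge hello --service=hello-svc --hostname=www.example.com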

Domain validated certificates

Red Hat OpenShift Service on AWS includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two separate TLS wildcard certificates that are provided and installed on each cluster: one is for the web console and route default hostnames, and the other is for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, such as the internal API endpoint, use TLS certificates signed by the cluster’s built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.

Custom certificate authorities for builds

Red Hat OpenShift Service on AWS supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.

Load balancers

Red Hat OpenShift Service on AWS uses up to five different load balancers:

  • An internal control plane load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.

  • An external control plane load balancer that is used for accessing the OpenShift and Kubernetes APIs. This load balancer can be disabled in OpenShift Cluster Manager. If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal control plane load balancer.

  • An external control plane load balancer for Red Hat that is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from allowlisted bastion hosts.

  • A default external router/ingress load balancer that is the default application load balancer, denoted by apps in the URL. The default load balancer can be configured in OpenShift Cluster Manager to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.

  • Optional: A secondary router/ingress load balancer that is a secondary application load balancer, denoted by apps2 in the URL. The secondary load balancer can be configured in OpenShift Cluster Manager to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. If a Label match is configured for this router load balancer, then only application routes matching this label are exposed on this router load balancer; otherwise, all application routes are also exposed on this router load balancer.

  • Optional: Load balancers for services. These load balancers can be mapped to a service running on Red Hat OpenShift Service on AWS to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports. Each AWS account has a quota that limits the number of Classic Load Balancers that can be used within each cluster.

Cluster ingress

Project administrators can add route annotations for many different purposes, including ingress control through IP allow-listing.
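For example, the haproxy.router.openshift.io/ip_whitelist annotation restricts a route to the listed source CIDRs. The route name and CIDRs below are placeholders:

$ oc annotate route my-route haproxy.router.openshift.io/ip_whitelist="192.0.2.0/24 198.51.100.17"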

Ingress policies can also be changed by using NetworkPolicy objects, which leverage the ovs-networkpolicy plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
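A minimal sketch of such a policy, which allows ingress to pods labeled app=web only from pods in the same namespace labeled app=frontend, looks like the following. The labels are placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
  policyTypes:
  - Ingress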

All cluster ingress traffic will go through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

Cluster egress

Pod egress traffic control through EgressNetworkPolicy objects can be used to prevent or limit outbound traffic in Red Hat OpenShift Service on AWS.
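The following is a hedged example of an EgressNetworkPolicy that allows traffic to one external CIDR and denies all other egress from the project. The CIDR is a placeholder, and on clusters using the OVN-Kubernetes network plugin the equivalent object is an EgressFirewall:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0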

Public outbound traffic from the control plane and infrastructure nodes is required to maintain cluster image security and cluster monitoring. This requires that the 0.0.0.0/0 route belongs only to the Internet gateway; it is not possible to route this range over private connections.

OpenShift 4 clusters use NAT gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each availability zone a cluster is deployed into receives a distinct NAT gateway, so up to 3 unique static IP addresses can exist for cluster egress traffic. Any traffic that remains inside the cluster, or that does not go out to the public Internet, does not pass through the NAT gateway and has a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic; therefore, customers must not rely on allowlisting individual IP addresses when accessing private resources.

Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:

$ oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"

Cloud network configuration

Red Hat OpenShift Service on AWS allows for the configuration of a private network connection through AWS-managed technologies, such as:

  • VPN connections

  • VPC peering

  • Transit Gateway

  • Direct Connect

Red Hat site reliability engineers (SREs) do not monitor private network connections. Monitoring these connections is the responsibility of the customer.

DNS forwarding

For Red Hat OpenShift Service on AWS clusters that have a private cloud network configuration, a customer can specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
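A sketch of the corresponding DNS operator configuration follows, assuming an internal zone example.corp served by a DNS server at 10.0.0.10 that is reachable over the private connection; these values are placeholders:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: corp-forwarder
    zones:
    - example.corp
    forwardPlugin:
      upstreams:
      - 10.0.0.10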

Network verification

Network verification checks run automatically when you deploy a Red Hat OpenShift Service on AWS cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment.

You can also run the network verification checks manually to validate the configuration for an existing cluster.
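For example, a manual verification run can be triggered with the ROSA CLI. The cluster name is a placeholder, and flags can vary by rosa version:

$ rosa verify network --cluster=my-cluster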

Additional resources

Storage

This section provides information about the service definition for Red Hat OpenShift Service on AWS storage.

Encrypted-at-rest OS and node storage

Control plane, infrastructure, and worker nodes use encrypted-at-rest Amazon Elastic Block Store (Amazon EBS) storage.

Encrypted-at-rest PV

EBS volumes that are used for PVs are encrypted-at-rest by default.

Block storage (RWO)

Persistent volumes (PVs) are backed by Amazon Elastic Block Store (Amazon EBS), which uses the ReadWriteOnce (RWO) access mode.

PVs can be attached only to a single node at a time and are specific to the availability zone in which they were provisioned. However, PVs can be attached to any node in the availability zone.

Each cloud provider has its own limits for how many PVs can be attached to a single node. See AWS instance type limits for details.
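The following is a minimal PVC sketch that requests RWO block storage. The gp3-csi storage class name is an assumption; replace it with a storage class available on your cluster (oc get storageclass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi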

Shared Storage (RWX)

The AWS CSI Driver can be used to provide RWX support for Red Hat OpenShift Service on AWS. A community Operator is provided to simplify setup. See Amazon Elastic File Storage Setup for OpenShift Dedicated and Red Hat OpenShift Service on AWS for details.

Platform

This section provides information about the service definition for the Red Hat OpenShift Service on AWS (ROSA) platform.

Cluster backup policy

Red Hat does not provide a backup method for ROSA clusters with STS, which is the default. It is critical that customers have a backup plan for their applications and application data. The table below only applies to clusters created with IAM user credentials.

Application and application data backups are not a part of the Red Hat OpenShift Service on AWS service. The following table outlines the cluster backup policy.

Component | Snapshot frequency | Retention | Notes

Full object store backup | Daily | 7 days | This is a full backup of all Kubernetes objects like etcd. No persistent volumes (PVs) are backed up in this backup schedule.

Full object store backup | Weekly | 30 days | This is a full backup of all Kubernetes objects like etcd. No PVs are backed up in this backup schedule.

Full object store backup | Hourly | 24 hours | This is a full backup of all Kubernetes objects like etcd. No PVs are backed up in this backup schedule.

Node root volume | Never | N/A | Nodes are considered to be short-term. Nothing critical should be stored on a node’s root volume.

Autoscaling

Node autoscaling is available on Red Hat OpenShift Service on AWS. You can configure the autoscaler option to automatically scale the number of machines in a cluster.
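For example, autoscaling can be enabled on a machine pool with the ROSA CLI. The names and replica bounds below are placeholders:

$ rosa create machinepool --cluster=my-cluster --name=scaling-pool \
    --enable-autoscaling --min-replicas=2 --max-replicas=6 --instance-type=m5.xlarge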

Daemonsets

Customers can create and run daemonsets on Red Hat OpenShift Service on AWS. To restrict daemonsets to only running on worker nodes, use the following nodeSelector:

...
spec:
  nodeSelector:
    role: worker
...
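A fuller sketch of a daemonset restricted to worker nodes follows. It assumes the role: worker label from the snippet above is present on your worker nodes (for example, applied through a machine pool label) and uses a placeholder image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        role: worker
      containers:
      - name: agent
        image: registry.example.com/node-agent:latest
        resources:
          requests:
            cpu: 50m
            memory: 64Mi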

Multiple availability zone

In a multiple availability zone cluster, control plane nodes are distributed across availability zones and at least one worker node is required in each availability zone.

Node labels

Custom node labels are created by Red Hat during node creation and cannot be changed on Red Hat OpenShift Service on AWS clusters at this time. However, custom labels are supported when creating new machine pools.
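For example, a machine pool can be created with a custom label. The label key and value are placeholders:

$ rosa create machinepool --cluster=my-cluster --name=labeled-pool --replicas=3 --labels="workload=batch"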

OpenShift version

Red Hat OpenShift Service on AWS is run as a service and is kept up to date with the latest OpenShift Container Platform version. Upgrade scheduling to the latest version is available.

Upgrades

Upgrades can be scheduled using the ROSA CLI, rosa, or through OpenShift Cluster Manager.

See the Red Hat OpenShift Service on AWS Life Cycle for more information on the upgrade policy and procedures.
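As an example, you can list available upgrades and schedule one with the ROSA CLI. The version and timestamps below are placeholders:

$ rosa list upgrades --cluster=my-cluster
$ rosa upgrade cluster --cluster=my-cluster --version=4.14.10 \
    --schedule-date=2024-06-15 --schedule-time=08:00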

Windows Containers

Red Hat OpenShift support for Windows Containers is not available on Red Hat OpenShift Service on AWS at this time.

Container engine

Red Hat OpenShift Service on AWS runs on OpenShift 4 and uses CRI-O as the only available container engine.

Operating system

Red Hat OpenShift Service on AWS runs on OpenShift 4 and uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system for all control plane and worker nodes.

Red Hat Operator support

Red Hat workloads typically refer to Red Hat-provided Operators made available through Operator Hub. Red Hat workloads are not managed by the Red Hat SRE team, and must be deployed on worker nodes. These Operators may require additional Red Hat subscriptions, and may incur additional cloud infrastructure costs. Examples of these Red Hat-provided Operators are:

  • Red Hat Quay

  • Red Hat Advanced Cluster Management

  • Red Hat Advanced Cluster Security

  • Red Hat OpenShift Service Mesh

  • OpenShift Serverless

  • Red Hat OpenShift Logging

  • Red Hat OpenShift Pipelines

Kubernetes Operator support

All Operators listed in the OperatorHub marketplace should be available for installation. These Operators are considered customer workloads, and are not monitored by Red Hat SRE.

Security

This section provides information about the service definition for Red Hat OpenShift Service on AWS security.

Authentication provider

Authentication for the cluster can be configured by using either OpenShift Cluster Manager or the ROSA CLI, rosa. ROSA is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. The use of multiple identity providers provisioned at the same time is supported. The following identity providers are supported (an example rosa create idp command follows the list):

  • GitHub or GitHub Enterprise

  • GitLab

  • Google

  • LDAP

  • OpenID Connect

  • htpasswd
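As an example, the following commands add a GitHub identity provider and an htpasswd identity provider. The organization, user, and secret values are placeholders:

$ rosa create idp --cluster=my-cluster --type=github \
    --client-id=<client-id> --client-secret=<client-secret> --organizations=my-github-org
$ rosa create idp --cluster=my-cluster --type=htpasswd --username=myuser --password=<password>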

Privileged containers

Privileged containers are available for users with the cluster-admin role. Usage of privileged containers as cluster-admin is subject to the responsibilities and exclusion notes in the Red Hat Enterprise Agreement Appendix 4 (Online Subscription Services).

Customer administrator user

In addition to normal users, Red Hat OpenShift Service on AWS provides access to a ROSA-specific group called dedicated-admin. Users on the cluster who are members of the dedicated-admin group:

  • Have administrator access to all customer-created projects on the cluster.

  • Can manage resource quotas and limits on the cluster.

  • Can add and manage NetworkPolicy objects.

  • Are able to view information about specific nodes and PVs in the cluster, including scheduler information.

  • Can access the reserved dedicated-admin project on the cluster, which allows for the creation of service accounts with elevated privileges and also gives the ability to update default limits and quotas for projects on the cluster.

  • Can install Operators from OperatorHub and perform all verbs in all *.operators.coreos.com API groups.

Cluster administration role

The administrator of Red Hat OpenShift Service on AWS has default access to the cluster-admin role for the organization’s cluster. While logged in to an account with the cluster-admin role, users have increased permissions to run privileged security contexts.

Project self-service

By default, all users have the ability to create, update, and delete their projects. This can be restricted if a member of the dedicated-admin group removes the self-provisioner role from authenticated users:

$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

Restrictions can be reverted by applying:

$ oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth

Regulatory compliance

See Understanding process and security for ROSA for the latest compliance information.

Network security

With Red Hat OpenShift Service on AWS, AWS provides standard DDoS protection on all load balancers, called AWS Shield. This provides 95% protection against the most commonly used level 3 and 4 attacks on all the public-facing load balancers used for Red Hat OpenShift Service on AWS. To provide additional protection, a 10-second timeout is applied to HTTP requests coming to the haproxy router; if a response is not received within that time, the connection is closed.

etcd encryption

In Red Hat OpenShift Service on AWS, the control plane storage is encrypted at rest by default and this includes encryption of the etcd volumes. This storage-level encryption is provided through the storage layer of the cloud provider.

You can also enable etcd encryption, which encrypts the key values in etcd, but not the keys. If you enable etcd encryption, the following Kubernetes API server and OpenShift API server resources are encrypted:

  • Secrets

  • Config maps

  • Routes

  • OAuth access tokens

  • OAuth authorize tokens

The etcd encryption feature is not enabled by default and it can be enabled only at cluster installation time. Even with etcd encryption enabled, the etcd key values are accessible to anyone with access to the control plane nodes or cluster-admin privileges.

By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Red Hat recommends that you enable etcd encryption only if you specifically require it for your use case.
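Because the feature can be enabled only at installation time, it is passed as a flag when the cluster is created, for example (cluster name is a placeholder):

$ rosa create cluster --cluster-name=my-cluster --sts --etcd-encryption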

Additional resources