Account management

Billing

Each OpenShift Dedicated cluster requires a minimum annual base cluster purchase. Two billing options are available for each cluster: Standard and Customer Cloud Subscription (CCS).

Standard OpenShift Dedicated clusters are deployed into their own cloud infrastructure accounts, each owned by Red Hat. Red Hat is responsible for these accounts, and cloud infrastructure costs are paid directly by Red Hat. The customer only pays the Red Hat subscription costs.

In the CCS model, the customer pays the cloud infrastructure provider directly for cloud costs and the cloud infrastructure account is part of a customer’s Organization, with specific access granted to Red Hat. The customer will have restricted access to this account, but will be able to view billing and usage information. In this model, the customer pays Red Hat for the CCS subscription and pays the cloud provider for the cloud costs. It is the customer’s responsibility to pre-purchase or provide Reserved Instance (RI) compute instances to ensure lower cloud infrastructure costs.

Additional resources can be purchased for an OpenShift Dedicated Cluster, including:

  • Additional nodes (can be different types and sizes through the use of machine pools)

  • Middleware (JBoss EAP, JBoss Fuse, and so on) - additional pricing based on specific middleware component

  • Additional storage in increments of 500 GB (standard only; 100 GB included)

  • Additional 12 TiB Network I/O (standard only; 12 TB included)

  • Load Balancers for Services, available in bundles of 4; these enable non-HTTP/SNI traffic or non-standard ports (standard only)

Cluster self-service

Customers can create, scale, and delete their clusters from OpenShift Cluster Manager (OCM), provided that they have pre-purchased the necessary subscriptions.

Actions available in OpenShift Cluster Manager (OCM) must not be directly performed from within the cluster as this might cause adverse effects, including having all actions automatically reverted.

Cloud providers

OpenShift Dedicated offers OpenShift Container Platform clusters as a managed service on the following cloud providers:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

Compute

Single availability zone clusters require a minimum of 2 worker nodes for Customer Cloud Subscription (CCS) clusters. A minimum of 4 worker nodes is required for standard clusters; these 4 worker nodes are included in the base subscription.

Multiple availability zone clusters require a minimum of 3 worker nodes for Customer Cloud Subscription (CCS) clusters, 1 deployed to each of 3 availability zones. A minimum of 9 worker nodes is required for standard clusters. These 9 worker nodes are included in the base subscription, and additional nodes must be purchased in multiples of 3 to maintain proper node distribution.

Worker nodes must all be the same type and size within a single OpenShift Dedicated cluster.

The default machine pool node type and size cannot be changed after the cluster has been created.

Control plane and infrastructure nodes are also provided by Red Hat. There are at least 3 control plane nodes that handle etcd and API-related workloads. There are at least 2 infrastructure nodes that handle metrics, routing, the web console, and other workloads. Control plane and infrastructure nodes are strictly for Red Hat workloads to operate the service, and customer workloads are not permitted to be deployed on these nodes.

Approximately 1 vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This is necessary to run processes required by the underlying platform. This includes system daemons such as udev, kubelet, container runtime, and so on, and also accounts for kernel reservations. OpenShift Container Platform core systems such as audit log aggregation, metrics collection, DNS, image registry, SDN, and so on might consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed might vary based on usage.

AWS compute types

OpenShift Dedicated offers the following worker node types and sizes on AWS:

General purpose
  • M5.xlarge (4 vCPU, 16 GiB)

  • M5.2xlarge (8 vCPU, 32 GiB)

  • M5.4xlarge (16 vCPU, 64 GiB)

  • M5.8xlarge (32 vCPU, 128 GiB)

  • M5.12xlarge (48 vCPU, 192 GiB)

  • M5.16xlarge (64 vCPU, 256 GiB)

  • M5.24xlarge (96 vCPU, 384 GiB)

Memory-optimized
  • R5.xlarge (4 vCPU, 32 GiB)

  • R5.2xlarge (8 vCPU, 64 GiB)

  • R5.4xlarge (16 vCPU, 128 GiB)

  • R5.8xlarge (32 vCPU, 256 GiB)

  • R5.12xlarge (48 vCPU, 384 GiB)

  • R5.16xlarge (64 vCPU, 512 GiB)

  • R5.24xlarge (96 vCPU, 768 GiB)

Compute-optimized
  • C5.2xlarge (8 vCPU, 16 GiB)

  • C5.4xlarge (16 vCPU, 32 GiB)

  • C5.9xlarge (36 vCPU, 72 GiB)

  • C5.12xlarge (48 vCPU, 96 GiB)

  • C5.18xlarge (72 vCPU, 144 GiB)

  • C5.24xlarge (96 vCPU, 192 GiB)

Google Cloud compute types

OpenShift Dedicated offers the following worker node types and sizes on Google Cloud. These custom machine types are chosen to provide CPU and memory capacities that match the instance types offered on other cloud providers:

General purpose
  • custom-4-16384 (4 vCPU, 16 GiB)

  • custom-8-32768 (8 vCPU, 32 GiB)

  • custom-16-65536 (16 vCPU, 64 GiB)

  • custom-32-131072 (32 vCPU, 128 GiB)

  • custom-48-196608 (48 vCPU, 192 GiB)

  • custom-64-262144 (64 vCPU, 256 GiB)

  • custom-96-393216 (96 vCPU, 384 GiB)

Memory-optimized
  • custom-4-32768-ext (4 vCPU, 32 GiB)

  • custom-8-65536-ext (8 vCPU, 64 GiB)

  • custom-16-131072-ext (16 vCPU, 128 GiB)

  • custom-32-262144 (32 vCPU, 256 GiB)

  • custom-48-393216 (48 vCPU, 384 GiB)

  • custom-64-524288 (64 vCPU, 512 GiB)

  • custom-96-786432 (96 vCPU, 768 GiB)

Compute-optimized
  • custom-8-16384 (8 vCPU, 16 GiB)

  • custom-16-32768 (16 vCPU, 32 GiB)

  • custom-36-73728 (36 vCPU, 72 GiB)

  • custom-48-98304 (48 vCPU, 96 GiB)

  • custom-72-147456 (72 vCPU, 144 GiB)

  • custom-96-196608 (96 vCPU, 192 GiB)

Regions and availability zones

The following AWS regions are supported by OpenShift Container Platform 4 and are supported for OpenShift Dedicated:

  • af-south-1 (Cape Town, AWS opt-in required)

  • ap-east-1 (Hong Kong, AWS opt-in required)

  • ap-northeast-1 (Tokyo)

  • ap-northeast-2 (Seoul)

  • ap-south-1 (Mumbai)

  • ap-southeast-1 (Singapore)

  • ap-southeast-2 (Sydney)

  • ca-central-1 (Central Canada)

  • eu-central-1 (Frankfurt)

  • eu-north-1 (Stockholm)

  • eu-south-1 (Milan, AWS opt-in required)

  • eu-west-1 (Ireland)

  • eu-west-2 (London)

  • eu-west-3 (Paris)

  • me-south-1 (Bahrain, AWS opt-in required)

  • sa-east-1 (São Paulo)

  • us-east-1 (N. Virginia)

  • us-east-2 (Ohio)

  • us-west-1 (N. California)

  • us-west-2 (Oregon)

The following Google Cloud regions are currently supported:

  • asia-east1, Changhua County, Taiwan

  • asia-east2, Hong Kong

  • asia-northeast1, Tokyo, Japan

  • asia-northeast2, Osaka, Japan

  • asia-northeast3, Seoul, Korea

  • asia-south1, Mumbai, India

  • asia-southeast1, Jurong West, Singapore

  • asia-southeast2, Jakarta, Indonesia

  • europe-north1, Hamina, Finland

  • europe-west1, St. Ghislain, Belgium

  • europe-west2, London, England, UK

  • europe-west3, Frankfurt, Germany

  • europe-west4, Eemshaven, Netherlands

  • europe-west6, Zürich, Switzerland

  • northamerica-northeast1, Montréal, Québec, Canada

  • southamerica-east1, Osasco (São Paulo), Brazil

  • us-central1, Council Bluffs, Iowa, USA

  • us-east1, Moncks Corner, South Carolina, USA

  • us-east4, Ashburn, Northern Virginia, USA

  • us-west1, The Dalles, Oregon, USA

  • us-west2, Los Angeles, California, USA

  • us-west3, Salt Lake City, Utah, USA

  • us-west4, Las Vegas, Nevada, USA

Multi-AZ clusters can only be deployed in regions with at least 3 availability zones (see AWS and Google Cloud).

Each new OpenShift Dedicated cluster is installed within a dedicated Virtual Private Cloud (VPC) in a single Region, with the option to deploy into a single Availability Zone (Single-AZ) or across multiple Availability Zones (Multi-AZ). This provides cluster-level network and resource isolation, and enables cloud-provider VPC settings, such as VPN connections and VPC Peering. Persistent volumes are backed by cloud block storage and are specific to the availability zone in which they are provisioned. Persistent volumes do not bind to a volume until the associated pod resource is assigned into a specific availability zone in order to prevent unschedulable pods. Availability zone-specific resources are only usable by resources in the same availability zone.

The region and the choice of single or multi availability zone cannot be changed once a cluster has been deployed.

Service level agreement (SLA)

Any SLAs for the service itself are defined in Appendix 4 (Online Subscription Services) of the Red Hat Enterprise Agreement.

Limited support status

You must not remove or replace any native OpenShift Dedicated components or any other component installed and managed by Red Hat. If cluster administration rights are used, Red Hat is not responsible for any actions taken by you or any of your authorized users, including actions that affect infrastructure services, service availability, or data loss.

If any actions that affect infrastructure services, service availability, or data loss are detected, Red Hat notifies the customer and requests that the customer either revert the action or open a support case to work with Red Hat to remedy the issue.

Support

OpenShift Dedicated includes Red Hat Premium Support, which can be accessed by using the Red Hat Customer Portal.

See the Scope of Coverage Page for more details on what is covered with included support for OpenShift Dedicated.

See OpenShift Dedicated SLAs for support response times.

Logging

OpenShift Dedicated provides optional integrated log forwarding to Amazon CloudWatch.

Cluster audit logging

Cluster audit logs are available through Amazon CloudWatch, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a support case. Audit log requests must specify a date and time range not to exceed 21 days. When requesting audit logs, customers should be aware that audit logs can be many gigabytes per day in size.

Application logging

Application logs sent to STDOUT are collected by Fluentd and forwarded to Amazon CloudWatch through the cluster logging stack, if it is installed.

Monitoring

Cluster metrics

OpenShift Dedicated clusters come with an integrated Prometheus/Grafana stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible through the web console and can also be used to view cluster-level status and capacity/usage through a Grafana dashboard. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics provided by an OpenShift Dedicated user.
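As a sketch of metrics-driven autoscaling, a HorizontalPodAutoscaler that scales a hypothetical frontend deployment on CPU utilization might look like the following; the deployment name, namespace, and thresholds are illustrative, not part of the service definition:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
  namespace: my-project          # hypothetical project
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend               # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75   # scale out above 75% average CPU
```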

Cluster status notification

Red Hat communicates the health and status of OpenShift Dedicated clusters through a combination of a cluster dashboard available in OpenShift Cluster Manager (OCM), and email notifications sent to the email address of the contact that originally deployed the cluster.

Networking

Custom domains for applications

To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the Route Details page after a Route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster’s router.
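For example, a route requesting a custom hostname might look like the following sketch (the hostname, service, and project names are placeholders); the CNAME record at your DNS provider must then map the custom domain to the canonical router hostname shown on the Route Details page:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-app
  namespace: my-project
spec:
  host: www.example.com          # custom hostname served by this route
  to:
    kind: Service
    name: hello-app              # hypothetical backing service
  tls:
    termination: edge
```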

Custom domains for cluster services

Custom domains and subdomains are not available for the platform service routes, for example, the API or web console routes, or for the default application routes.

Domain validated certificates

OpenShift Dedicated includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two separate TLS wildcard certificates that are provided and installed on each cluster: one for the web console and route default hostnames, and the second for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, for example, the internal API endpoint, use TLS certificates signed by the cluster’s built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.

Custom certificate authorities for builds

OpenShift Dedicated supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.
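In plain OpenShift, this is typically wired up as a ConfigMap in the openshift-config namespace that is referenced from the cluster Image configuration; the following is a sketch in which the ConfigMap name, registry hostname, and certificate body are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-cas                 # placeholder ConfigMap name
  namespace: openshift-config
data:
  registry.example.com: |            # key is the registry hostname to trust
    -----BEGIN CERTIFICATE-----
    ...custom CA certificate...
    -----END CERTIFICATE-----
---
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  additionalTrustedCA:
    name: registry-cas               # references the ConfigMap above
```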

Load balancers

OpenShift Dedicated uses up to 5 different load balancers:

  • Internal control plane load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.

  • External control plane load balancer that is used for accessing the OpenShift Container Platform and Kubernetes APIs. This load balancer can be disabled in OpenShift Cluster Manager (OCM). If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal control plane load balancer.

  • External control plane load balancer for Red Hat that is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from allowlisted bastion hosts.

  • Default router/ingress load balancer that is the default application load balancer, denoted by apps in the URL. The default load balancer can be configured in OpenShift Cluster Manager (OCM) to be either publicly accessible over the internet, or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.

  • Optional: Secondary router/ingress load balancer that is a secondary application load balancer, denoted by apps2 in the URL. The secondary load balancer can be configured in OpenShift Cluster Manager (OCM) to be either publicly accessible over the internet, or only privately accessible over a pre-existing private connection. If a 'Label match' is configured for this router load balancer, then only application routes matching this label will be exposed on this router load balancer, otherwise all application routes are also exposed on this router load balancer.

  • Optional: Load balancers for services that can be mapped to a service running on OpenShift Dedicated to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports. These can be purchased in groups of 4 for standard clusters, or they can be provisioned without charge in Customer Cloud Subscription (CCS) clusters; however, each AWS account has a quota that limits the number of Classic Load Balancers that can be used within each cluster.
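A service-level load balancer is requested through a standard Kubernetes Service of type LoadBalancer, for example (names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-ingress
  namespace: my-project
spec:
  type: LoadBalancer          # provisions a cloud load balancer for this service
  selector:
    app: tcp-server           # hypothetical pod label
  ports:
  - protocol: TCP
    port: 5432                # non-standard, non-HTTP port
    targetPort: 5432
```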

Network usage

For standard OpenShift Dedicated clusters, network usage is measured based on data transfer between inbound, VPC peering, VPN, and AZ traffic. On a standard OpenShift Dedicated base cluster, 12 TB of network I/O is provided. Additional network I/O can be purchased in 12 TB increments. For CCS OpenShift Dedicated clusters, network usage is not monitored, and is billed directly by the cloud provider.

Cluster ingress

Project administrators can add route annotations for many different purposes, including ingress control through IP allowlisting.
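For example, the haproxy.router.openshift.io/ip_whitelist annotation restricts a route to the listed source addresses; in the following sketch, the route name and IP ranges are illustrative:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: internal-app
  annotations:
    # only these source addresses may reach the route
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 10.0.0.1
spec:
  to:
    kind: Service
    name: internal-app        # hypothetical backing service
```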

Ingress policies can also be changed by using NetworkPolicy objects, which leverage the ovs-networkpolicy plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
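As an example of namespace-level ingress control, a NetworkPolicy that admits traffic only from pods in the same namespace might look like this sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}             # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}         # only pods in this same namespace may connect
```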

All cluster ingress traffic goes through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

Cluster egress

Pod egress traffic control through EgressNetworkPolicy objects can be used to prevent or limit outbound traffic in OpenShift Dedicated.
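For example, an EgressNetworkPolicy that allows outbound traffic to a single hostname and denies everything else might look like the following sketch (the destination is a placeholder); rules are evaluated in order, first match wins:

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.example.com    # hypothetical allowed destination
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0     # block all other outbound traffic
```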

Public outbound traffic from the control plane and infrastructure nodes is required and necessary to maintain cluster image security and cluster monitoring. This requires the 0.0.0.0/0 route to belong only to the internet gateway; it is not possible to route this range over private connections.

OpenShift Dedicated clusters use NAT Gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each subnet a cluster is deployed into receives a distinct NAT Gateway. For clusters deployed on AWS with multiple availability zones, up to 3 unique static IP addresses can exist for cluster egress traffic. For clusters deployed on Google Cloud, regardless of availability zone topology, there is 1 static IP address for worker node egress traffic. Any traffic that remains inside the cluster or does not go out to the public internet will not pass through the NAT Gateway and will have a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic, and therefore a customer should not rely on allowlisting individual IP addresses when accessing private resources.

Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:

$ oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"

Cloud network configuration

OpenShift Dedicated allows for the configuration of a private network connection through several cloud provider managed technologies:

  • VPN connections

  • AWS VPC peering

  • AWS Transit Gateway

  • AWS Direct Connect

  • Google Cloud VPC Network peering

  • Google Cloud Classic VPN

  • Google Cloud HA VPN

Red Hat SREs do not monitor private network connections. Monitoring these connections is the responsibility of the customer.

DNS forwarding

For OpenShift Dedicated clusters that have a private cloud network configuration, a customer can specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
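In plain OpenShift, such forwarding zones correspond to the cluster DNS operator configuration; the following is a sketch of that shape, with the zone name and upstream server IP as placeholders. On OpenShift Dedicated this is specified through OCM rather than edited directly:

```yaml
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: corp-dns                 # hypothetical forwarding zone name
    zones:
    - corp.example.com             # queries for this domain are forwarded
    forwardPlugin:
      upstreams:
      - 10.100.0.10                # internal DNS server on the private connection
```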

Storage

Encrypted-at-rest OS/node storage

Control plane nodes use encrypted-at-rest EBS storage.

Encrypted-at-rest PV

EBS volumes used for persistent volumes (PVs) are encrypted-at-rest by default.

Block storage (RWO)

Persistent volumes (PVs) are backed by AWS EBS and Google Cloud persistent disk block storage, which uses the ReadWriteOnce (RWO) access mode. On a standard OpenShift Dedicated base cluster, 100 GB of block storage is provided for PVs, which is dynamically provisioned and recycled based on application requests. Additional persistent storage can be purchased in 500 GB increments.

PVs can only be attached to a single node at a time and are specific to the availability zone in which they were provisioned, but they can be attached to any node in the availability zone.

Each cloud provider has its own limits for how many PVs can be attached to a single node. See AWS instance type limits or Google Cloud Platform custom machine types for details.

Shared storage (RWX)

The AWS EFS CSI Driver can be used to provide RWX support for OpenShift Dedicated on AWS. A community Operator is provided to simplify setup.

Platform

Cluster backup policy

It is critical that customers have a backup plan for their applications and application data.

Application and application data backups are not a part of the OpenShift Dedicated service. All Kubernetes objects in each OpenShift Dedicated cluster are backed up to facilitate a prompt recovery in the unlikely event that a cluster becomes irreparably inoperable.

The backups are stored in a secure object storage (Multi-AZ) bucket in the same account as the cluster. Node root volumes are not backed up because Red Hat Enterprise Linux CoreOS is fully managed by the OpenShift Container Platform cluster and no stateful data should be stored on the root volume of a node.

The following table shows the frequency of backups:

Component                | Snapshot frequency                 | Retention | Notes
-------------------------|------------------------------------|-----------|------
Full object store backup | Daily at 0100 UTC                  | 7 days    | Full backup of all Kubernetes objects. No persistent volumes (PVs) are backed up in this backup schedule.
Full object store backup | Weekly on Mondays at 0200 UTC      | 30 days   | Full backup of all Kubernetes objects. No PVs are backed up in this backup schedule.
Full object store backup | Hourly at 17 minutes past the hour | 24 hours  | Full backup of all Kubernetes objects. No PVs are backed up in this backup schedule.

Autoscaling

Node autoscaling is not available on OpenShift Dedicated at this time.

Daemon sets

Customers can create and run DaemonSets on OpenShift Dedicated. To restrict a DaemonSet to run only on worker nodes, use the following nodeSelector:

...
spec:
  nodeSelector:
    role: worker
...
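Put together, a minimal worker-only DaemonSet might look like the following sketch, in which the name, labels, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                 # hypothetical DaemonSet name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        role: worker               # restrict pods to worker nodes
      containers:
      - name: agent
        image: registry.example.com/node-agent:latest   # placeholder image
```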

Multiple availability zone

In a multiple availability zone cluster, control plane nodes are distributed across availability zones, and at least one worker node is required in each availability zone.

Node labels

Custom node labels are created by Red Hat during node creation and cannot be changed on OpenShift Dedicated clusters at this time.

OpenShift version

OpenShift Dedicated is run as a service and is kept up to date with the latest OpenShift Container Platform version.

Upgrades

Refer to OpenShift Dedicated Life Cycle for more information on the upgrade policy and procedures.

Windows containers

Windows containers are not available on OpenShift Dedicated at this time.

Container engine

OpenShift Dedicated runs on OpenShift 4 and uses CRI-O as the only available container engine.

Operating system

OpenShift Dedicated runs on OpenShift 4 and uses Red Hat Enterprise Linux CoreOS as the operating system for all control plane and worker nodes.

Kubernetes Operator support

All Operators listed in the OperatorHub marketplace should be available for installation. Operators installed from OperatorHub, including Red Hat Operators, are not SRE managed as part of the OpenShift Dedicated service. Refer to the Red Hat Customer Portal for more information on the supportability of a given Operator.

Security

Authentication provider

Authentication for the cluster is configured as part of the OpenShift Cluster Manager (OCM) cluster creation process. OpenShift is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. Provisioning multiple identity providers at the same time is supported. The following identity providers are supported:

  • GitHub or GitHub Enterprise OAuth

  • GitLab OAuth

  • Google OAuth

  • LDAP

  • OpenID Connect

Privileged containers

Privileged containers are not available by default on OpenShift Dedicated. The anyuid and nonroot Security Context Constraints are available for members of the dedicated-admins group, and should address many use cases. Privileged containers are only available for cluster-admin users.

Customer administrator user

In addition to normal users, OpenShift Dedicated provides access to an OpenShift Dedicated-specific group called dedicated-admin. Any users on the cluster that are members of the dedicated-admin group:

  • Have administrator access to all customer-created projects on the cluster.

  • Can manage resource quotas and limits on the cluster.

  • Can add and manage NetworkPolicy objects.

  • Are able to view information about specific nodes and PVs in the cluster, including scheduler information.

  • Can access the reserved dedicated-admin project on the cluster, which allows for the creation of service accounts with elevated privileges and also gives the ability to update default limits and quotas for projects on the cluster.

Cluster administration role

As an administrator of OpenShift Dedicated with Customer Cloud Subscriptions (CCS), you have access to the cluster-admin role. While logged in to an account with the cluster-admin role, users have mostly unrestricted access to control and configure the cluster. There are some configurations that are blocked with webhooks to prevent destabilizing the cluster, or because they are managed in OpenShift Cluster Manager (OCM) and any in-cluster changes would be overwritten.

Project self-service

All users, by default, have the ability to create, update, and delete their projects. This can be restricted if a member of the dedicated-admin group removes the self-provisioner role from authenticated users:

$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

Restrictions can be reverted by applying:

$ oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth

Regulatory compliance

See OpenShift Dedicated Process and Security Overview for the latest compliance information.

Network security

With OpenShift Dedicated on AWS, AWS provides standard DDoS protection on all load balancers, called AWS Shield. This provides 95% protection against the most commonly used layer 3 and 4 attacks on all the public-facing load balancers used for OpenShift Dedicated. To provide additional protection, a 10-second timeout is applied to HTTP requests coming to the haproxy router: if a response is not received within 10 seconds, the connection is closed.