OpenShift Dedicated is a cloud-based Kubernetes container platform. Because OpenShift Dedicated is built on Kubernetes, the two share the same underlying technology. To learn more about OpenShift Dedicated and Kubernetes, see product architecture.

Glossary of common terms for OpenShift Dedicated architecture

This glossary defines common terms that are used in the architecture content.

access policies

A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security.

admission plugins

Admission plugins enforce security policies, resource limitations, or configuration requirements.


authentication

To control access to an OpenShift Dedicated cluster, an administrator with the dedicated-admin role can configure user authentication to ensure that only approved users access the cluster. To interact with an OpenShift Dedicated cluster, you must authenticate with the OpenShift Dedicated API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Dedicated API.


bootstrap

A temporary machine that runs minimal Kubernetes and deploys the OpenShift Dedicated control plane.

certificate signing requests (CSRs)

A resource that requests a denoted signer to sign a certificate. The request might be approved or denied.

Cluster Version Operator (CVO)

An Operator that checks with the OpenShift Update Service for valid updates and update paths, based on the current component versions and the information in the graph.

compute nodes

Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes.

configuration drift

A situation where the configuration on a node does not match what the machine config specifies.


containers

Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers anywhere, such as data centers, public or private clouds, and local hosts.

container orchestration engine

Software that automates the deployment, management, scaling, and networking of containers.

container workloads

Applications that are packaged and deployed in containers.

control groups (cgroups)

A Linux kernel feature that partitions sets of processes into groups to manage and limit the resources that those processes consume.

control plane

A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines.


CRI-O

A Kubernetes native container runtime implementation that integrates with the operating system to deliver an efficient Kubernetes experience.


deployment

A Kubernetes resource object that maintains the life cycle of an application.


Dockerfile

A text file that contains the commands a user would run on a terminal to assemble the image.
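As an illustration, a minimal Dockerfile might look like the following sketch. The base image and application paths are placeholders, not values from this documentation:

```dockerfile
# Start from a minimal base image (placeholder choice)
FROM registry.access.redhat.com/ubi9/ubi-minimal

# Copy the application binary into the image
COPY ./app /usr/local/bin/app

# Command to run when a container starts from this image
CMD ["/usr/local/bin/app"]
```

Each instruction adds a layer to the resulting image, which is why Dockerfiles typically order rarely changing steps first to maximize build caching.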

hybrid cloud deployments

Deployments that deliver a consistent platform across bare metal, virtual, private, and public cloud environments. This offers speed, agility, and portability.


Ignition

A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users.

installer-provisioned infrastructure

Infrastructure that the installation program deploys and configures for the cluster to run on.


kubelet

A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod.

Kubernetes manifest

Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, and daemon sets.
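To illustrate, a minimal Deployment manifest might look like the following sketch; the resource name and image reference are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 2                  # number of pod replicas to maintain
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app       # label that ties pods to this Deployment
    spec:
      containers:
      - name: example-app
        image: quay.io/example/app:latest   # placeholder image
```

Applying this manifest, for example with the OpenShift CLI (oc apply -f deployment.yaml), asks the cluster to converge on the declared state.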

Machine Config Daemon (MCD)

A daemon that regularly checks the nodes for configuration drift.

Machine Config Operator (MCO)

An Operator that applies the new configuration to your cluster machines.

machine config pools (MCP)

Groups of machines, based on the resources that they handle, such as control plane components or user workloads.


metadata

Additional information about cluster deployment artifacts.


microservices

An approach to writing software in which applications are separated into the smallest components, each independent from the others.

mirror registry

A registry that holds mirrors of OpenShift Dedicated images.

monolithic applications

Applications that are self-contained, built, and packaged as a single piece.


namespace

A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources.


networking

The network information of an OpenShift Dedicated cluster.


node

A worker machine in the OpenShift Dedicated cluster. A node is either a virtual machine (VM) or a physical machine.

OpenShift CLI (oc)

A command line tool to run OpenShift Dedicated commands on the terminal.

OpenShift Update Service (OSUS)

For clusters with internet access, Red Hat provides over-the-air updates through the OpenShift Update Service, a hosted service located behind public APIs.

OpenShift image registry

A registry provided by OpenShift Dedicated to manage images.


Operators

The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Dedicated cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.


OperatorHub

A platform that contains various OpenShift Dedicated Operators to install.

Operator Lifecycle Manager (OLM)

OLM helps you to install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way.


OSTree

An upgrade system for Linux-based operating systems that performs atomic upgrades of complete file system trees. OSTree tracks meaningful changes to the file system tree using an addressable object store, and is designed to complement existing package management systems.

over-the-air (OTA) updates

The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Dedicated, including to Red Hat Enterprise Linux CoreOS (RHCOS).


pod

One or more containers with shared resources, such as volumes and IP addresses, running in your OpenShift Dedicated cluster. A pod is the smallest compute unit defined, deployed, and managed.
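A minimal pod manifest might look like the following sketch; the names and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example             # label used by services to select this pod
spec:
  containers:
  - name: example-container
    image: quay.io/example/app:latest   # placeholder image
    ports:
    - containerPort: 8080    # port the container listens on
```

In practice, pods are usually created indirectly through higher-level objects such as deployments rather than defined one by one.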

private registry

OpenShift Dedicated can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their private container images.

public registry

OpenShift Dedicated can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their public container images.

Red Hat OpenShift Cluster Manager

A managed service where you can install, modify, operate, and upgrade your OpenShift Dedicated clusters.

Red Hat Quay Container Registry

A Quay.io container registry that serves most of the container images and Operators to OpenShift Dedicated clusters.

replication controllers

An asset that indicates how many pod replicas are required to run at a time.

role-based access control (RBAC)

A key security control that ensures cluster users and workloads have access only to the resources required to execute their roles.
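As a sketch, an RBAC rule granting read-only access to pods could be expressed as a Role and a RoleBinding; the names, namespace, and user are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # placeholder role name
  namespace: example-project # placeholder namespace
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: example-project
subjects:
- kind: User
  name: alice                # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role defines what is allowed; the RoleBinding attaches that permission set to a specific user, group, or service account.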


route

Routes expose a service to allow network access to pods from users and applications outside the OpenShift Dedicated instance.
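A minimal route definition might look like this sketch; the host name and service name are placeholders:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: app.example.com      # placeholder external host name
  to:
    kind: Service
    name: example-service    # the service that backs this route
  port:
    targetPort: 8080         # service target port to forward traffic to
```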


scaling

The increasing or decreasing of resource capacity.


service

A service exposes a running application on a set of pods.

Source-to-Image (S2I) image

An image that is created based on the programming language of the application source code and is used to deploy applications in OpenShift Dedicated.


storage

OpenShift Dedicated supports many types of storage for cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Dedicated cluster.


Telemetry

A component that collects information such as the size, health, and status of OpenShift Dedicated.


template

A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Dedicated.
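For example, a template with one parameter might look like the following sketch; the template name, parameter, and object are illustrative placeholders:

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
parameters:
- name: APP_NAME             # parameter substituted at processing time
  value: example-app         # default value if none is supplied
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}        # parameter reference
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 8080
```

Processing the template substitutes the parameter values and emits the concrete objects for creation.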

web console

A user interface (UI) to manage OpenShift Dedicated.

worker node

Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes.

Understanding how OpenShift Dedicated differs from OpenShift Container Platform

OpenShift Dedicated uses the same code base as OpenShift Container Platform but is installed in an opinionated way to be optimized for performance, scalability, and security. OpenShift Dedicated is a fully managed service; therefore, many of the OpenShift Dedicated components and settings that you manually set up in OpenShift Container Platform are set up for you by default.

Review the following differences between OpenShift Dedicated and a standard installation of OpenShift Container Platform on your own infrastructure:

OpenShift Container Platform: The customer installs and configures OpenShift Container Platform.
OpenShift Dedicated: OpenShift Dedicated is installed through Red Hat OpenShift Cluster Manager in a standardized way that is optimized for performance, scalability, and security.

OpenShift Container Platform: Customers can choose their computing resources.
OpenShift Dedicated: OpenShift Dedicated is hosted and managed in a public cloud (Amazon Web Services or Google Cloud Platform) that is either owned by Red Hat or provided by the customer.

OpenShift Container Platform: Customers have top-level administrative access to the infrastructure.
OpenShift Dedicated: Customers have a built-in administrator group (dedicated-admin), though top-level administrative access is available when cloud accounts are provided by the customer.

OpenShift Container Platform: Customers can use all supported features and configuration settings available in OpenShift Container Platform.
OpenShift Dedicated: Some OpenShift Container Platform features and configuration settings might not be available or changeable in OpenShift Dedicated.

OpenShift Container Platform: You set up control plane components, such as the API server and etcd, on machines that get the control plane role. You can modify the control plane components, but you are responsible for backing up, restoring, and making control plane data highly available.
OpenShift Dedicated: Red Hat sets up the control plane and manages the control plane components for you. The control plane is highly available.

OpenShift Container Platform: You are responsible for updating the underlying infrastructure for the control plane and worker nodes. You can use the OpenShift web console to update OpenShift Container Platform versions.
OpenShift Dedicated: Red Hat automatically notifies the customer when updates are available. You can manually or automatically schedule updates in OpenShift Cluster Manager.

OpenShift Container Platform: Support is provided based on the terms of your Red Hat subscription or cloud provider.
OpenShift Dedicated: Engineered, operated, and supported by Red Hat with a 99.95% uptime SLA and 24x7 coverage. For details, see Red Hat Enterprise Agreement Appendix 4 (Online Subscription Services).

About the control plane

The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines, based on the resources that they handle, such as control plane components or user workloads. OpenShift Dedicated assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types.

You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Dedicated because they provide the following services:

  • Perform health checks

  • Provide ways to watch applications

  • Manage over-the-air updates

  • Ensure applications stay in the specified state

About containerized applications for developers

As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example:

  • Use various build-tool, base-image, and registry options to build a simple container application.

  • Use supporting components such as OperatorHub and templates to develop your application.

  • Package and deploy your application as an Operator.

You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application.
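The grouping of pods behind a stable address can be sketched as a Service manifest; the names and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example             # groups all pods that carry this label
  ports:
  - protocol: TCP
    port: 80                 # stable port other applications connect to
    targetPort: 8080         # port the pod containers listen on
```

Pods matching the selector can come and go; the service's internal IP address and DNS name remain stable for consumers.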

About admission plugins

You can use admission plugins to regulate how OpenShift Dedicated functions. After a resource request is authenticated and authorized, admission plugins intercept the request to the API server to validate it and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, configuration requirements, and other settings.