OpenShift Container Platform is a cloud-based Kubernetes container platform. It is built on Kubernetes and therefore shares the same underlying technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture.
This glossary defines common terms that are used in the architecture content.
A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security.
Admission plugins enforce security policies, resource limitations, or configuration requirements.
To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API.
A temporary machine that runs minimal Kubernetes and deploys the OpenShift Container Platform control plane.
A resource that requests a denoted signer to sign a certificate. The request might be approved or denied.
An Operator that checks with the OpenShift Container Platform Update Service to see the valid updates and update paths based on current component versions and information in the graph.
Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes.
A situation where the configuration on a node does not match what the machine config specifies.
Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers anywhere, from a data center to a public or private cloud to your local host.
Software that automates the deployment, management, scaling, and networking of containers.
Applications that are packaged and deployed in containers.
A feature that partitions sets of processes into groups to manage and limit the resources that the processes consume.
A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines.
A Kubernetes native container runtime implementation that integrates with the operating system to deliver an efficient Kubernetes experience.
A Kubernetes resource object that maintains the life cycle of an application.
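As an illustration, a minimal deployment manifest might look like the following sketch; the application name, labels, and image reference are hypothetical:

```yaml
# Hypothetical Deployment that keeps three replicas of a web application running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: quay.io/example/hello:latest   # hypothetical image reference
        ports:
        - containerPort: 8080
```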
A text file that contains the user commands to perform on a terminal to assemble the image.
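For example, a simple Dockerfile for a Python web application might look like the following sketch; the base image, file names, and start command are all hypothetical:

```dockerfile
# Hypothetical Dockerfile for a small Python web application
# Assumed Red Hat Universal Base Image with Python preinstalled
FROM registry.access.redhat.com/ubi9/python-311
WORKDIR /app
# Copy and install dependencies first to take advantage of layer caching
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the application source
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```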
An OpenShift Container Platform feature that enables hosting a control plane on the OpenShift Container Platform cluster, separate from its data plane and workers. This model performs the following actions:
Optimize infrastructure costs required for the control planes.
Improve the cluster creation time.
Enable hosting the control plane by using Kubernetes native high-level primitives, such as deployments and stateful sets.
Allow a strong network segmentation between the control plane and workloads.
Deployments that deliver a consistent platform across bare metal, virtual, private, and public cloud environments. This offers speed, agility, and portability.
A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users.
The installation program deploys and configures the infrastructure that the cluster runs on.
A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod.
Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, and daemon sets.
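For example, a small config map manifest might look like the following sketch; the object name, key, and value are hypothetical:

```yaml
# Hypothetical ConfigMap manifest in YAML format
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # hypothetical name
data:
  LOG_LEVEL: debug        # hypothetical configuration key and value
```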
A daemon that regularly checks the nodes for configuration drift.
An Operator that applies the new configuration to your cluster machines.
A group of machines, such as control plane components or user workloads, that are based on the resources that they handle.
Additional information about cluster deployment artifacts.
An approach to writing software in which applications are separated into the smallest components, each independent from the others.
A registry that holds the mirror of OpenShift Container Platform images.
Applications that are self-contained, built, and packaged as a single piece.
A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources.
Network information of an OpenShift Container Platform cluster.
A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine.
For clusters with internet access, Red Hat provides over-the-air updates by using an OpenShift Container Platform update service as a hosted service located behind public APIs.
OpenShift CLI (oc)
A command line tool to run OpenShift Container Platform commands on the terminal.
A managed RHEL OpenShift Container Platform offering on Amazon Web Services (AWS) and Google Cloud Platform (GCP). OpenShift Dedicated focuses on building and scaling applications.
A registry provided by OpenShift Container Platform to manage images.
The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.
A platform that contains various OpenShift Container Platform Operators to install.
OLM helps you to install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way.
An upgrade system for Linux-based operating systems that performs atomic upgrades of complete file system trees. OSTree tracks meaningful changes to the file system tree using an addressable object store, and is designed to complement existing package management systems.
The OpenShift Container Platform Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS).
One or more containers with shared resources, such as volumes and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit that you can define, deploy, and manage.
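For example, a minimal pod manifest might look like the following sketch; the pod name, labels, and image reference are hypothetical:

```yaml
# Hypothetical Pod running a single container
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod           # hypothetical name
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: quay.io/example/hello:latest   # hypothetical image reference
    ports:
    - containerPort: 8080
```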
OpenShift Container Platform can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their private container images.
OpenShift Container Platform can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their public container images.
A managed service where you can install, modify, operate, and upgrade your OpenShift Container Platform clusters.
A Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters.
An asset that indicates how many pod replicas are required to run at a time.
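For example, the following hypothetical replica set manifest asks for three replicas of a pod at a time; the names, labels, and image are illustrative only:

```yaml
# Hypothetical ReplicaSet maintaining three pod replicas
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs            # hypothetical name
spec:
  replicas: 3               # number of pod replicas to keep running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: quay.io/example/hello:latest   # hypothetical image reference
```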
A key security control that ensures that cluster users and workloads have access only to the resources required to execute their roles.
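As an illustration, the following sketch defines a role that permits only read access to pods in one namespace and binds it to a single user; the namespace, role name, and user name are hypothetical:

```yaml
# Hypothetical Role granting read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-project     # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding that grants the Role above to a hypothetical user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-project
subjects:
- kind: User
  name: alice               # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```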
Routes expose a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance.
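For example, a route manifest that exposes a service outside the cluster might look like the following sketch; the host name and service name are hypothetical:

```yaml
# Hypothetical Route exposing a service to external traffic
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-route         # hypothetical name
spec:
  host: hello.apps.example.com   # hypothetical external host name
  to:
    kind: Service
    name: hello-service     # hypothetical service name
  port:
    targetPort: 8080
```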
The increasing or decreasing of resource capacity.
A service exposes a running application on a set of pods.
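For example, a service manifest that selects pods by label and gives them a stable address might look like the following sketch; the names and ports are hypothetical:

```yaml
# Hypothetical Service routing traffic to pods labeled app=hello
apiVersion: v1
kind: Service
metadata:
  name: hello-service       # hypothetical name
spec:
  selector:
    app: hello              # pods matching this label receive traffic
  ports:
  - protocol: TCP
    port: 80                # port the service exposes
    targetPort: 8080        # port the pod containers listen on
```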
An image created based on the programming language of the application source code to deploy applications in OpenShift Container Platform.
OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster.
A component that collects information such as the size, health, and status of OpenShift Container Platform.
A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform.
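As an illustration, a small template with one parameter might look like the following sketch; the template name, parameter, and generated object are hypothetical:

```yaml
# Hypothetical Template that parameterizes a Service object
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: hello-template      # hypothetical name
parameters:
- name: APP_NAME            # hypothetical parameter with a default value
  value: hello
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}-service   # parameter substituted at processing time
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 80
      targetPort: 8080
```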
You can install OpenShift Container Platform on the infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.
A user interface (UI) to manage OpenShift Container Platform.
Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes.
For more information on networking, see OpenShift Container Platform networking.
For more information on storage, see OpenShift Container Platform storage.
For more information on authentication, see OpenShift Container Platform authentication.
For more information on Operator Lifecycle Manager (OLM), see OLM.
For more information on logging, see About Logging.
For more information on over-the-air (OTA) updates, see Introduction to OpenShift updates.
As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster by using one of the following methods:
Installer-provisioned infrastructure
User-provisioned infrastructure
The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines, such as control plane components or user workloads, that are based on the resources that they handle. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types.
You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services:
Perform health checks
Provide ways to watch applications
Manage over-the-air updates
Ensure applications stay in the specified state
As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example:
Use various build-tool, base-image, and registry options to build a simple container application.
Use supporting components such as OperatorHub and templates to develop your application.
Package and deploy your application as an Operator.
You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application.
As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks:
Learn about the next generation of single-purpose container operating system technology.
Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS)
Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS):
Installer-provisioned deployment
User-provisioned deployment
The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines.
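As a rough illustration of the format, an Ignition configuration is a JSON document; the following minimal sketch writes a single file during first boot, with a hypothetical path and contents:

```json
{
  "ignition": { "version": "3.2.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/example/motd",
        "mode": 420,
        "contents": { "source": "data:,Provisioned%20by%20Ignition" }
      }
    ]
  }
}
```

Here `mode: 420` is the decimal form of octal permissions `0644`, and the file contents are supplied inline as a data URL.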
You can learn how Ignition works, the boot process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, how to view Ignition configuration files, and how to change Ignition configuration after an installation.
You can use admission plugins to regulate how OpenShift Container Platform functions. After a resource request is authenticated and authorized, admission plugins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, or configuration requirements.
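For example, the LimitRanger admission plugin enforces resource constraints defined in a LimitRange object; the following sketch is hypothetical, with illustrative CPU and memory values:

```yaml
# Hypothetical LimitRange enforced by the LimitRanger admission plugin
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits     # hypothetical name
spec:
  limits:
  - type: Container
    default:                # limits applied when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:         # requests applied when a container sets none
      cpu: 100m
      memory: 256Mi
```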