A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster.
Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application.
In OpenShift Container Platform, you can access, manage, and monitor a node through the Node object that represents it. Using the OpenShift CLI (oc) or the web console, you can perform the following operations on a node.
The read operations allow an administrator or a developer to get information about nodes in an OpenShift Container Platform cluster.
Get information about a node, such as memory and CPU usage, health, status, and age.
As an administrator, you can easily manage a node in an OpenShift Container Platform cluster through several tasks:
Add or update node labels. A label is a key-value pair applied to a Node object. You can control the scheduling of pods using labels; see the first sketch after this list.
Change node configuration by using a custom resource definition (CRD) such as the kubeletConfig object; see the second sketch after this list.
Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a Ready status allow pod placement by default, while the control plane nodes do not; you can change this default behavior by configuring the worker nodes to be unschedulable and the control plane nodes to be schedulable.
Allocate resources for nodes using the system-reserved setting. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes.
Configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both.
Reboot a node gracefully using pod anti-affinity.
Delete a node from a cluster by scaling down the cluster using a machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node.
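For example, a label added to a node can steer pod placement through the pod's nodeSelector field. The following is a minimal sketch; the team=data label, the node name, the pod name, and the image are illustrative values, not required names.

```yaml
# Label a node (example node name and label):
#   oc label node worker-1.example.com team=data
apiVersion: v1
kind: Pod
metadata:
  name: data-processor                                  # hypothetical pod name
spec:
  nodeSelector:
    team: data                                          # only nodes that carry this label are eligible
  containers:
  - name: app
    image: registry.example.com/data-processor:latest   # hypothetical image
```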
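Node configuration changes such as the pod-per-node limit and system-reserved sizing are applied through a KubeletConfig custom resource that targets a machine config pool. A minimal sketch follows, assuming the default worker pool label; the resource name and the numeric limits are placeholders to adjust for your nodes.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubelet-tuning                # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # assumes the default worker pool label
  autoSizingReserved: true                   # let the platform size system-reserved automatically
  kubeletConfig:
    podsPerCore: 10                          # limit based on the number of processor cores
    maxPods: 250                             # hard limit per node; the lower of the two values applies
```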
OpenShift Container Platform allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient and application-friendly, and to provide a better environment for your developers.
Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator.
Run background tasks on nodes automatically with daemon sets. You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes; see the sketch that follows this list.
Free node resources using garbage collection. You can ensure that your nodes are running efficiently by removing terminated containers and the images not referenced by any running pods.
Configure an OpenShift Container Platform cluster to have worker nodes at the network edge (remote worker nodes). For information on the challenges of having remote worker nodes in an OpenShift Container Platform cluster and some recommended approaches for managing pods on a remote worker node, see Using remote worker nodes at the network edge.
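As an illustration of a background task that runs on every node, here is a minimal daemon set sketch for a per-node logging pod, as mentioned in the list above; the name, namespace, labels, and image are placeholders rather than a specific product component.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector                  # hypothetical name
  namespace: example-logging           # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: registry.example.com/log-collector:1.0   # hypothetical image
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
```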
A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage it. A pod runs as long as its containers are running. You cannot change a pod definition while it is running. Some operations you can perform when working with pods are:
As an administrator, you can get information about pods in a project through the following tasks:
List pods associated with a project, including information such as the number of replicas and restarts, current status, and age.
View pod usage statistics such as CPU, memory, and storage consumption.
The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Container Platform cluster.
Control scheduling of pods using the advanced scheduling features available in OpenShift Container Platform:
Node-to-pod binding rules such as pod affinity, node affinity, and pod anti-affinity (see the first sketch below).
Configure the descheduler to evict pods based on specific strategies so that the scheduler reschedules the pods to more appropriate nodes.
Configure how pods behave after a restart using pod controllers and restart policies.
Add volumes to and remove volumes from any object that has a pod template. A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data (see the second sketch below).
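To make the binding rules above concrete, the following sketch uses a required pod anti-affinity rule to keep replicas of a deployment on separate nodes; the deployment name, label, and image are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend                    # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web-frontend
            topologyKey: kubernetes.io/hostname          # no two replicas on the same node
      containers:
      - name: web
        image: registry.example.com/web-frontend:latest  # hypothetical image
```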
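And a second sketch showing a restart policy together with an emptyDir volume shared by a pod's containers; the names and image are placeholders, and a persistent volume claim could replace emptyDir when data must outlive the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-warmup                    # hypothetical name
spec:
  restartPolicy: OnFailure              # rerun the containers only if they fail
  volumes:
  - name: scratch
    emptyDir: {}                        # ephemeral scratch space shared by the pod's containers
  containers:
  - name: worker
    image: registry.example.com/cache-warmup:latest   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /var/scratch
```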
You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Container Platform. The following operations involve using those tools and features to better manage pods.
| Operation | User | More information |
|---|---|---|
| Create and use a horizontal pod autoscaler. | Developer | You can use a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU or memory utilization your pods should target. Using a horizontal pod autoscaler, you can automatically scale pods. |
| Create and use a vertical pod autoscaler. | Administrator and developer | As an administrator, use a vertical pod autoscaler to better use cluster resources by monitoring the resources and the resource requirements of workloads. As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod. |
| Provide access to external resources using device plug-ins. | Administrator | A device plug-in is a gRPC service running on nodes (external to the kubelet) that manages specific hardware resources. You can deploy a device plug-in to provide a consistent and portable solution for consuming hardware devices across clusters. |
| Provide sensitive data to pods using the Secret object. | Administrator | Some applications need sensitive information, such as passwords and usernames. You can use the Secret object to provide such information to an application container. |
A container is the basic unit of an OpenShift Container Platform application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public clouds.
Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, you can perform various tasks on a Linux container.
OpenShift Container Platform provides specialized containers called Init containers. Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed.
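A minimal sketch of a pod with an Init container that blocks the application container until a dependency is resolvable; the service name, images, and readiness check are assumptions chosen for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init                   # hypothetical name
spec:
  initContainers:
  - name: wait-for-database
    image: registry.example.com/toolbox:latest            # hypothetical utility image
    command: ['sh', '-c', 'until nslookup example-database; do sleep 2; done']   # hypothetical service check
  containers:
  - name: app
    image: registry.example.com/app:latest                # hypothetical application image
```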
Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Container Platform cluster to keep the cluster efficient and the application pods highly available.