You can enable Linux control group version 2 (cgroup v2) in your cluster by editing the node.config object. Enabling cgroup v2 in OpenShift Container Platform disables all cgroup version 1 controllers and hierarchies in your cluster. cgroup v1 is enabled by default.
cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.
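For example, with PSI enabled, each cgroup exposes pressure files such as cpu.pressure under the unified hierarchy. The kubepods.slice path and output below are illustrative; actual paths and values depend on the node:

$ cat /sys/fs/cgroup/kubepods.slice/cpu.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=0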
Important: OpenShift Container Platform cgroup v2 support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You enable cgroup v2 by editing the node.config object.
Note: Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.
You have a running OpenShift Container Platform cluster that uses version 4.12 or later.
You are logged in to the cluster as a user with administrative privileges.
You have enabled the TechPreviewNoUpgrade feature set by using the feature gates, for example as shown in the sketch after this list.
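For reference, one way to enable the feature set is to patch the FeatureGate object named cluster. This is a minimal sketch; note that the TechPreviewNoUpgrade feature set cannot be disabled after it is enabled and prevents cluster upgrades:

$ oc patch featuregate cluster --type merge --patch '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'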
Enable cgroup v2 on nodes:
Edit the node.config object:
$ oc edit nodes.config/cluster
Add spec.cgroupMode: "v2":
Example node.config object

apiVersion: config.openshift.io/v1
kind: Node
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    release.openshift.io/create-only: "true"
  creationTimestamp: "2022-07-08T16:02:51Z"
  generation: 1
  name: cluster
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: 36282574-bf9f-409e-a6cd-3032939293eb
  resourceVersion: "1865"
  uid: 0c0f7a4c-4307-4187-b591-6155695ac85b
spec:
  cgroupMode: "v2" (1)
...
(1) Enables cgroup v2.
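Alternatively, the same change can be made non-interactively with a merge patch. This one-liner is a sketch equivalent to the edit above; you can verify the result with oc get nodes.config/cluster -o yaml:

$ oc patch nodes.config cluster --type merge --patch '{"spec":{"cgroupMode":"v2"}}'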
Check the machine configs to verify that new machine configs were added:
$ oc get mc
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
97-master-generated-kubelet                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             3m (1)
99-worker-generated-kubelet                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             3m (1)
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
worker-enable-cgroups-v2                                                                      3.2.0             10s
(1) New machine configs are created, as expected.
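Optionally, you can also watch the machine config pools apply the change. The affected pool reports UPDATING as True until all of its nodes have rebooted with the new configuration:

$ oc get mcp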
Check that the new kernelArguments were added to the new machine configs:
$ oc describe mc <name>
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: worker-enable-cgroups-v2
spec:
  kernelArguments:
  - systemd_unified_cgroup_hierarchy=1 (1)
  - cgroup_no_v1="all" (2)
  - psi=1 (3)
(1) Enables cgroup v2 in systemd.
(2) Disables cgroup v1.
(3) Enables the Linux Pressure Stall Information (PSI) feature.
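After a node has rebooted into the new configuration, these arguments should also be visible on its kernel command line. This spot check is a sketch; substitute a real node name and look for the three arguments above in the output:

$ oc debug node/<node_name> -- cat /proc/cmdline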
Check the nodes to verify that scheduling is disabled. A node in the SchedulingDisabled status indicates that the change is being applied:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.25.0
ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.25.0
ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.25.0
ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.25.0
ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.25.0
ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.25.0
After a node returns to the Ready state, start a debug session for that node:
$ oc debug node/<node_name>
Set /host as the root directory within the debug shell:
sh-4.4# chroot /host
Check the file system type mounted at /sys/fs/cgroup. A type of cgroup2fs indicates that the node is using cgroup v2:
$ stat -c %T -f /sys/fs/cgroup
cgroup2fs
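As an additional spot check, you can list the controllers that are available in the unified hierarchy. The exact set varies by kernel and machine configuration; the output below is illustrative:

sh-4.4# cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory hugetlb pids rdma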