As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection.
The OpenShift Container Platform node performs two types of garbage collection:
Container garbage collection: Removes terminated containers.
Image garbage collection: Removes images not referenced by any running pods.
Container garbage collection can be performed using eviction thresholds.
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it removes containers, and their logs are no longer accessible using oc logs.
eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period.
eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action.
The following table lists the eviction thresholds:
|Node condition||Eviction signal||Description|
|MemoryPressure||memory.available||The available memory on the node.|
|DiskPressure||nodefs.available, nodefs.inodesFree, imagefs.available, imagefs.inodesFree||The available disk space or inodes on the node root file system or image file system.|
If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node condition would constantly oscillate between true and false. As a consequence, the scheduler could make poor scheduling decisions.
To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform does not toggle the condition back to false until the signal for the specified pressure condition has been below the eviction threshold for the whole specified period.
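The damping behavior can be sketched as a toy loop. This is not kubelet code: the tick-based period and the sample signal data below are hypothetical, chosen only to show that a brief dip back over the threshold resets the timer.

```shell
# Toy sketch of pressure-condition damping; not kubelet code.
# The condition toggles back to "false" only after the eviction signal has
# stayed below the threshold for the full transition period.
period=3        # hypothetical transition period, measured in ticks
below_for=0     # consecutive ticks the signal has been below the threshold
condition=true  # the pressure condition starts as true
for signal_below in 1 1 0 1 1 1; do  # 1 = signal below threshold this tick
  if [ "$signal_below" -eq 1 ]; then
    below_for=$((below_for + 1))
    if [ "$below_for" -ge "$period" ]; then condition=false; fi
  else
    below_for=0  # a brief spike back over the threshold resets the timer
  fi
done
echo "$condition"
```

In this trace the spike at the third tick resets the counter, so the condition only clears after three consecutive quiet ticks at the end.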
Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node.
The policy for image garbage collection is based on two conditions:
The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85.
The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80.
For image garbage collection, you can modify any of the following variables using a custom resource:
imageMinimumGCAge - The minimum age for an unused image before the image is removed by garbage collection. The default is 2m.
imageGCHighThresholdPercent - The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85.
imageGCLowThresholdPercent - The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80.
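To make the defaults concrete, a quick calculation with hypothetical numbers: only the two percentages come from the defaults above; the 100 GiB file system size is made up for illustration.

```shell
# Hypothetical 100 GiB image file system with the default GC thresholds.
disk_gib=100
high=85   # imageGCHighThresholdPercent default: GC starts above this usage
low=80    # imageGCLowThresholdPercent default: GC frees down to this usage
trigger_gib=$((disk_gib * high / 100))
target_gib=$((disk_gib * low / 100))
echo "GC triggers above ${trigger_gib} GiB and frees down to ${target_gib} GiB"
```

So on this hypothetical node, garbage collection starts once image disk usage exceeds 85 GiB and deletes images until usage drops back to 80 GiB.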
Two lists of images are retrieved in each garbage collector run:
A list of images currently running in at least one pod.
A list of images available on a host.
As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images keep their marks from previous runs. All images are then sorted by the time stamp.
Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
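The oldest-first deletion can be sketched as follows. This is a toy model, not the kubelet's implementation: the image names, timestamps, and per-image sizes are made up, and real garbage collection works in bytes rather than percentage points.

```shell
# Toy model of oldest-first image deletion; all data here is hypothetical.
usage=90   # current image file system usage, in percent
low=80     # stopping criterion: delete until usage drops to this threshold
# Each entry is name:timestamp:size-in-percent, deliberately unsorted.
images="cache:200:3 old-app:100:4 base:300:5"
# Sort by the timestamp field so the oldest image is considered first.
for img in $(printf '%s\n' $images | sort -t: -k2 -n); do
  if [ "$usage" -le "$low" ]; then break; fi
  size=${img##*:}
  usage=$((usage - size))
  echo "deleted ${img%%:*}, usage now ${usage}%"
done
```

With these numbers, old-app (oldest) goes first, then cache, then base, after which usage is below the low threshold and the loop stops.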
As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a KubeletConfig object for each machine config pool.
OpenShift Container Platform supports only one KubeletConfig object for each machine config pool.
You can configure any combination of the following:
Soft eviction for containers
Hard eviction for containers
Eviction for images
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:
$ oc edit machineconfigpool <name>
For example:
$ oc edit machineconfigpool worker
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2022-11-16T15:34:25Z"
  generation: 4
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" (1)
  name: worker
(1) The label appears under Labels.
If the label is not present, add a key/value pair such as:
$ oc label machineconfigpool worker custom-kubelet=small-pods
Create a custom resource (CR) for your configuration change.
If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubeconfig (1)
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" (2)
  kubeletConfig:
    evictionSoft: (3)
      memory.available: "500Mi" (4)
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "10%"
    evictionSoftGracePeriod: (5)
      memory.available: "1m30s"
      nodefs.available: "1m30s"
      nodefs.inodesFree: "1m30s"
      imagefs.available: "1m30s"
      imagefs.inodesFree: "1m30s"
    evictionHard: (6)
      memory.available: "200Mi"
      nodefs.available: "5%"
      nodefs.inodesFree: "4%"
      imagefs.available: "10%"
      imagefs.inodesFree: "5%"
    evictionPressureTransitionPeriod: 0s (7)
    imageMinimumGCAge: 5m (8)
    imageGCHighThresholdPercent: 80 (9)
    imageGCLowThresholdPercent: 75 (10)
(1) Name for the object.
(2) Specify the label from the machine config pool.
(3) Type of eviction: evictionSoft or evictionHard.
(4) Eviction thresholds based on a specific eviction trigger signal.
(5) Grace periods for the soft eviction. This parameter does not apply to eviction-hard.
(6) Eviction thresholds based on a specific eviction trigger signal.
(7) The duration to wait before transitioning out of an eviction pressure condition.
(8) The minimum age for an unused image before the image is removed by garbage collection.
(9) The percent of disk usage (expressed as an integer) that triggers image garbage collection.
(10) The percent of disk usage (expressed as an integer) that image garbage collection attempts to free.
Run the following command to create the CR:
$ oc create -f <file_name>.yaml
$ oc create -f gc-container.yaml
Verify that garbage collection is active by entering the following command. The machine config pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented:
$ oc get machineconfigpool
NAME     CONFIG                                   UPDATED   UPDATING
master   rendered-master-546383f80705bd5aeaba93   True      False
worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True