To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools.
Procedure
-
Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-1
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: tuned-1
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Custom OpenShift profile
          include=openshift-node
          [sysctl]
          vm.dirty_ratio="55"
        name: tuned-1-profile
      recommend:
      - priority: 20
        profile: tuned-1-profile
Note
If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, as shown in the sketch after this note, node labels do not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace.
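For illustration, a label-based match entry in the recommend section might look like the following sketch. The tuned-1-node-label key is taken from the example above; omitting a value matches nodes that carry the label with any value:
recommend:
- priority: 20
  profile: tuned-1-profile
  match:
  - label: tuned-1-node-label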
-
Create the ConfigMap object in the management cluster:
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
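If you want to confirm that the config map was created, you can list it in the clusters namespace. This optional check uses the names from the example manifest:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get configmap tuned-1 -n clusters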
-
Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one; a patch command sketch for the editing case follows the example below. In this example, assume that you have only one NodePool, named nodepool-1, which contains two nodes.
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  ...
  name: nodepool-1
  namespace: clusters
  ...
spec:
  ...
  tuningConfig:
  - name: tuned-1
status:
  ...
Note
You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster.
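If you prefer to update an existing node pool rather than create one, a merge patch along the following lines can add the reference. This is a sketch that assumes the nodepool-1 name and clusters namespace from the example above:
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch nodepool nodepool-1 -n clusters --type merge -p '{"spec":{"tuningConfig":[{"name":"tuned-1"}]}}'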
Verification
Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool, the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node.
-
List the Tuned objects in the hosted cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
Example output
NAME AGE
default 7m36s
rendered 7m36s
tuned-1 65s
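To inspect the synced object in more detail, you can print it as YAML. The object name tuned-1 matches the example output above:
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io tuned-1 -n openshift-cluster-node-tuning-operator -o yaml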
-
List the Profile objects in the hosted cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
Example output
NAME TUNED APPLIED DEGRADED AGE
nodepool-1-worker-1 tuned-1-profile True False 7m43s
nodepool-1-worker-2 tuned-1-profile True False 7m14s
Note
If no custom profiles are created, the openshift-node profile is applied by default.
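To see the conditions behind the APPLIED and DEGRADED columns, you can describe one of the Profile objects. The node name nodepool-1-worker-1 is taken from the example output above:
$ oc --kubeconfig="$HC_KUBECONFIG" describe profile.tuned.openshift.io nodepool-1-worker-1 -n openshift-cluster-node-tuning-operator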
-
To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
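If the profile was applied, the output reflects the value that the tuned-1-profile data sets, for example:
vm.dirty_ratio = 55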