The custom resource (CR) for the Operator has two major sections. The first section, profile:, is a list of TuneD profiles and their names. The second, recommend:, defines the profile selection logic.
Multiple custom tuning specifications can co-exist as multiple CRs in the Operator’s namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated.
The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:
- Managed: the Operator will update its operands as configuration resources are updated
- Unmanaged: the Operator will ignore changes to the configuration resources
- Removed: the Operator will remove its operands and resources the Operator provisioned
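For example, the Operator can be set to the Unmanaged state by adding the spec.managementState field to the default Tuned CR. The fragment below is a minimal sketch: it assumes the default CR is named default in the openshift-cluster-node-tuning-operator namespace and omits the remaining spec fields.
{
"apiVersion": "tuned.openshift.io/v1",
"kind": "Tuned",
"metadata": {
"name": "default",
"namespace": "openshift-cluster-node-tuning-operator"
},
"spec": {
"managementState": "Unmanaged"
}
}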
The profile:
section lists TuneD profiles and their names.
{
"profile": [
{
"name": "tuned_profile_1",
"data": "# TuneD profile specification\n[main]\nsummary=Description of tuned_profile_1 profile\n\n[sysctl]\nnet.ipv4.ip_forward=1\n# ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD\n"
},
{
"name": "tuned_profile_n",
"data": "# TuneD profile specification\n[main]\nsummary=Description of tuned_profile_n profile\n\n# tuned_profile_n profile settings\n"
}
]
}
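For reference, once the escaped newlines in the data string of tuned_profile_1 above are expanded, the profile corresponds to the following TuneD profile file:
# TuneD profile specification
[main]
summary=Description of tuned_profile_1 profile

[sysctl]
net.ipv4.ip_forward=1
# ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD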
The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items that recommend the profiles based on selection criteria.
"recommend": [
{
"recommend-item-1": details_of_recommendation,
# ...
"recommend-item-n": details_of_recommendation,
}
]
The individual items of the list have the following structure:
{
"profile": [
{
# ...
}
],
"recommend": [
{
"profile": <tuned_profile_name>, (1)
"priority":{ <priority>, (2)
},
"match": [ (3)
{
"label": <label_information> (4)
},
]
},
]
}
1. A TuneD profile to apply on a match. For example, tuned_profile_1.
2. Profile ordering priority. Lower numbers mean higher priority (0 is the highest priority).
3. Optional match criteria. If omitted, a profile match is assumed unless a profile with a higher priority matches first.
4. The label information used for matching.
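For instance, a recommend item that applies tuned_profile_1 with priority 10 to nodes carrying the node-role.kubernetes.io/worker label could look as follows; the profile name and label here are illustrative:
"recommend": [
{
"profile": "tuned_profile_1",
"priority": 10,
"match": [
{
"label": "node-role.kubernetes.io/worker"
}
]
}
]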
<match> is an optional list recursively defined as follows:
"match": [
{
"label": (1)
},
]
1. Node or pod label name.
If <match> is not omitted, all nested <match> sections must also evaluate to true. Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true. Therefore, the list acts as a logical OR operator.
Example: Node or pod label based matching
[
{
"match": [
{
"label": "tuned.openshift.io/elasticsearch",
"match": [
{
"label": "node-role.kubernetes.io/master"
},
{
"label": "node-role.kubernetes.io/infra"
}
],
"type": "pod"
}
],
"priority": 10,
"profile": "openshift-control-plane-es"
},
{
"match": [
{
"label": "node-role.kubernetes.io/master"
},
{
"label": "node-role.kubernetes.io/infra"
}
],
"priority": 20,
"profile": "openshift-control-plane"
},
{
"priority": 30,
"profile": "openshift-node"
}
]
The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority (10) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node checks whether a pod with the tuned.openshift.io/elasticsearch label is running on the same node. If not, the entire <match> section evaluates to false. If there is such a pod with the label, in order for the <match> section to evaluate to true, the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra.
If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane) is considered. This profile is applied if the containerized TuneD pod runs on a node with the node-role.kubernetes.io/master or node-role.kubernetes.io/infra label.
Finally, the profile openshift-node has the lowest priority of 30. It lacks the <match> section and, therefore, always matches. It acts as a catch-all that sets the openshift-node profile if no other profile with a higher priority matches on a given node.
Example: Machine pool based matching
{
"apiVersion": "tuned.openshift.io/v1",
"kind": "Tuned",
"metadata": {
"name": "openshift-node-custom",
"namespace": "openshift-cluster-node-tuning-operator"
},
"spec": {
"profile": [
{
"data": "[main]\nsummary=Custom OpenShift node profile with an additional kernel parameter\ninclude=openshift-node\n[bootloader]\ncmdline_openshift_node_custom=+skew_tick=1\n",
"name": "openshift-node-custom"
}
],
"recommend": [
{
"priority": 20,
"profile": "openshift-node-custom"
}
]
}
}
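With the escaped newlines in the data string expanded, the openshift-node-custom profile above corresponds to the following TuneD profile:
[main]
summary=Custom OpenShift node profile with an additional kernel parameter
include=openshift-node
[bootloader]
cmdline_openshift_node_custom=+skew_tick=1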
Cloud provider-specific TuneD profiles
With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a Red Hat OpenShift Service on AWS cluster. This can be accomplished without adding additional node labels or grouping nodes into machine pools.
This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load the provider-<cloud-provider> profile, if such a profile exists.
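As an illustration, the mapping works as sketched below; the providerID value is hypothetical and only its <cloud-provider> prefix matters here:
# Hypothetical example values:
# node spec.providerID:                 gce://my-project/us-central1-a/my-instance
# /var/lib/ocp-tuned/provider contents: gce
# TuneD profile loaded, if defined:     provider-gce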
The openshift profile, which both the openshift-control-plane and openshift-node profiles inherit settings from, is now updated to use this functionality through conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes.
Example GCE Cloud provider profile
{
"apiVersion": "tuned.openshift.io/v1",
"kind": "Tuned",
"metadata": {
"name": "provider-gce",
"namespace": "openshift-cluster-node-tuning-operator"
},
"spec": {
"profile": [
{
"data": "[main]\nsummary=GCE Cloud provider-specific profile\n# Your tuning for GCE Cloud provider goes here.\n",
"name": "provider-gce"
}
]
}
}
Note: Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles.