After you configure your environment for hosted control planes and create a hosted cluster, you can further manage your clusters and nodes.
Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Updates for hosted control planes involve updating the hosted cluster and the node pools. For a cluster to remain fully operational during an update process, you must meet the requirements of the Kubernetes version skew policy while completing the control plane and node updates.
The spec.release value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release value to the HostedControlPlane.spec.release value and runs the appropriate Control Plane Operator version.
The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO).
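For example, you can read back the release image that a HostedCluster object currently requests for its control plane. This is a minimal sketch; the cluster name and namespace are placeholders:
$ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \
    -o jsonpath='{.spec.release.image}{"\n"}'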
With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways:
Changing the spec.release or spec.config values.
Changing any platform-specific field, such as the AWS instance type, as shown in the example after this list. The result is a set of new instances with the new type.
Changing the cluster configuration, if the change propagates to the node.
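The following patch is a sketch of the platform-specific case: it changes the AWS instance type of a node pool, which results in a rolling replacement of the instances. The node pool name, namespace, and instance type are placeholder values, and the field path assumes an AWS node pool:
$ oc patch nodepool <nodepool_name> -n <hosted_cluster_namespace> --type=merge \
    --patch '{"spec":{"platform":{"aws":{"instanceType":"m5.xlarge"}}}}'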
Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value.
After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a new node pool and delete the original one.
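For example, you can check which update type an existing node pool uses before deciding whether you need to re-create it. The names are placeholders:
$ oc get nodepool <nodepool_name> -n <hosted_cluster_namespace> \
    -o jsonpath='{.spec.management.upgradeType}{"\n"}'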
A replace update creates instances in the new version while it removes old instances from the previous version. This update type is effective in cloud environments where this level of immutability is cost effective.
Replace updates do not preserve any manual changes because the node is entirely re-provisioned.
An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal.
In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates.
On hosted control planes, you update your version of OpenShift Container Platform by updating the node pools. The node pool version must not surpass the hosted control plane version.
To start the process to update to a new version of OpenShift Container Platform, change the spec.release.image value of the node pool by entering the following command:
$ oc -n <hosted_cluster_namespace> patch nodepool <nodepool_name> --patch '{"spec":{"release":{"image": "example"}}}' --type=merge
To verify that the new version was rolled out, check the .status.version value and the status conditions.
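For example, the following commands are a minimal sketch of that check; the names are placeholders:
$ oc get nodepool <nodepool_name> -n <hosted_cluster_namespace> \
    -o jsonpath='{.status.version}{"\n"}'
$ oc get nodepool <nodepool_name> -n <hosted_cluster_namespace> \
    -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'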
On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster.
To create a MachineConfig object inside of a config map in the management cluster, enter the following information:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: clusters
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig-name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: ${PATH} (1)
(1) Sets the path on the node where the MachineConfig object is stored.
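Before you can reference the config map from a node pool, it must exist in the management cluster. The following command is a minimal sketch, assuming that you saved the manifest to a file; the file name is a placeholder and $MGMT_KUBECONFIG follows the convention used elsewhere in this section:
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f <configmap_file>.yaml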
After you add the object to the config map, you can apply the config map to the node pool as follows:
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  config:
  - name: ${configmap-name}
# ...
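As an alternative to editing the node pool interactively, the following hedged sketch patches the spec.config list directly. A merge patch replaces the entire spec.config list, so include any existing entries; the names are placeholders:
$ oc patch nodepool <nodepool_name> -n <hosted_cluster_namespace> --type=merge \
    --patch '{"spec":{"config":[{"name":"<configmap-name>"}]}}'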
To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools.
Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-1
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: tuned-1
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Custom OpenShift profile
          include=openshift-node
          [sysctl]
          vm.dirty_ratio="55"
        name: tuned-1-profile
      recommend:
      - priority: 20
        profile: tuned-1-profile
If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool.
Create the ConfigMap object in the management cluster:
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool, named nodepool-1, which contains 2 nodes.
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  ...
  name: nodepool-1
  namespace: clusters
  ...
spec:
  ...
  tuningConfig:
  - name: tuned-1
status:
  ...
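As an alternative to editing the node pool interactively, the following sketch patches the spec.tuningConfig list directly, using the example names from this procedure. A merge patch replaces the entire spec.tuningConfig list, so include any existing entries:
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch nodepool nodepool-1 -n clusters --type=merge \
    --patch '{"spec":{"tuningConfig":[{"name":"tuned-1"}]}}'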
You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster.
Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool, the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node.
List the Tuned objects in the hosted cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
NAME       AGE
default    7m36s
rendered   7m36s
tuned-1    65s
List the Profile objects in the hosted cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
NAME                  TUNED             APPLIED   DEGRADED   AGE
nodepool-1-worker-1   tuned-1-profile   True      False      7m43s
nodepool-1-worker-2   tuned-1-profile   True      False      7m14s
If no custom profiles are created, the openshift-node profile is applied by default.
To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
vm.dirty_ratio = 55
After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane.
You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview).
Create a namespace and an Operator group:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
Create a subscription to the SR-IOV Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "4.13"
  name: sriov-network-operator
  config:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
  source: redhat-operators
  sourceNamespace: openshift-marketplace
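Assuming that you saved the previous two manifests to files, the following commands are a minimal sketch of applying them to the hosted cluster; the file names are placeholders:
$ oc apply -f <namespace_and_operatorgroup_file>.yaml
$ oc apply -f <subscription_file>.yaml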
To verify that the SR-IOV Operator is ready, run the following command and view the resulting output:
$ oc get csv -n openshift-sriov-network-operator
NAME                                         DISPLAY                   VERSION               REPLACES                                     PHASE
sriov-network-operator.4.13.0-202211021237   SR-IOV Network Operator   4.13.0-202211021237   sriov-network-operator.4.13.0-202210290517   Succeeded
To verify that the SR-IOV pods are deployed, run the following command:
$ oc get pods -n openshift-sriov-network-operator
The steps to delete a hosted cluster differ depending on which provider you use.
If the cluster is on AWS, follow the instructions in Destroying a hosted cluster on AWS.
If the cluster is on bare metal, follow the instructions in Destroying a hosted cluster on bare metal.
If the cluster is on OpenShift Virtualization, follow the instructions in Destroying a hosted cluster on OpenShift Virtualization.
If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature.