OpenShift Container Platform 4.13 introduces the following notable technical changes.
Cloud controller managers for additional cloud providers
The Kubernetes community plans to deprecate the use of the Kubernetes controller manager to interact with underlying cloud platforms in favor of using cloud controller managers. As a result, there is no plan to add Kubernetes controller manager support for any new cloud platforms.
The Nutanix implementation that is added in this release of OpenShift Container Platform uses cloud controller managers. In addition, this release introduces the General Availability of using cloud controller managers for VMware vSphere.
To manage the cloud controller manager and cloud node manager deployments and lifecycles, use the Cluster Cloud Controller Manager Operator.
The MCD now syncs kubelet CA certificates on paused pools
Previously, the Machine Config Operator (MCO) updated the kubelet client certificate authority (CA) certificate, /etc/kubernetes/kubelet-ca.crt, as part of the regular machine config update. Starting with OpenShift Container Platform 4.13, the kubelet-ca.crt is no longer updated as part of the regular machine config update. Instead, the Machine Config Daemon (MCD) automatically keeps the kubelet-ca.crt up to date whenever the certificate changes.
Also, if a machine config pool is paused, the MCD can now push newly rotated certificates to the nodes in that pool. A new rendered machine config that contains the certificate change is still generated for the pool, as in previous versions, and the pool indicates that an update is required; this behavior will be removed in a future release. However, because the certificate is updated separately, it is safe to keep the pool paused, provided there are no other pending updates.
In addition, the MachineConfigControllerPausedPoolKubeletCA alert has been removed, because nodes should now always have the most up-to-date kubelet-ca.crt.
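For reference, a machine config pool is paused by setting spec.paused to true on its MachineConfigPool object. The following minimal excerpt, shown here against the default worker pool, only illustrates the field that the behavior described above depends on; it is not a complete pool definition.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  paused: true     # while true, new rendered machine configs are not rolled out to this pool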
Change in SSH key location
OpenShift Container Platform 4.13 introduces a RHEL 9.2 based RHCOS. Before this update, SSH keys were located in /home/core/.ssh/authorized_keys on RHCOS. With this update, on RHEL 9.2 based RHCOS, SSH keys are located in /home/core/.ssh/authorized_keys.d/ignition.
If you customized the default OpenSSH /etc/ssh/sshd_config server configuration file, you must update it according to this Red Hat Knowledgebase article.
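One possible approach, which is only a sketch and not a substitute for the Knowledgebase procedure, is to make sure sshd still reads keys from the new Ignition location, for example by shipping an AuthorizedKeysFile directive that lists both paths. In the following MachineConfig, the object name and drop-in path are hypothetical, and the example assumes that your custom configuration still includes the /etc/ssh/sshd_config.d drop-in directory.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-sshd-authorized-keys-d
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/ssh/sshd_config.d/99-authorized-keys.conf
        mode: 0644
        overwrite: true
        contents:
          # decodes to: AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys.d/ignition
          source: data:,AuthorizedKeysFile%20.ssh/authorized_keys%20.ssh/authorized_keys.d/ignition%0A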
Future restricted enforcement for pod security admission
Currently, pod security violations are shown as warnings and logged in the audit logs, but do not cause the pod to be rejected.
Global restricted enforcement for pod security admission is currently planned for the next minor release of OpenShift Container Platform. When this restricted enforcement is enabled, pods with pod security violations will be rejected.
To prepare for this upcoming change, ensure that your workloads match the pod security admission profile that applies to them. Workloads that are not configured according to the enforced security standards defined globally or at the namespace level will be rejected. The restricted-v2 SCC admits workloads according to the Restricted Kubernetes definition.
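For illustration, a workload that conforms to the restricted profile typically sets the following security context fields; the deployment name and image in this sketch are placeholders, and your workloads might require additional settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      securityContext:
        runAsNonRoot: true           # required by the restricted profile
        seccompProfile:
          type: RuntimeDefault       # required by the restricted profile
      containers:
      - name: app
        image: registry.example.com/example-app:latest   # placeholder image
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL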
If you are receiving pod security violations, see the following resources:
The oc-mirror plugin now retrieves graph data container images from an OpenShift API endpoint
The oc-mirror OpenShift CLI (oc) plugin now downloads the graph data tarball from an OpenShift API endpoint instead of downloading the entire graph data repository from GitHub. Retrieving this data from Red Hat rather than an external vendor is more suitable for users with stringent security and compliance restrictions on external content.
The data that the oc-mirror plugin downloads now excludes content that is in the graph data repository but is not needed by the OpenShift Update Service. The graph data container image also uses UBI Micro as its base image instead of UBI, resulting in a significantly smaller container image than before.
These changes do not affect the user workflow for the oc-mirror plugin.
The Dockerfile for the graph data container image is now retrieved from an OpenShift API endpoint
If you are creating a graph data container image for the OpenShift Update Service by using the Dockerfile, note that the graph data tarball is now downloaded from an OpenShift API endpoint instead of GitHub.
The nodeip-configuration service is now enabled on a vSphere user-provisioned infrastructure cluster
In OpenShift Container Platform 4.13, the nodeip-configuration service is now enabled on a vSphere user-provisioned infrastructure cluster. This service determines the network interface controller (NIC) that OpenShift Container Platform uses for communication with the Kubernetes API server when the node boots. In rare circumstances, the service might select an incorrect node IP after an upgrade. If this happens, you can use the NODEIP_HINT feature to restore the original node IP. See Troubleshooting network issues.
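As a sketch of how the hint is commonly supplied, a NODEIP_HINT=<ip> line is written to a file on the node, typically delivered with a MachineConfig similar to the following. The pool role, object name, and the 192.0.2.1 address are placeholders, and the file path reflects the node IP selection troubleshooting documentation; verify the details against that procedure before use.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-nodeip-hint-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/default/nodeip-configuration
        mode: 0644
        overwrite: true
        contents:
          # decodes to: NODEIP_HINT=192.0.2.1
          source: data:text/plain;charset=utf-8;base64,Tk9ERUlQX0hJTlQ9MTkyLjAuMi4xCg==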
Operator SDK 1.28
Operator SDK 1.28 supports Kubernetes 1.26.
If you have Operator projects that were previously created or maintained with Operator SDK 1.25, update your projects to maintain compatibility with Operator SDK 1.28.
Change in disk ordering behavior for RHCOS based on RHEL 9.2
OpenShift Container Platform 4.13 introduces a RHEL 9.2 based RHCOS. With this update, symbolic disk naming can change across reboots. This can cause issues if configuration files applied after installation, or services created when provisioning a node, reference a disk by a symbolic name such as /dev/sda. The effects of this issue depend on the component you are configuring. It is recommended to use a persistent naming scheme for devices, including any specific disk references, such as /dev/disk/by-id.
With this change, you might need to adjust existing automation workflows in cases where monitoring collects information about the install device for each node.
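For example, a service that mounts an additional disk through a MachineConfig could reference the device by its /dev/disk/by-id link instead of /dev/sda. In the following sketch, the pool role, mount point, filesystem type, and by-id value are all hypothetical.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-lib-containers-mount
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: var-lib-containers.mount
        enabled: true
        contents: |
          [Unit]
          Description=Mount a data disk by its persistent by-id link
          [Mount]
          What=/dev/disk/by-id/wwn-0x5000c500a0b1c2d3
          Where=/var/lib/containers
          Type=xfs
          [Install]
          WantedBy=local-fs.target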
Documentation about backup, restore, and disaster recovery for hosted control planes moved
In the documentation for OpenShift Container Platform 4.13, the procedures to back up and restore etcd on a hosted cluster and to restore a hosted cluster within an AWS region were moved from the "Backup and restore" section to the "Hosted control planes" section. The content itself was not changed.