
Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.

Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.

About this release

OpenShift Container Platform (RHSA-2022:0056) is now available. This release uses Kubernetes 1.23 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.10 are included in this topic.

Red Hat did not publicly release OpenShift Container Platform 4.10.0 as the GA version; instead, it is releasing OpenShift Container Platform 4.10.3 as the GA version.

OpenShift Container Platform 4.10 clusters are available at https://console.redhat.com/openshift. The Red Hat OpenShift Cluster Manager application for OpenShift Container Platform allows you to deploy OpenShift clusters to either on-premises or cloud environments.

OpenShift Container Platform 4.10 is supported on Red Hat Enterprise Linux (RHEL) 8.4 and 8.5, as well as on Red Hat Enterprise Linux CoreOS (RHCOS) 4.10.

You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines.

OpenShift Container Platform layered and dependent component support and compatibility

The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.

New features and enhancements

This release adds improvements related to the following components and concepts.

Documentation

Getting started with OpenShift Container Platform

OpenShift Container Platform 4.10 now includes a getting started guide. Getting Started with OpenShift Container Platform defines basic terminology and provides role-based next steps for developers and administrators.

The tutorials walk new users through the web console and the OpenShift CLI (oc) interfaces. New users can accomplish the following tasks by using the getting started guide:

  • Create a project

  • Grant view permissions

  • Deploy a container image from Quay

  • Examine and scale an application

  • Deploy a Python application from GitHub

  • Connect to a database from Quay

  • Create a secret

  • Load and view your application

Red Hat Enterprise Linux CoreOS (RHCOS)

Improved customization of bare metal RHCOS installation

The coreos-installer utility now has iso customize and pxe customize subcommands for more flexible customization when installing RHCOS on bare metal from the live ISO and PXE images.

This includes the ability to customize the installation to fetch Ignition configs from HTTPS servers that use a custom certificate authority or self-signed certificate.

Installation and upgrade

New default component types for AWS installations

The OpenShift Container Platform 4.10 installer uses new default component types for installations on AWS. The installation program uses the following components by default:

  • AWS EC2 M6i instances for both control plane and compute nodes, where available

  • AWS EBS gp3 storage

Enhancements to API in the install-config.yaml file

Previously, when a user installed OpenShift Container Platform on bare metal installer-provisioned infrastructure, there was no way to configure custom network interfaces, such as static IPs or VLANs, to communicate with the Ironic server.

When configuring a Day 1 installation on bare metal only, users can now use the API in the install-config.yaml file to customize the network configuration (networkConfig). This configuration is set during the installation and provisioning process and includes advanced options, such as setting static IPs per host.
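
For illustration, the following fragment sketches how a networkConfig section might look for one host in the install-config.yaml file; the host name, MAC address, interface name, and IP addresses are placeholders, and the NMState fields shown are only a subset of what the API accepts:

  platform:
    baremetal:
      hosts:
        - name: openshift-worker-0              # placeholder host name
          role: worker
          bootMACAddress: 52:54:00:aa:bb:cc     # placeholder MAC address
          networkConfig:                        # NMState syntax
            interfaces:
              - name: enp2s0                    # placeholder interface name
                type: ethernet
                state: up
                ipv4:
                  enabled: true
                  address:
                    - ip: 192.0.2.20            # placeholder static IP
                      prefix-length: 24
            dns-resolver:
              config:
                server:
                  - 192.0.2.1
            routes:
              config:
                - destination: 0.0.0.0/0
                  next-hop-address: 192.0.2.1
                  next-hop-interface: enp2s0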

OpenShift Container Platform on ARM

OpenShift Container Platform 4.10 is now supported on ARM-based AWS EC2 and bare-metal platforms. Instance availability and installation documentation can be found in Supported installation methods for different platforms.

The following features are supported for OpenShift Container Platform on ARM:

  • OpenShift Cluster Monitoring

  • RHEL 8 Application Streams

  • OVNKube

  • Elastic Block Store (EBS) for AWS

  • AWS .NET applications

  • NFS storage on bare metal

The following Operators are supported for OpenShift Container Platform on ARM:

  • Node Tuning Operator

  • Node Feature Discovery Operator

  • Cluster Samples Operator

  • Cluster Logging Operator

  • Elasticsearch Operator

  • Service Binding Operator

Installing a cluster on IBM Cloud using installer-provisioned infrastructure (Technology Preview)

OpenShift Container Platform 4.10 introduces support for installing a cluster on IBM Cloud using installer-provisioned infrastructure in Technology Preview.

The following limitations apply for IBM Cloud using IPI:

  • Deploying IBM Cloud using IPI on a previously existing network is not supported.

  • The Cloud Credential Operator (CCO) can use only Manual mode. Mint mode and STS are not supported.

  • IBM Cloud DNS Services is not supported. An instance of IBM Cloud Internet Services is required.

  • Private or disconnected deployments are not supported.

For more information, see Preparing to install on IBM Cloud.

Thin provisioning support for VMware vSphere cluster installation

OpenShift Container Platform 4.10 introduces support for thin-provisioned disks when you install a cluster using installer-provisioned infrastructure. You can provision disks as thin, thick, or eagerZeroedThick. For more information about disk provisioning modes in VMware vSphere, see Installation configuration parameters.
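
As a minimal sketch, the disk provisioning mode can be selected with the diskType parameter in the install-config.yaml file; the vCenter, datacenter, and datastore values are placeholders:

  platform:
    vsphere:
      vcenter: vcenter.example.com      # placeholder
      datacenter: dc1                   # placeholder
      defaultDatastore: datastore1      # placeholder
      diskType: thin                    # thin, thick, or eagerZeroedThick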

Installing a cluster into an Amazon Web Services GovCloud region

Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMIs) are now available for AWS GovCloud regions. The availability of these AMIs improves the installation process because you are no longer required to upload a custom RHCOS AMI to deploy a cluster.

Using a custom AWS IAM role for instance profiles

Beginning with OpenShift Container Platform 4.10, if you configure a cluster with an existing IAM role, the installation program no longer adds the shared tag to the role when deploying the cluster. This enhancement improves the installation process for organizations that want to use a custom IAM role, but whose security policies prevent the use of the shared tag.

CSI driver installation on vSphere clusters

To install a CSI driver on a cluster running on vSphere, you must have the following components installed:

  • Virtual hardware version 15 or later

  • vSphere version 6.7 Update 3 or later

  • VMware ESXi version 6.7 Update 3 or later

Earlier versions of these components are deprecated but remain fully supported. However, OpenShift Container Platform 4.11 will require vSphere virtual hardware version 15 or later.

If your cluster is deployed on vSphere and any of the preceding components are at a version earlier than those listed above, upgrading from OpenShift Container Platform 4.9 to 4.10 on vSphere is supported, but no vSphere CSI driver is installed. Bug fixes and other upgrades to 4.10 are still supported; however, upgrading to 4.11 will be unavailable.

Installing a cluster on Alibaba Cloud using installer-provisioned infrastructure (Technology Preview)

OpenShift Container Platform 4.10 introduces support for installing a cluster on Alibaba Cloud using installer-provisioned infrastructure as a Technology Preview. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.

Installing a cluster on Microsoft Azure Stack Hub using installer-provisioned infrastructure

OpenShift Container Platform 4.10 introduces support for installing a cluster on Azure Stack Hub using installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.

Conditional updates

OpenShift Container Platform 4.10 adds support for consuming conditional update paths provided by the OpenShift Update Service. Conditional update paths convey identified risks and the conditions under which those risks apply to clusters. The Administrator perspective on the web console only offers recommended upgrade paths for which the cluster does not match known risks. However, OpenShift CLI (oc) 4.10 or later can be used to display additional upgrade paths for OpenShift Container Platform 4.10 clusters. Associated risk information including supporting documentation references is displayed with the paths. The administrator may review the referenced materials and choose to perform the supported, but no longer recommended, upgrade.

Disconnected mirroring with the oc-mirror CLI plug-in (Technology Preview)

This release introduces the oc-mirror OpenShift CLI (oc) plug-in as a Technology Preview. You can use the oc-mirror plug-in to mirror images in a disconnected environment.

Installing a cluster on RHOSP that uses OVS-DPDK

You can now install a cluster on Red Hat OpenStack Platform (RHOSP) for which compute machines run on Open vSwitch with the Data Plane Development Kit (OVS-DPDK) networks. Workloads that run on these machines can benefit from the performance and latency improvements of OVS-DPDK.

Setting compute machine affinity during installation on RHOSP

You can now select compute machine affinity when you install a cluster on RHOSP. By default, compute machines are deployed with a soft-anti-affinity server policy, but you can also choose anti-affinity or soft-affinity policies.

Web console

Developer perspective

  • With this update, you can specify the name of a service binding connector in the Topology view while making a binding connection.

  • With this update, the pipeline creation workflow has been enhanced:

    • You can now choose a user-defined pipeline from a drop-down list while importing your application from the Import from Git pipeline workflow.

    • Default webhooks are added for pipelines that are created by using the Import from Git workflow, and the URL is visible in the side panel of the selected resources in the Topology view.

    • You can now opt out of the default Tekton Hub integration by setting the parameter enable-devconsole-integration to false in the TektonConfig custom resource.

      Example TektonConfig CR to opt out of Tekton Hub integration
      ...
      hub:
        params:
          - name: enable-devconsole-integration
            value: 'false'
      ...
    • The pipeline builder contains the Tekton Hub tasks that are supported by the cluster; all unsupported tasks are excluded from the list.

  • With this update, the application export workflow now displays the export logs dialog or alert while the export is in progress. You can use the dialog to cancel or restart the exporting process.

  • With this update, you can add your new Helm Chart Repository to the Developer Catalog by creating a custom resource; a sketch of such a resource is shown after this list. Refer to the quick start guides in the Developer perspective to add a new ProjectHelmChartRepository.

  • With this update, you can now access community devfiles samples using the Developer Catalog.
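
The following is a minimal sketch of the custom resource mentioned in the Helm Chart Repository item above, assuming the ProjectHelmChartRepository API in the helm.openshift.io/v1beta1 group; the repository name and URL are placeholders:

  apiVersion: helm.openshift.io/v1beta1
  kind: ProjectHelmChartRepository
  metadata:
    name: example-repo                            # placeholder name
  spec:
    connectionConfig:
      url: https://charts.example.com/releases    # placeholder chart repository URL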

Dynamic plug-in (Technology Preview)

Starting with OpenShift Container Platform 4.10, the ability to create OpenShift console dynamic plug-ins is now available as a Technology Preview feature. You can use this feature to customize your interface at runtime in many ways, including:

  • Adding custom pages

  • Adding perspectives and updating navigation items

  • Adding tabs and actions to resource pages

For more information about the dynamic plug-in, see Adding a dynamic plug-in to the OpenShift Container Platform web console.

Multicluster console (Technology Preview)

Starting with OpenShift Container Platform 4.10, the multicluster console is now available as a Technology Preview feature. By enabling this feature, you can connect to remote clusters' API servers from a single OpenShift Container Platform console.

Running a pod in debug mode

With this update, you can now view debug terminals in the web console. When a pod has a container that is in a CrashLoopBackOff state, a debug pod can be launched. A terminal interface is displayed and can be used to debug the crash looping container.

  • You can access this feature from the pod status pop-up window, which opens when you click the status of a pod and provides links to debug terminals for each crash-looping container within that pod.

  • You can also access this feature on the Logs tab of the pod details page. A debug terminal link is displayed above the log window when a crash looping container is selected.

Additionally, the pod status pop-up window now provides links to the Logs and Events tabs of the pod details page.

Customized workload notifications

With this update, you can customize workload notifications on the User Preferences page. The User workload notifications setting under the Notifications tab lets you hide user workload notifications that appear on the Cluster Overview page or in your notification drawer.

Improved quota visibility

With this update, non-admin users are now able to view their usage of the AppliedClusterResourceQuota on the Project Overview, ResourceQuotas, and API Explorer pages to determine the cluster-scoped quota available for use. Additionally, AppliedClusterResourceQuota details can now be found on the Search page.

Cluster support level

OpenShift Container Platform now enables you to view support level information about your cluster on the Overview → Details card, on the Cluster Settings page, and in the About modal, and adds a notification to your notification drawer when your cluster is unsupported. From the Overview page, you can manage subscription settings under Service Level Agreement (SLA).

IBM Z and LinuxONE

With this release, IBM Z and LinuxONE are now compatible with OpenShift Container Platform 4.10. The installation can be performed with z/VM or RHEL KVM. For installation instructions, see the IBM Z and LinuxONE installation documentation.

Notable enhancements

The following new features are supported on IBM Z and LinuxONE with OpenShift Container Platform 4.10:

  • Horizontal pod autoscaling

  • The following Multus CNI plug-ins are supported:

    • Bridge

    • Host-device

    • IPAM

    • IPVLAN

  • Compliance Operator 0.1.49

  • NMState Operator

  • OVN-Kubernetes IPsec encryption

  • Vertical Pod Autoscaler Operator

Supported features

The following features are also supported on IBM Z and LinuxONE:

  • Currently, the following Operators are supported:

    • Cluster Logging Operator

    • Compliance Operator 0.1.49

    • Local Storage Operator

    • NFD Operator

    • NMState Operator

    • OpenShift Elasticsearch Operator

    • Service Binding Operator

    • Vertical Pod Autoscaler Operator

  • Encrypting data stored in etcd

  • Helm

  • Multipathing

  • Persistent storage using iSCSI

  • Persistent storage using local volumes (Local Storage Operator)

  • Persistent storage using hostPath

  • Persistent storage using Fibre Channel

  • Persistent storage using Raw Block

  • OVN-Kubernetes

  • Support for multiple network interfaces

  • Three-node cluster support

  • z/VM Emulated FBA devices on SCSI disks

  • 4K FCP block device

The following feature is available only for OpenShift Container Platform on IBM Z and LinuxONE in 4.10:

  • HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD storage

Restrictions

The following restrictions impact OpenShift Container Platform on IBM Z and LinuxONE:

  • The following OpenShift Container Platform Technology Preview features are unsupported:

    • Precision Time Protocol (PTP) hardware

  • The following OpenShift Container Platform features are unsupported:

    • Automatic repair of damaged machines with machine health checking

    • CodeReady Containers (CRC)

    • Controlling overcommit and managing container density on nodes

    • CSI volume cloning

    • CSI volume snapshots

    • FIPS cryptography

    • NVMe

    • OpenShift Metering

    • OpenShift Virtualization

    • Tang mode disk encryption during OpenShift Container Platform deployment

  • Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS)

  • Persistent shared storage must be provisioned by using OpenShift Data Foundation or other supported storage protocols

  • Persistent non-shared storage must be provisioned by using local storage, such as iSCSI or FC, or by using LSO with DASD, FCP, or EDEV/FBA

IBM Power

With this release, IBM Power is now compatible with OpenShift Container Platform 4.10. For installation instructions, see the IBM Power installation documentation.

Notable enhancements

The following new features are supported on IBM Power with OpenShift Container Platform 4.10:

  • Horizontal pod autoscaling

  • The following Multus CNI plug-ins are supported:

    • Bridge

    • Host-device

    • IPAM

    • IPVLAN

  • Compliance Operator 0.1.49

  • NMState Operator

  • OVN-Kubernetes IPsec encryption

  • Vertical Pod Autoscaler Operator

Supported features

The following features are also supported on IBM Power:

  • Currently, the following Operators are supported:

    • Cluster Logging Operator

    • Compliance Operator 0.1.49

    • Local Storage Operator

    • NFD Operator

    • NMState Operator

    • OpenShift Elasticsearch Operator

    • SR-IOV Network Operator

    • Service Binding Operator

    • Vertical Pod Autoscaler Operator

  • Encrypting data stored in etcd

  • Helm

  • Multipathing

  • Multus SR-IOV

  • NVMe

  • OVN-Kubernetes

  • Persistent storage using iSCSI

  • Persistent storage using local volumes (Local Storage Operator)

  • Persistent storage using hostPath

  • Persistent storage using Fibre Channel

  • Persistent storage using Raw Block

  • Support for multiple network interfaces

  • Support for Power10

  • Three-node cluster support

  • 4K Disk Support

Restrictions

The following restrictions impact OpenShift Container Platform on IBM Power:

  • The following OpenShift Container Platform Technology Preview features are unsupported:

    • Precision Time Protocol (PTP) hardware

  • The following OpenShift Container Platform features are unsupported:

    • Automatic repair of damaged machines with machine health checking

    • CodeReady Containers (CRC)

    • Controlling overcommit and managing container density on nodes

    • FIPS cryptography

    • OpenShift Metering

    • OpenShift Virtualization

    • Tang mode disk encryption during OpenShift Container Platform deployment

  • Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS)

  • Persistent storage must be of the Filesystem type that uses local volumes, OpenShift Data Foundation, Network File System (NFS), or Container Storage Interface (CSI)

Security and compliance

Information regarding new features, enhancements, and bug fixes for security and compliance components can be found in the Compliance Operator and File Integrity Operator release notes.

For more information about security and compliance, see OpenShift Container Platform security and compliance.

Networking

Dual-stack services require that ipFamilyPolicy is specified

When you create a service that uses multiple IP address families, you must explicitly specify ipFamilyPolicy: PreferDualStack or ipFamilyPolicy: RequireDualStack in your Service object definition. This change breaks backward compatibility with earlier releases of OpenShift Container Platform.

For more information, see BZ#2045576.
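
For example, a dual-stack Service definition might look like the following minimal sketch; the name, selector, and ports are placeholders:

  apiVersion: v1
  kind: Service
  metadata:
    name: example-service             # placeholder
  spec:
    ipFamilyPolicy: PreferDualStack   # or RequireDualStack
    ipFamilies:
      - IPv4
      - IPv6
    selector:
      app: example                    # placeholder
    ports:
      - port: 80
        targetPort: 8080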

Change cluster network MTU after cluster installation

After cluster installation, if you are using the OpenShift SDN cluster network provider or the OVN-Kubernetes cluster network provider, you can change your hardware MTU and your cluster network MTU values. Changing the MTU across the cluster is disruptive and requires that each node is rebooted several times. For more information, see Changing the cluster network MTU.

OVN-Kubernetes support for gateway configuration

The OVN-Kubernetes CNI network provider adds support for configuring how egress traffic is sent to the node gateway. By default, egress traffic is processed in OVN to exit the cluster and traffic is not affected by specialized routes in the kernel routing table.

This enhancement adds a gatewayConfig.routingViaHost field. With this update, the field can be set at runtime as a post-installation activity, and when it is set to true, egress traffic is sent from pods to the host networking stack. This update benefits highly specialized installations and applications that rely on manually configured routes in the kernel routing table.

This enhancement has an interaction with the Open vSwitch hardware offloading feature. With this update, when the gatewayConfig.routingViaHost field is set to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
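
As a sketch, the field is set on the cluster network operator configuration; the following assumes the standard operator.openshift.io/v1 Network resource named cluster:

  apiVersion: operator.openshift.io/v1
  kind: Network
  metadata:
    name: cluster
  spec:
    defaultNetwork:
      type: OVNKubernetes
      ovnKubernetesConfig:
        gatewayConfig:
          routingViaHost: true   # send egress traffic through the host networking stack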

Enhancements to networking metrics

The following metrics are now available for clusters. The metric names that start with sdn_controller are unique to the OpenShift SDN CNI network provider. The metric names that start with ovn are unique to the OVN-Kubernetes CNI network provider:

  • network_attachment_definition_instances{networks="egress-router"}

  • openshift_unidle_events_total

  • ovn_controller_bfd_run

  • ovn_controller_ct_zone_commit

  • ovn_controller_flow_generation

  • ovn_controller_flow_installation

  • ovn_controller_if_status_mgr

  • ovn_controller_if_status_mgr_run

  • ovn_controller_if_status_mgr_update

  • ovn_controller_integration_bridge_openflow_total

  • ovn_controller_ofctrl_seqno_run

  • ovn_controller_patch_run

  • ovn_controller_pinctrl_run

  • ovnkube_master_ipsec_enabled

  • ovnkube_master_num_egress_firewall_rules

  • ovnkube_master_num_egress_firewalls

  • ovnkube_master_num_egress_ips

  • ovnkube_master_pod_first_seen_lsp_created_duration_seconds

  • ovnkube_master_pod_lsp_created_port_binding_duration_seconds

  • ovnkube_master_pod_port_binding_chassis_port_binding_up_duration_seconds

  • ovnkube_master_pod_port_binding_port_binding_chassis_duration_seconds

  • sdn_controller_num_egress_firewall_rules

  • sdn_controller_num_egress_firewalls

  • sdn_controller_num_egress_ips

The ovnkube_master_resource_update_total metric is removed for the 4.10 release.

Switching between YAML view and a web console form

  • Previously, changes were not retained when switching between YAML view and Form view on the web console. Additionally, after switching to YAML view, you could not return to Form view. With this update, you can now easily switch between YAML view and Form view on the web console without losing changes.

Listing pods targeted by network policies

When using the network policy functionality in the OpenShift Container Platform web console, the pods affected by a policy are listed. The list changes as the combined namespace and pod selectors in these policy sections are modified:

  • Peer definition

  • Rule definition

  • Ingress

  • Egress

The list of impacted pods includes only those pods accessible by the user.

Enhancement to must-gather to simplify network tracing

The oc adm must-gather command is enhanced in a way that simplifies collecting network packet captures.

Previously, oc adm must-gather could start a single debug pod only. With this enhancement, you can start a debug pod on multiple nodes at the same time.

You can use the enhancement to run packet captures on multiple nodes at the same time to simplify troubleshooting network communication issues. A new --node-selector argument provides a way to identify which nodes you are collecting packet captures for.

Pod-level bonding for secondary networks (Technology Preview)

Bonding at the pod level is vital for enabling workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface in kernel mode from multiple single root I/O virtualization (SR-IOV) virtual function interfaces. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver.

Scenarios where pod-level bonding is required include creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host can be used to achieve high availability at pod level.

The current functionality of Bond CNI is available only in active-backup mode. For further details, see BZ#2037214.

Egress IP address support for clusters installed on public clouds

As a cluster administrator, you can associate one or more egress IP addresses with a namespace. An egress IP address ensures that a consistent source IP address is associated with traffic from a particular namespace that is leaving the cluster.

For the OVN-Kubernetes and OpenShift SDN cluster network providers, you can configure an egress IP address on the following public cloud providers:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

To learn more, refer to the egress IP documentation for your cluster network provider.
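
For the OVN-Kubernetes provider, for example, the configuration takes the form of an EgressIP custom resource similar to the following sketch (OpenShift SDN uses its own NetNamespace-based mechanism instead); the name, address, and label are placeholders:

  apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    name: egressip-prod               # placeholder
  spec:
    egressIPs:
      - 192.0.2.100                   # placeholder egress IP address
    namespaceSelector:
      matchLabels:
        env: prod                     # placeholder namespace label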

OpenShift SDN cluster network provider network policy support for egress policies and ipBlock except

If you use the OpenShift SDN cluster network provider, you can now use egress rules in network policy with ipBlock and ipBlock.except. You define egress policies in the egress array of the NetworkPolicy object.

For more information, refer to About network policy.
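
For example, an egress rule with ipBlock and an except list might look like the following sketch; the CIDR values are placeholders:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-egress-subnet         # placeholder
  spec:
    podSelector: {}
    policyTypes:
      - Egress
    egress:
      - to:
          - ipBlock:
              cidr: 192.0.2.0/24      # allowed destination range (placeholder)
              except:
                - 192.0.2.10/32       # excluded address (placeholder)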

Ingress Controller router compression

This enhancement adds the ability to configure global HTTP traffic compression on the HAProxy Ingress Controller for specific MIME types. This update enables gzip compression of your ingress workloads when there are large amounts of compressible routed traffic.

For more information, see Using router compression.
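
A minimal sketch of this configuration on the default IngressController, assuming the httpCompression field introduced for this feature; the MIME types listed are examples only:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: default
    namespace: openshift-ingress-operator
  spec:
    httpCompression:
      mimeTypes:
        - "text/html"
        - "text/css; charset=utf-8"
        - "application/json"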

Support for CoreDNS customization

A cluster administrator can now configure DNS servers to allow DNS name resolution through the configured servers for the default domain. A DNS forwarding configuration can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.

For more information, see Using DNS forwarding.
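
As a sketch, forwarding for a specific zone is configured on the default DNS operator object; the server name, zone, and upstream addresses are placeholders:

  apiVersion: operator.openshift.io/v1
  kind: DNS
  metadata:
    name: default
  spec:
    servers:
      - name: example-server          # placeholder
        zones:
          - example.com               # placeholder zone
        forwardPlugin:
          upstreams:
            - 192.0.2.53              # placeholder upstream resolvers
            - 192.0.2.54:5353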

Support for configuring the maximum length of the syslog message in the Ingress Controller

You can now set the maximum length of the syslog message in the Ingress Controller to any value between 480 and 4096 bytes.

For more information, see Ingress Controller configuration parameters.
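
As a sketch, the maximum length is set on the syslog logging destination of the IngressController; the receiver address and port are placeholders:

  apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: default
    namespace: openshift-ingress-operator
  spec:
    logging:
      access:
        destination:
          type: Syslog
          syslog:
            address: 192.0.2.15       # placeholder syslog receiver
            port: 10514               # placeholder port
            maxLength: 4096           # allowed range: 480-4096 bytes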

Set CoreDNS forwarding policy

You can now set the CoreDNS forwarding policy through the DNS Operator. The default value is Random, and you can also set the value to RoundRobin or Sequential.

For more information, see Using DNS forwarding.
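
Extending the forwarding sketch shown earlier, the policy is set per forwardPlugin; the values follow the description above, and the server name, zone, and upstream are placeholders:

  spec:
    servers:
      - name: example-server
        zones:
          - example.com
        forwardPlugin:
          policy: RoundRobin          # Random (default), RoundRobin, or Sequential
          upstreams:
            - 192.0.2.53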

Open vSwitch hardware offloading support for SR-IOV

You can now configure Open vSwitch hardware offloading to increase data processing performance on compatible bare metal nodes. Hardware offloading is a method for processing data that removes data processing tasks from the CPU and transfers them to the dedicated data processing unit of a network interface controller. Benefits of this feature include faster data processing, reduced CPU workloads, and lower computing costs.

For more information, see Configuring hardware offloading.

Creating DNS records by using the Red Hat External DNS Operator (Technology Preview)

You can now create DNS records by using the Red Hat External DNS Operator on cloud providers such as AWS, Azure, and GCP. You can install the External DNS Operator using OperatorHub. You can use parameters to configure ExternalDNS as required.

For more information, see Installing the External DNS Operator.

Mutable Ingress Controller endpoint publishing strategy enhancement

Cluster administrators can now configure the Ingress Controller endpoint publishing strategy to change the load-balancer scope between Internal and External in OpenShift Container Platform.

OVS hardware offloading for clusters on RHOSP (Technology Preview)

For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading.

For more information, see Enabling OVS hardware offloading.

Reduction of RHOSP resources created by Kuryr

For clusters that run on RHOSP, Kuryr now only creates Neutron networks and subnets for namespaces that have at least one pod on the pods network. Additionally, pools in a namespace are populated after at least one pod on the pods network is created in the namespace.

Support for RHOSP DCN (Technology Preview)

You can now deploy a cluster on a Red Hat OpenStack Platform (RHOSP) deployment that uses a distributed compute node (DCN) configuration. This deployment configuration has several limitations:

  • Only RHOSP version 16 is supported.

  • For RHOSP 16.1.4, only hyper-converged infrastructure (HCI) and Ceph technologies are supported at the edge.

  • For RHOSP 16.2, non-HCI and Ceph technologies are supported as well.

  • Networks must be created ahead of time (Bring Your Own Network) as either tenant or provider networks. These networks must be scheduled in the appropriate availability zones.

Support for external cloud providers for clusters on RHOSP (Technology Preview)

Clusters that run on RHOSP can now use Cloud Provider OpenStack. This capability is available as part of the TechPreviewNoUpgrade feature set.

Configuring host network interfaces with NMState on installer-provisioned clusters

OpenShift Container Platform now provides a networkConfig configuration setting for installer-provisioned clusters, which takes an NMState YAML configuration to configure host interfaces. During installer-provisioned installations, you can add the networkConfig configuration setting and the NMState YAML configuration to the install-config.yaml file. Additionally, you can add the networkConfig configuration setting and the NMState YAML configuration to the bare metal host resource when using the Bare Metal Operator.

The most common use case for the networkConfig configuration setting is to set static IP addresses on a host’s network interface during installation or while expanding the cluster.

Boundary clock and PTP enhancements to linuxptp services

You can now specify multiple network interfaces in a PtpConfig profile to allow nodes running RAN vDU applications to serve as a Precision Time Protocol Telecom Boundary Clock (PTP T-BC). Interfaces configured as boundary clocks now also support PTP fast events.

Support for Intel 800-Series Columbiaville NICs

Intel 800-Series Columbiaville NICs are now fully supported for interfaces configured as boundary clocks or ordinary clocks. Columbiaville NICs are supported in the following configurations:

  • Ordinary clock

  • Boundary clock synced to the Grandmaster clock

  • Boundary clock with one port synchronizing from an upstream source clock, and three ports providing downstream timing to destination clocks

For more information, see Configuring PTP devices.

Kubernetes NMState Operator is GA for bare-metal, IBM Power, IBM Z, and LinuxONE installations

OpenShift Container Platform now provides the Kubernetes NMState Operator for bare-metal, IBM Power, IBM Z, and LinuxONE installations. The Kubernetes NMState Operator is still a Technology Preview for all other platforms. See About the Kubernetes NMState Operator for additional details.

Hardware

Enhancements to MetalLB load balancing

The following enhancements to MetalLB and the MetalLB Operator are included in this release:

  • Support for Border Gateway Protocol (BGP) is added.

  • Support for Bidirectional Forwarding Detection (BFD) in combination with BGP is added.

  • Support for IPv6 and dual-stack networking is added.

  • Support for specifying a node selector on the speaker pods is added. You can now control which nodes are used for advertising load balancer service IP addresses. This enhancement applies to layer 2 mode and BGP mode.

  • Validating webhooks are added to ensure that address pool and BGP peer custom resources are valid.

  • The v1alpha1 API version for the AddressPool and MetalLB custom resource definitions that was introduced in the 4.9 release is deprecated. Both custom resources are updated to the v1beta1 API version.

  • Support for speaker pod tolerations in the MetalLB custom resource definition is added.

For more information, see About MetalLB and the MetalLB Operator.
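
For reference, a v1beta1 address pool might look like the following sketch, assuming the Operator is installed in the metallb-system namespace; the pool name and address range are placeholders:

  apiVersion: metallb.io/v1beta1
  kind: AddressPool
  metadata:
    name: example-pool                # placeholder
    namespace: metallb-system         # assumes the default Operator namespace
  spec:
    protocol: layer2                  # or bgp
    addresses:
      - 192.0.2.10-192.0.2.40         # placeholder address range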

Support for modifying host firmware settings

OpenShift Container Platform supports the HostFirmwareSettings and FirmwareSchema resources. When deploying OpenShift Container Platform on bare metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host’s firmware and BIOS details. There are two new resources that you can use with the Bare Metal Operator (BMO):

  • HostFirmwareSettings: You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host model.

  • FirmwareSchema: You can use the FirmwareSchema to identify the host’s modifiable BIOS values and limits when making changes to host firmware settings.

See Bare metal configuration for additional details.
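
As a sketch, a HostFirmwareSettings resource pairs with the BareMetalHost of the same name; the setting shown is a vendor-specific example and the available names vary by host model:

  apiVersion: metal3.io/v1alpha1
  kind: HostFirmwareSettings
  metadata:
    name: openshift-worker-0          # matches the BareMetalHost name (placeholder)
    namespace: openshift-machine-api
  spec:
    settings:
      ProcTurboMode: Disabled         # example vendor-specific BIOS setting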

Storage

Console Storage Plug-in enhancement

  • A new feature has been added to the Console Storage Plug-in that adds ARIA labels throughout the installation flow for screen readers. This provides better accessibility for users who rely on screen readers to access the console.

Storage metrics indicator

  • A new feature has been added that provides metrics indicating the amount of used space on volumes used for persistent volume claims (PVCs). This information appears in the PVC list and in the PVC details in the Used column. (BZ#1985965)

Shared Resource CSI Driver (Technology Preview)

With this update, workloads can securely share Secret and ConfigMap objects across namespaces by using inline ephemeral CSI volumes provided by the Shared Resource CSI Driver. Container Storage Interface (CSI) volumes and the Shared Resource CSI Driver are Technology Preview features. (BUILD-293)
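
A minimal sketch of a pod consuming a shared resource through an inline CSI volume; the driver name, the sharedSecret attribute, and the referenced object name are assumptions based on the feature description:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-shared-secret-pod                         # placeholder
  spec:
    containers:
      - name: app
        image: registry.access.redhat.com/ubi8/ubi-minimal  # placeholder image
        command: ["sleep", "infinity"]
        volumeMounts:
          - name: shared
            mountPath: /mnt/shared
            readOnly: true
    volumes:
      - name: shared
        csi:
          driver: csi.sharedresource.openshift.io           # assumed driver name
          readOnly: true
          volumeAttributes:
            sharedSecret: example-shared-secret             # assumed attribute referencing a SharedSecret object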

Persistent storage using the Alibaba AliCloud Disk CSI Driver Operator

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AliCloud Disk. The AliCloud Disk Driver Operator that manages this driver is generally available, and enabled by default in OpenShift Container Platform 4.10.

For more information, see AliCloud Disk CSI Driver Operator.

Persistent storage using the Microsoft Azure File CSI Driver Operator (Technology Preview)

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure File. The Azure File Driver Operator that manages this driver is in Technology Preview.

For more information, see Azure File CSI Driver Operator.

Persistent storage using the IBM VPC Block CSI Driver Operator

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM Virtual Private Cloud (VPC) Block. The IBM VPC Block Driver Operator that manages this driver is generally available, and enabled by default in OpenShift Container Platform 4.10.

For more information, see IBM VPC Block CSI Driver Operator.

Persistent storage using VMware vSphere CSI Driver Operator is generally available

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for vSphere. This feature was previously introduced as a Technology Preview feature in OpenShift Container Platform 4.8 and is now generally available and enabled by default in OpenShift Container Platform 4.10.

For more information, see vSphere CSI Driver Operator.

vSphere CSI Driver Operator installation requires:

  • Clusters are running on VMware vSphere version 6.7 Update 3 or later, VMware ESXi version 6.7 Update 3 or later, and virtual hardware version 15 or later