Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.

Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.

About this release

OpenShift Container Platform 4.18 (RHSA-2024:xxxx) is now available. This release uses Kubernetes 1.31 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.18 are included in this topic.

OpenShift Container Platform 4.18 clusters are available at https://console.redhat.com/openshift. From the Red Hat Hybrid Cloud Console, you can deploy OpenShift Container Platform clusters to either on-premises or cloud environments.

OpenShift Container Platform 4.18 is supported on Red Hat Enterprise Linux (RHEL) 8.8 and later versions of RHEL 8 that are released before End of Life of OpenShift Container Platform 4.18. OpenShift Container Platform 4.18 is also supported on Red Hat Enterprise Linux CoreOS (RHCOS). To understand the RHEL versions used by RHCOS, see RHEL Versions Utilized by Red Hat Enterprise Linux CoreOS (RHCOS) and OpenShift Container Platform (Knowledgebase article).

You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. RHEL machines are deprecated in OpenShift Container Platform 4.16 and will be removed in a future release.

Starting from OpenShift Container Platform 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including x86_64, 64-bit ARM (aarch64), IBM Power® (ppc64le), and IBM Z® (s390x) architectures. Beyond this, Red Hat also offers a 12-month additional EUS add-on, denoted as Additional EUS Term 2, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of OpenShift Container Platform. For more information about support for all versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.

Commencing with the OpenShift Container Platform 4.14 release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications: Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and to form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see OpenShift Operator Life Cycles.

OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.

OpenShift Container Platform layered and dependent component support and compatibility

The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.

New features and enhancements

This release adds improvements related to the following components and concepts:

Authentication and authorization

Rotating OIDC bound service account signer keys

With this release, you can use the Cloud Credential Operator (CCO) utility (ccoctl) to rotate the OpenID Connect (OIDC) bound service account signer key for clusters installed on the following cloud providers:

Backup and restore

Hibernating a cluster for up to 90 days

With this release, you can hibernate your OpenShift Container Platform cluster for up to 90 days and expect the cluster to recover successfully. Before this release, you could hibernate a cluster for only up to 30 days.

Extensions (OLM v1)

Operator Lifecycle Manager (OLM) v1 (General Availability)

Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release and has helped enable and grow a substantial ecosystem of solutions and advanced workloads running as Operators.

OpenShift Container Platform 4.18 introduces OLM v1, the next-generation Operator Lifecycle Manager, as a General Availability (GA) feature, designed to improve how you manage Operators on OpenShift Container Platform.

With OLM v1 now generally available, starting in OpenShift Container Platform 4.18, the existing version of OLM that has been included since the launch of OpenShift Container Platform 4 is now known as OLM (Classic).

Previously available as a Technology Preview feature only, the updated framework in OLM v1 evolves many of the concepts that have been part of OLM (Classic) by simplifying Operator management, enhancing security, and boosting reliability.

  • Starting in OpenShift Container Platform 4.18, OLM v1 is now enabled by default, alongside OLM (Classic). OLM v1 is a cluster capability that administrators can optionally disable before installation of OpenShift Container Platform.

  • OLM (Classic) remains fully supported throughout the OpenShift Container Platform 4 lifecycle.

Simplified API

OLM v1 simplifies Operator management with a new, user-friendly API: the ClusterExtension object. By managing Operators as integral extensions of the cluster, OLM v1 caters to the special lifecycle requirements of custom resource definitions (CRDs). This design aligns more closely with Kubernetes principles, treating Operators, which consist of custom controllers and CRDs, as cluster-wide singletons.

OpenShift Container Platform continues to give you access to the latest Operator packages, patches, and updates through default Red Hat Operator catalogs, which are enabled by default for OLM v1 in OpenShift Container Platform 4.18. With OLM v1, you can install an Operator package by creating and applying a ClusterExtension API object in your cluster. By interacting with ClusterExtension objects, you can manage the lifecycle of Operator packages, quickly understand their status, and troubleshoot issues.
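
For illustration, a minimal ClusterExtension manifest might look like the following sketch. The package name, namespace, and service account are hypothetical, and the field names reflect the OLM v1 API as commonly documented, so verify them against the Extensions guide.

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-operator
spec:
  namespace: example-operator              # namespace the extension content is installed into
  serviceAccount:
    name: example-operator-installer       # user-provided service account with the required RBAC
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator        # package name as published in an enabled catalog
      channels:
        - stable
      version: "1.0.x"

After applying the manifest, for example with oc apply -f, the object's status conditions report installation progress and any resolution errors.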

Streamlined declarative workflows

Leveraging the simplified API, you can define your desired Operator states in a declarative way and, when integrating with tools like Git and Zero Touch Provisioning, let OLM v1 automatically maintain those states. This minimizes human error and unlocks a wider range of use cases.

Uninterrupted operations with continuous reconciliation and optional rollbacks

OLM v1 enhances reliability through continuous reconciliation. Rather than relying on single attempts, OLM v1 proactively addresses Operator installation and update failures, automatically retrying until the issue is resolved. This eliminates the manual steps previously required, such as deleting InstallPlan API objects, and greatly simplifies the resolution of off-cluster issues, such as missing container images or catalog problems.

In addition, OLM v1 provides optional rollbacks, allowing you to revert Operator version updates under specific conditions after carefully assessing any potential risks.

Granular update control for deployments

With granular update control, you can select a specific Operator version or define a range of acceptable versions. For example, if you have tested and approved version 1.2.3 of an Operator in a stage environment, instead of hoping the latest version works just as well in production, you can use version pinning. By specifying 1.2.3 as the desired version, you ensure that this exact version is deployed for a safe and predictable update.

Alternatively, automatic z-stream updates provide a seamless and secure experience by automatically applying security fixes without manual intervention, minimizing operational disruptions.
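
The following is a rough sketch of how version pinning might be expressed through the ClusterExtension API; the package name, namespace, and service account are hypothetical, and the exact field layout should be confirmed against the Extensions guide.

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-operator
spec:
  namespace: example-operator
  serviceAccount:
    name: example-operator-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
      version: "1.2.3"            # pin the exact version validated in staging
      # version: "1.2.x"          # alternatively, allow automatic z-stream updates within 1.2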

Enhanced security with user-provided service accounts

OLM v1 prioritizes security by minimizing its permission requirements and providing greater control over access. By using user-provided ServiceAccount objects for Operator lifecycle operations, OLM v1 access is restricted to only the necessary permissions, significantly reducing the control plane attack surface and improving overall security. In this way, OLM v1 adopts a least-privilege model to minimize the impact of a compromise.

The documentation for OLM v1 exists as a stand-alone guide called Extensions. Previously, OLM v1 documentation was a subsection of the Operators guide, which otherwise documents the OLM (Classic) feature set.

The updated location and guide name reflect a more focused documentation experience and aim to differentiate between OLM v1 and OLM (Classic).

OLM v1 supported extensions

Currently, Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions that meet all of the following criteria:

  • The extension must use the registry+v1 bundle format introduced in OLM (Classic).

  • The extension must support installation via the AllNamespaces install mode.

  • The extension must not use webhooks.

  • The extension must not declare dependencies by using any of the following file-based catalog properties:

    • olm.gvk.required

    • olm.package.required

    • olm.constraint

OLM v1 checks that the extension you want to install meets these constraints. If the extension that you want to install does not meet these constraints, an error message is printed in the cluster extension’s conditions.

Disconnected environment support in OLM v1

To support cluster administrators who prioritize high security by running their clusters in internet-disconnected environments, especially for mission-critical production workloads, OLM v1 supports disconnected environments starting in OpenShift Container Platform 4.18.

After you use the oc-mirror plugin for the OpenShift CLI (oc) to mirror the images required for your cluster to a mirror registry in your fully or partially disconnected environment, OLM v1 can function properly in that environment by using the sets of resources generated by either oc-mirror plugin v1 or v2.

For more information, see Disconnected environment support in OLM v1.

Improved catalog selection in OLM v1

With this release, you can perform the following actions to control the selection of catalog content when you install or update a cluster extension:

  • Specify labels to select the catalog

  • Use match expressions to filter across catalogs

  • Set catalog priority

For more information, see Catalog content resolution.
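
As a rough sketch of these controls (the catalog name, label, and registry location below are hypothetical, and the field layout should be checked against Catalog content resolution), a labeled ClusterCatalog with an explicit priority might be selected from a ClusterExtension as follows:

apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: internal-catalog
  labels:
    example.com/catalog-tier: internal     # hypothetical label used for selection
spec:
  priority: 1000                           # higher priority wins when packages overlap across catalogs
  source:
    type: Image
    image:
      ref: registry.example.com/catalogs/internal-catalog:latest
---
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-operator
spec:
  namespace: example-operator
  serviceAccount:
    name: example-operator-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
      selector:
        matchLabels:
          example.com/catalog-tier: internal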

Basic support for proxied environments and trusted CA certificates

With this release, Operator Controller and catalogd can now run in proxied environments and include basic support for trusted CA certificates.

Compatibility with OpenShift Container Platform versions

Before cluster administrators can update their OpenShift Container Platform cluster to its next minor version, they must ensure that all installed Operators are updated to a bundle version that is compatible with the next minor version (4.y+1) of a cluster.

Starting in OpenShift Container Platform 4.18, OLM v1 supports the olm.maxOpenShiftVersion annotation in the cluster service version (CSV) of an Operator, similar to the behavior in OLM (Classic), to prevent administrators from updating the cluster before updating the installed Operator to a compatible version.
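
For example, an Operator author can declare the maximum compatible cluster version in the CSV through the olm.properties annotation, as in this minimal fragment (the CSV name is hypothetical):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0
  annotations:
    # Block cluster updates beyond 4.18 until a compatible bundle is installed
    olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.18"}]'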

User access to extension resources

After a cluster extension has been installed and is being managed by Operator Lifecycle Manager (OLM) v1, the extension can often provide CustomResourceDefinition objects (CRDs) that expose new API resources on the cluster. Cluster administrators typically have full management access to these resources by default, whereas non-cluster administrator users, or regular users, might lack sufficient permissions.

OLM v1 does not automatically configure or manage role-based access control (RBAC) for regular users to interact with the APIs provided by installed extensions. Cluster administrators must define the required RBAC policy to create, view, or edit these custom resources (CRs) for such users.

For more information, see User access to extension resources.
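
For example, a cluster administrator might grant a team edit access to a custom resource installed by an extension with a ClusterRole and binding along these lines; the API group, resource, and group name are hypothetical placeholders for whatever the installed extension actually provides.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-extension-editor
rules:
  - apiGroups: ["widgets.example.com"]        # API group served by the installed extension (hypothetical)
    resources: ["widgets"]                    # custom resource exposed by the extension (hypothetical)
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-extension-editor-binding
subjects:
  - kind: Group
    name: example-team                        # group of regular users (hypothetical)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-extension-editor
  apiGroup: rbac.authorization.k8s.io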

Runtime validation of container images using sigstore signatures in OLM v1 (Technology Preview)

Starting in OpenShift Container Platform 4.18, OLM v1 support for handling runtime validation of sigstore signatures for container images is available as a Technology Preview (TP) feature.

OLM v1 known issues

Operator Lifecycle Manager (OLM) v1 does not support the OperatorConditions API introduced in OLM (Classic).

If an extension relies on only the OperatorConditions API to manage updates, the extension might not install correctly. Most extensions that rely on this API fail at start time, but some might fail during reconciliation.

As a workaround, you can pin your extension to a specific version. When you want to update your extension, consult the extension’s documentation to find out when it is safe to pin the extension to a new version.

Edge computing

Shutting down and restarting single-node OpenShift clusters up to 1 year after cluster installation

With this release, you can shut down and restart single-node OpenShift clusters up to 1 year after cluster installation. If certificates expired while the cluster was shut down, you must approve certificate signing requests (CSRs) upon restarting the cluster.

Before this update, you could shut down and restart single-node OpenShift clusters for only 120 days after cluster installation.

Evacuate all workload pods from the single-node OpenShift cluster before you shut it down.

For more information, see Shutting down the cluster gracefully.

Deprecation of SiteConfig v1

SiteConfig v1 is deprecated starting with OpenShift Container Platform 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the ClusterInstance custom resource. For more information, see the Red Hat Knowledge Base solution Procedure to transition from SiteConfig CRs to the ClusterInstance API.

For more information about the SiteConfig Operator, see SiteConfig.

Hosted control planes

Because the hosted control planes feature releases asynchronously from OpenShift Container Platform, it has its own release notes. For more information, see Hosted control planes release notes.

IBM Z and IBM LinuxONE

With this release, IBM Z® and IBM® LinuxONE are now compatible with OpenShift Container Platform 4.18. You can perform the installation with z/VM, LPAR, or Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM). For installation instructions, see Installation methods.

Compute nodes must run Red Hat Enterprise Linux CoreOS (RHCOS).

IBM Z and IBM LinuxONE notable enhancements

The IBM Z® and IBM® LinuxONE release on OpenShift Container Platform 4.18 adds improvements and new capabilities to OpenShift Container Platform components and concepts.

This release introduces support for the following features on IBM Z® and IBM® LinuxONE:

  • Adding compute nodes to on-premise clusters using OpenShift CLI (oc)

IBM Power

The IBM Power® release on OpenShift Container Platform 4.18 adds improvements and new capabilities to OpenShift Container Platform components.

This release introduces support for the following features on IBM Power:

  • Added four new data centers to PowerVS Installer Provisioned Infrastructure deployments

  • Adding compute nodes to on-premise clusters using OpenShift CLI (oc)

IBM Power, IBM Z, and IBM LinuxONE support matrix

Starting in OpenShift Container Platform 4.14, Extended Update Support (EUS) is extended to the IBM Power® and the IBM Z® platform. For more information, see the OpenShift EUS Overview.

Table 1. OpenShift Container Platform features

Feature | IBM Power® | IBM Z® and IBM® LinuxONE
Adding compute nodes to on-premise clusters using OpenShift CLI (oc) | Supported | Supported
Alternate authentication providers | Supported | Supported
Agent-based Installer | Supported | Supported
Assisted Installer | Supported | Supported
Automatic Device Discovery with Local Storage Operator | Unsupported | Supported
Automatic repair of damaged machines with machine health checking | Unsupported | Unsupported
Cloud controller manager for IBM Cloud® | Supported | Unsupported
Controlling overcommit and managing container density on nodes | Unsupported | Unsupported
CPU manager | Supported | Supported
Cron jobs | Supported | Supported
Descheduler | Supported | Supported
Egress IP | Supported | Supported
Encrypting data stored in etcd | Supported | Supported
FIPS cryptography | Supported | Supported
Helm | Supported | Supported
Horizontal pod autoscaling | Supported | Supported
Hosted control planes | Supported | Supported
IBM Secure Execution | Unsupported | Supported
Installer-provisioned Infrastructure Enablement for IBM Power® Virtual Server | Supported | Unsupported
Installing on a single node | Supported | Supported
IPv6 | Supported | Supported
Monitoring for user-defined projects | Supported | Supported
Multi-architecture compute nodes | Supported | Supported
Multi-architecture control plane | Supported | Supported
Multipathing | Supported | Supported
Network-Bound Disk Encryption - External Tang Server | Supported | Supported
Non-volatile memory express drives (NVMe) | Supported | Unsupported
nx-gzip for Power10 (Hardware Acceleration) | Supported | Unsupported
oc-mirror plugin | Supported | Supported
OpenShift CLI (oc) plugins | Supported | Supported
Operator API | Supported | Supported
OpenShift Virtualization | Unsupported | Supported
OVN-Kubernetes, including IPsec encryption | Supported | Supported
PodDisruptionBudget | Supported | Supported
Precision Time Protocol (PTP) hardware | Unsupported | Unsupported
Red Hat OpenShift Local | Unsupported | Unsupported
Scheduler profiles | Supported | Supported
Secure Boot | Unsupported | Supported
Stream Control Transmission Protocol (SCTP) | Supported | Supported
Support for multiple network interfaces | Supported | Supported
The openshift-install utility to support various SMT levels on IBM Power® (Hardware Acceleration) | Supported | Supported
Three-node cluster support | Supported | Supported
Topology Manager | Supported | Unsupported
z/VM Emulated FBA devices on SCSI disks | Unsupported | Supported
4K FCP block device | Supported | Supported

Table 2. Persistent storage options

Feature | IBM Power® | IBM Z® and IBM® LinuxONE
Persistent storage using iSCSI | Supported [1] | Supported [1],[2]
Persistent storage using local volumes (LSO) | Supported [1] | Supported [1],[2]
Persistent storage using hostPath | Supported [1] | Supported [1],[2]
Persistent storage using Fibre Channel | Supported [1] | Supported [1],[2]
Persistent storage using Raw Block | Supported [1] | Supported [1],[2]
Persistent storage using EDEV/FBA | Supported [1] | Supported [1],[2]

  1. Persistent shared storage must be provisioned by using either Red Hat OpenShift Data Foundation or other supported storage protocols.

  2. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA.

Table 3. Operators

Feature | IBM Power® | IBM Z® and IBM® LinuxONE
cert-manager Operator for Red Hat OpenShift | Supported | Supported
Cluster Logging Operator | Supported | Supported
Cluster Resource Override Operator | Supported | Supported
Compliance Operator | Supported | Supported
Cost Management Metrics Operator | Supported | Supported
File Integrity Operator | Supported | Supported
HyperShift Operator | Supported | Supported
IBM Power® Virtual Server Block CSI Driver Operator | Supported | Unsupported
Ingress Node Firewall Operator | Supported | Supported
Local Storage Operator | Supported | Supported
MetalLB Operator | Supported | Supported
Network Observability Operator | Supported | Supported
NFD Operator | Supported | Supported
NMState Operator | Supported | Supported
OpenShift Elasticsearch Operator | Supported | Supported
Vertical Pod Autoscaler Operator | Supported | Supported

Table 4. Multus CNI plugins

Feature | IBM Power® | IBM Z® and IBM® LinuxONE
Bridge | Supported | Supported
Host-device | Supported | Supported
IPAM | Supported | Supported
IPVLAN | Supported | Supported

Table 5. CSI Volumes

Feature | IBM Power® | IBM Z® and IBM® LinuxONE
Cloning | Supported | Supported
Expansion | Supported | Supported
Snapshot | Supported | Supported

Insights Operator

Insights Runtime Extractor (Technology Preview)

In this release, the Insights Operator introduces the workload data collection Insights Runtime Extractor feature to help Red Hat better understand the workload of your containers. Available as a Technology Preview, the Insights Runtime Extractor feature gathers runtime workload data and sends it to Red Hat. Red Hat uses the collected runtime workload data to gain insights that can help you make investment decisions that will drive and optimize how you use your OpenShift Container Platform containers. For more information, see Enabling features using feature gates.

Rapid Recommendations

In this release, enhancements have been made to the Rapid Recommendations mechanism for remotely configuring the rules that determine the data that the Insights Operator collects.

The Rapid Recommendations feature is version-independent, and builds on the existing conditional data gathering mechanism.

The Insights Operator connects to a secure remote endpoint service running on console.redhat.com to retrieve definitions that contain the rules for determining which container log messages are filtered and collected by Red Hat.

The conditional data-gathering definitions get configured through an attribute named conditionalGathererEndpoint in the pod.yml configuration file.

conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules

In earlier iterations, the rules for determining the data that the Insights Operator collects were hard-coded and tied to the corresponding OpenShift Container Platform version.

The preconfigured endpoint URL now provides a placeholder (%s) for defining a target version of OpenShift Container Platform.

More data collected and recommendations added

The Insights Operator now gathers more data to detect the following scenarios, which other applications can use to generate remedial recommendations to proactively manage your OpenShift Container Platform deployments:

  • Collects resources from the nmstate.io/v1 API group.

  • Collects data from clusterrole.rbac.authorization.k8s.io/v1 instances.

Installation and update

Configuring the ovn-kubernetes join subnet during cluster installation

With this release, you can configure the IPv4 join subnet that is used internally by ovn-kubernetes when installing a cluster. You can set the internalJoinSubnet parameter in the install-config.yaml file and deploy the cluster into an existing Virtual Private Cloud (VPC).

For more information, see Network configuration parameters.
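
The following fragment is a rough sketch of how this might look in install-config.yaml; the exact nesting under the networking stanza is an assumption here, so confirm it against the Network configuration parameters reference.

networking:
  networkType: OVNKubernetes
  ovnKubernetesConfig:
    ipv4:
      internalJoinSubnet: 100.65.0.0/16   # non-default join subnet that does not overlap the VPC CIDR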

Introducing the oc adm upgrade recommend command (Technology Preview)

When updating your cluster, the oc adm upgrade command returns a list of the next available versions. As long as you are using the 4.18 oc client binary, you can use the oc adm upgrade recommend command to narrow down the suggestions and recommend a new target release before you launch your update. This feature is available for OpenShift Container Platform version 4.16 and newer clusters that are connected to an update service.

For more information, see Updating a cluster by using the CLI.

Support for Nutanix Cloud Clusters (NC2) on Amazon Web Services (AWS) and NC2 on Microsoft Azure

With this release, you can install OpenShift Container Platform on Nutanix Cloud Clusters (NC2) on AWS or NC2 on Azure.

For more information, see Infrastructure requirements.

Installing a cluster on Google Cloud Platform using the C4 and C4A machine series

With this release, you can deploy a cluster on GCP using the C4 and C4A machine series for compute or control plane machines. The supported disk type of these machines is hyperdisk-balanced. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage.

For more information about configuring machine types, see Installation configuration parameters for GCP, C4 machine series (Compute Engine docs), and C4A machine series (Compute Engine docs).

Provide your own private hosted zone when installing a cluster on Google Cloud Platform

With this release, you can provide your own private hosted zone when installing a cluster on GCP into a shared VPC. If you do, the bring your own (BYO) zone must use a DNS name such as <cluster_name>.<base_domain>., and you must bind the zone to the VPC network of the cluster.

Installing a cluster on Nutanix by using a preloaded RHCOS image object

With this release, you can install a cluster on Nutanix by using the named, preloaded RHCOS image object from the private cloud or the public cloud. Rather than creating and uploading a RHCOS image object for each OpenShift Container Platform cluster, you can use the preloadedOSImageName parameter in the install-config.yaml file.

For more information, see Additional Nutanix configuration parameters.
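
As a sketch, assuming the parameter sits under the Nutanix platform stanza of install-config.yaml (the image object name below is hypothetical):

platform:
  nutanix:
    preloadedOSImageName: rhcos-418-preloaded   # name of the RHCOS image object already present in Prism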

Single-stack IPv6 clusters on RHOSP

You can now deploy single-stack IPv6 clusters on RHOSP.

You must configure RHOSP prior to deploying your OpenShift Container Platform cluster. For more information, see Configuring a cluster with single-stack IPv6 networking.

Installing a cluster on Nutanix with multiple subnets

With this release, you can install a Nutanix cluster with more than one subnet for the Prism Element into which you are deploying an OpenShift Container Platform cluster.

For an existing Nutanix cluster, you can add multiple subnets by using compute or control plane machine sets.

Installing a cluster on VMware vSphere with multiple network interface controllers (Technology Preview)

With this release, you can install a VMware vSphere cluster with multiple network interface controllers (NICs) for a node.

For more information, see Configuring multiple NICs.

For an existing vSphere cluster, you can add multiple subnets by using compute machine sets.

Configuring 4 and 5 node control planes with the Agent-based Installer

With this release, if you are using the Agent-based Installer, you can now configure your cluster to be installed with either 4 or 5 nodes in the control plane. This feature is enabled by setting the controlPlane.replicas parameter to either 4 or 5 in the install-config.yaml file.

For more information, see Optional configuration parameters for the Agent-based Installer.
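
For example, a minimal install-config.yaml fragment for a five-node control plane might look like this:

controlPlane:
  name: master
  replicas: 5          # 4 or 5 control plane nodes are supported with the Agent-based Installer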

Minimal ISO image support for the Agent-based Installer

With this release, the Agent-based Installer supports creating a minimal ISO image on all supported platforms. Previously, minimal ISO images were supported only on the external platform.

This feature is enabled using the minimalISO parameter in the agent-config.yaml file.

For more information, see Optional configuration parameters for the Agent-based Installer.
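
A sketch of the relevant agent-config.yaml fragment, assuming the parameter is set at the top level of the file:

apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: example-cluster
minimalISO: true        # generate a minimal ISO that fetches the root file system over the network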

Internet Small Computer System Interface (iSCSI) boot support for the Agent-based Installer

With this release, the Agent-based Installer supports creating assets that can be used to boot an OpenShift Container Platform cluster from an iSCSI target.

Postinstallation configuration

Migrating the x86 control plane to arm64 architecture on Amazon Web Services

With this release, you can migrate the control plane in your cluster from x86 to arm64 architecture on Amazon Web Services (AWS). For more information, see Migrating the x86 control plane to arm64 architecture on Amazon Web Services.

Configuring the image stream import mode behavior (Technology Preview)

This feature introduces a new field, imageStreamImportMode, in the image.config.openshift.io/cluster resource. The imageStreamImportMode field controls the import mode behavior of image streams. You can set the imageStreamImportMode field to either of the following values:

  • Legacy

  • PreserveOriginal

For more information, see Image controller configuration parameters.

You must enable the TechPreviewNoUpgrade feature set in the FeatureGate custom resource (CR) to enable the imageStreamImportMode feature. For more information, see Understanding feature gates.
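
A sketch of the cluster image configuration with the new field, assuming it is set in the spec of the image.config.openshift.io/cluster resource:

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  imageStreamImportMode: PreserveOriginal   # or Legacy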

Operator lifecycle

Existing version of Operator Lifecycle Manager now known as OLM (Classic)

With the release of Operator Lifecycle Manager (OLM) v1 as a General Availability (GA) feature, starting in OpenShift Container Platform 4.18, the existing version of OLM that has been included since the launch of OpenShift Container Platform 4 is now known as OLM (Classic).

OLM (Classic) remains enabled by default and fully supported throughout the OpenShift Container Platform 4 lifecycle.

For more information on the GA release of OLM v1, see the Extensions (OLM v1) release note sections. For full documentation focused on OLM v1, see the stand-alone Extensions guide.

For full documentation focused on OLM (Classic), continue referring to the Operators guide.

Managing machines with the Cluster API for Microsoft Azure (Technology Preview)

This release introduces the ability to manage machines by using the upstream Cluster API, integrated into OpenShift Container Platform, as a Technology Preview for Microsoft Azure clusters. This capability is an addition to, or an alternative to, managing machines with the Machine API. For more information, see About the Cluster API.

Machine Config Operator

Updated boot images for AWS clusters promoted to GA

Updated boot images have been promoted to GA for Amazon Web Services (AWS) clusters. For more information, see Updated boot images.

Expanded image config nodes information (Technology Preview)

The image config nodes custom resource, which you can use to monitor the progress of machine configuration updates to nodes, now presents more information about the update. The output of the oc get machineconfignodes command now reports on the following conditions, among others. You can use these statuses to follow the update, or to troubleshoot a node if it experiences an error during the update:

  • If each node was cordoned and uncordoned

  • If each node was drained

  • If each node was rebooted

  • If a node had a CRI-O reload

  • If a node had the operating system and node files updated

On-cluster layering changes (Technology Preview)

There are several important changes to the on-cluster layering feature:

  • You can now install extensions onto an on-cluster custom layered image by using a MachineConfig object.

  • Updating the Containerfile in a MachineOSConfig object now triggers a build to be performed.

  • You can now revert an on-cluster custom layered image back to the base image by removing a label from the MachineOSConfig object.

  • The must-gather for the Machine Config Operator now includes data on the MachineOSConfig and MachineOSBuild objects.

For more information about on-cluster layering, see Using on-cluster layering to apply a custom layered image.

Management console

Checkbox for enabling cluster monitoring is marked by default

With this update, the checkbox for enabling cluster monitoring is now checked by default when installing the OpenShift Lightspeed Operator. (OCPBUGS-42381)

Monitoring

The in-cluster monitoring stack for this release includes the following new and modified features:

Updates to monitoring stack components and dependencies

This release includes the following version updates for in-cluster monitoring stack components and dependencies:

  • Metrics Server to 0.7.2

  • Prometheus to 2.55.1

  • Prometheus Operator to 0.78.1

  • Thanos to 0.36.1

Added scrape and evaluation intervals for user workload monitoring Prometheus

With this update, you can configure the intervals between consecutive scrapes and between rule evaluations for Prometheus for user workload monitoring.
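
A sketch of the user workload monitoring config map with these intervals; the field names shown are assumptions based on the Cluster Monitoring Operator configuration format, so check the monitoring configuration reference.

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      scrapeInterval: 1m        # interval between consecutive scrapes
      evaluationInterval: 1m    # interval between consecutive rule evaluations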

Added early validation for the monitoring configurations in monitoring config maps

This update introduces early validation for changes to monitoring configurations in cluster-monitoring-config and user-workload-monitoring-config config maps to provide shorter feedback loops and enhance user experience.

Added the proxy environment variables to Alertmanager containers

With this update, Alertmanager uses the proxy environment variables. Therefore, if you configured an HTTP cluster-wide proxy, you can enable proxying by setting the proxy_from_environment parameter to true in your alert receivers or at the global config level in Alertmanager.
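
For example, an alertmanager.yaml fragment could enable proxying from the environment globally and for a specific receiver (the webhook URL is hypothetical):

global:
  http_config:
    proxy_from_environment: true
receivers:
  - name: example-webhook
    webhook_configs:
      - url: https://webhook.example.com/alerts
        http_config:
          proxy_from_environment: true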

Added cross-project user workload alerting and recording rules

With this update, you can create user workload alerting and recording rules that query multiple projects at the same time.

Correlating cluster metrics with RHOSO metrics

You can now correlate observability metrics for clusters that run on Red Hat OpenStack Services on OpenShift (RHOSO). By collecting metrics from both environments, you can monitor and troubleshoot issues across the infrastructure and application layers.

For more information, see Monitoring clusters that run on RHOSO.

Network Observability Operator

The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the Network Observability release notes.

Networking

Holdover in a grandmaster clock with GNSS as the source

With this release, you can configure the holdover behavior in a grandmaster (T-GM) clock with Global Navigation Satellite System (GNSS) as the source. Holdover allows the T-GM clock to maintain synchronization performance when the GNSS source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions.

You can define the holdover behavior by configuring the following holdover parameters in the PtpConfig custom resource (CR), as shown in the sketch after this list:

  • MaxInSpecOffset

  • LocalHoldoverTimeout

  • LocalMaxHoldoverOffSet
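
The following is a rough sketch of where these settings might appear in a grandmaster PtpConfig profile; the plugin nesting shown here is an assumption based on typical T-GM configurations, so verify it against the PTP documentation.

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster
  namespace: openshift-ptp
spec:
  profile:
    - name: grandmaster
      plugins:
        e810:
          settings:
            LocalHoldoverTimeout: 14400    # seconds the clock stays in holdover after losing GNSS
            LocalMaxHoldoverOffSet: 1500   # maximum offset allowed while in holdover, in nanoseconds
            MaxInSpecOffset: 100           # offset threshold for remaining within specification, in nanoseconds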

Support for configuring a multi-network policy for IPVLAN and Bond CNI

With this release, you can configure a multi-network policy for the following network types:

  • IP Virtual Local Area Network (IPVLAN)

  • Bond Container Network Interface (CNI) over SR-IOV

For more information, see Configuring multi-network policy.

Updated terminology for whitelist and blacklist annotations

The terminology for the ip_whitelist and ip_blacklist annotations has been updated to ip_allowlist and ip_denylist, respectively. Currently, OpenShift Container Platform still supports the ip_whitelist and ip_blacklist annotations. However, these annotations are planned for removal in a future release.

Checking OVN-Kubernetes network traffic with OVS sampling using the CLI

OVN-Kubernetes network traffic can be viewed with OVS sampling via the CLI for the following network APIs:

  • NetworkPolicy

  • AdminNetworkPolicy

  • BaselineNetworkPolicy

  • UserDefinedNetwork isolation

  • EgressFirewall

  • Multicast ACLs.

Checking OVN-Kubernetes network traffic with OVS sampling using the CLI is intended to help with packet tracing. It can also be used while the Network Observability Operator is installed.

User-defined network segmentation (Generally Available)

With OpenShift Container Platform 4.18, user-defined network segmentation is generally available. User-defined networks (UDN) introduce enhanced network segmentation capabilities by allowing administrators to define custom network topologies using namespace-scoped UserDefinedNetwork and cluster-scoped ClusterUserDefinedNetwork custom resources.

With UDNs, administrators can create tailored network topologies with enhanced isolation, IP address management for workloads, and advanced networking features. Supporting both Layer 2 and Layer 3 topology types, user-defined network segmentation enables a wide range of network architectures and topologies, enhancing network flexibility, security, and performance. For more information on supported features, see UDN support matrix.

Use cases for UDNs include providing virtual machines (VMs) with static IP address assignments that persist for the lifetime of the VM, as well as a Layer 2 primary pod network so that users can live migrate VMs between nodes. These features are fully supported in OpenShift Virtualization. Users can also use UDNs to create a stronger, native multi-tenant environment, securing the overlay Kubernetes network, which is otherwise open by default. For more information, see About user-defined networks.
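
For illustration, a namespace-scoped UserDefinedNetwork that provides a Layer 2 primary network might look like the following sketch; the namespace and subnet are hypothetical, and the field layout should be verified against About user-defined networks.

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-primary
  namespace: example-tenant
spec:
  topology: Layer2
  layer2:
    role: Primary              # serves as the primary network for pods in this namespace
    subnets:
      - "10.100.0.0/24"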

The dynamic configuration manager is enabled by default (Technology Preview)

You can reduce your memory footprint by using the dynamic configuration manager on Ingress Controllers. The dynamic configuration manager propagates endpoint changes through a dynamic API. This process enables the underlying routers to adapt to changes (scale ups and scale downs) without reloads.

To use the dynamic configuration manager, enable the TechPreviewNoUpgrade feature set by running the following command:

$ oc patch featuregates cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge

Additional environments for the network flow matrix

With this release, you can view network information for ingress flows to OpenShift Container Platform services in the following environments:

  • OpenShift Container Platform on bare metal

  • Single-node OpenShift on bare metal

  • OpenShift Container Platform on Amazon Web Services (AWS)

  • Single-node OpenShift on AWS

MetalLB updates for Border Gateway Protocol

With this release, MetalLB includes a new field for the Border Gateway Protocol (BGP) peer custom resource. You can use the dynamicASN field to detect the Autonomous System Number (ASN) to use for the remote end of a BGP session. This is an alternative to explicitly setting an ASN in the spec.peerASN field.
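
A sketch of a BGPPeer resource that uses the new field (peer address and ASN values are hypothetical):

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: peer-dynamic
  namespace: metallb-system
spec:
  myASN: 64500
  peerAddress: 172.30.0.3
  dynamicASN: external      # detect the remote ASN instead of setting spec.peerASN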

Configuring an RDMA subsystem for SR-IOV

With this release, you can configure a Remote Direct Memory Access (RDMA) Container Network Interface (CNI) on Single Root I/O Virtualization (SR-IOV) to enable high-performance, low-latency communication between containers. When you combine RDMA with SR-IOV, you provide a mechanism to expose hardware counters of Mellanox Ethernet devices to be used inside Data Plane Development Kit (DPDK) applications.
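
As a sketch, an SriovNetworkNodePolicy that enables RDMA on a Mellanox NIC might look like this; the resource name and NIC selector values are hypothetical examples.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-mlx-rdma
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxrdma
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    vendor: "15b3"            # example selector matching Mellanox devices
  deviceType: netdevice
  isRdma: true                # expose RDMA capability to workloads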

Support configuring the SR-IOV Network Operator on a Secure-Boot-enabled environment for Mellanox cards

With this release, you can configure the Single Root I/O Virtualization (SR-IOV) Network Operator when the system has Secure Boot enabled. You configure the SR-IOV Operator after you first manually configure the firmware for Mellanox devices. Enabling Secure Boot enhances the resilience of your system and provides a crucial layer of defense for its overall security.

Support for pre-created RHOSP floating IP addresses in the Ingress Controller

With this release, you can now specify pre-created floating IP addresses in the Ingress Controller for your clusters running on RHOSP.

SR-IOV Network Operator support extension

The SR-IOV Network Operator now supports Intel NetSec Accelerator Cards and Marvell Octeon 10 DPUs. (OCPBUGS-43451)

Using a Linux bridge interface as the OVS default port connection

The OVN-Kubernetes plugin can now use a Linux bridge interface as the Open vSwitch (OVS) default port connection. This means that a network interface controller, such as SmartNIC, can now bridge the underlying network with a host. (OCPBUGS-39226)

Cluster Network Operator exposing network overlap metrics for an issue

When you start the limited live migration method and an issue exists with network overlap, the Cluster Network Operator (CNO) can now expose network overlap metrics for the issue. This is possible because the openshift_network_operator_live_migration_blocked metric now includes the new NetworkOverlap label. (OCPBUGS-39096)

Nodes

crun is now the default container runtime

crun is now the default container runtime for new containers created in OpenShift Container Platform. The runC runtime is still supported and you can change the default runtime to runC, if needed. For more information on crun, see About the container engine and container runtime. For information on changing the default to runC, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters.

Updating from OpenShift Container Platform 4.17.z to OpenShift Container Platform 4.18 does not change your container runtime.
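
For example, if you need to keep runC, a ContainerRuntimeConfig along the following lines sets the default runtime for the worker pool; the object name is hypothetical.

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-runc-default
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: runc      # switch the default runtime back from crun to runC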

sigstore support (Technology Preview)

Available as a Technology Preview, you can use the sigstore project with OpenShift Container Platform to improve supply chain security. You can create signature policies at the cluster-wide level or for a specific namespace. For more information, see Manage secure signatures with sigstore.

Enhancements to process for adding nodes

Enhancements have been added to the process for adding worker nodes to an on-premise cluster that was introduced in OpenShift Container Platform 4.17. With this release, you can now generate Preboot Execution Environment (PXE) assets instead of an ISO image file, and you can configure reports to be generated regardless of whether the node creation process succeeds or fails.

Node Tuning Operator properly selects kernel arguments

The Node Tuning Operator can now properly select kernel arguments and management options for Intel and AMD CPUs. (OCPBUGS-43664)

Default container runtime is now inherited from the cluster

The default container runtime that is set by the cluster Node Tuning Operator is always inherited from the cluster, and is not hard-coded by the Operator. Starting with this release, the default value is crun. (OCPBUGS-45450)

OpenShift CLI (oc)

oc-mirror plugin v2 (Generally Available)

oc-mirror plugin v2 is now generally available. To use it, add the --v2 flag when running oc-mirror commands. The previous version (oc-mirror plugin v1), which runs when the --v2 flag is not set, is now deprecated. It is recommended to transition to oc-mirror plugin v2 for continued support and improvements.

oc-mirror plugin v2 now supports mirroring Helm charts. Also, oc-mirror plugin v2 can now be used in environments where an HTTP/S proxy is enabled, ensuring broader compatibility with enterprise setups.

oc-mirror plugin v2 introduces v1 retro-compatible filtering of Operator catalogs and generates filtered catalogs. This feature allows cluster administrators to view only the Operators that have been mirrored, rather than the complete list from the origin catalog.

Oracle® Cloud Infrastructure (OCI)

Bare-metal support on Oracle® Cloud Infrastructure (OCI)

OpenShift Container Platform cluster installations on Oracle® Cloud Infrastructure (OCI) are now supported for bare-metal machines. You can install bare-metal clusters on OCI by using either the Assisted Installer or the Agent-based Installer.

Registry

Read-only registry enhancements

In previous versions of OpenShift Container Platform, storage mounted as read-only returned no specific metrics or information about storage errors. This could result in silent failures of a registry when the storage backend was read-only. With this release, the following alerts have been added to return storage information when the backend is set to read-only:

Alert name | Message
ImageRegistryStorageReadOnly | The image registry storage is read-only and no images will be committed to storage.
ImageRegistryStorageFull | The image registry storage disk is full and no images will be committed to storage.

Red Hat Enterprise Linux CoreOS (RHCOS)

RHCOS uses RHEL 9.4

RHCOS uses Red Hat Enterprise Linux (RHEL) 9.4 packages in OpenShift Container Platform 4.18. These packages ensure that your OpenShift Container Platform instances receive the latest fixes, features, enhancements, hardware support, and driver updates.

Scalability and performance

Cluster validation with the cluster-compare plugin

The cluster-compare plugin is an OpenShift CLI (oc) plugin that compares a cluster configuration with a target configuration. The plugin reports configuration differences while suppressing expected variations by using configurable validation rules and templates.

For example, the plugin can highlight unexpected differences, such as mismatched field values, missing resources, or version discrepancies, while ignoring expected differences, such as optional components or hardware-specific fields. This focused comparison makes it easier to assess cluster compliance with the target configuration.

You can use the cluster-compare plugin in development, production, and support scenarios.

For more information about the cluster-compare plugin, see Overview of the cluster-compare plugin.

Node Tuning Operator: Deferred Tuning Updates

In this release, the Node Tuning Operator introduces support for deferring tuning updates. With this feature, administrators can schedule tuning updates to be applied during a maintenance window.

For more information, see Deferring application of tuning changes.

NUMA Resources Operator now uses default SELinux policy

With this release, the NUMA Resources Operator no longer creates a custom SELinux policy to enable the installation of Operator components on a target node. Instead, the Operator uses a built-in container SELinux policy. This change removes the additional node reboot that was previously required when applying a custom SELinux policy during an installation.

In clusters with an existing NUMA-aware scheduler configuration, upgrading to OpenShift Container Platform 4.18 might result in an additional reboot for each configured node. For further information about how to manage an upgrade in this scenario and limit disruption, see the Red Hat Knowledgebase article Managing an upgrade to OpenShift Container Platform 4.18 or later for a cluster with an existing NUMA-aware scheduler configuration.

Node Tuning Operator platform detection

With this release, when you apply a performance profile, the Node Tuning Operator detects the platform and configures kernel arguments and other platform-specific options accordingly. This release adds support for detecting the following platforms:

  • AMD64

  • AArch64

  • Intel 64

Support for worker nodes with AMD EPYC Zen 4 CPUs

With this release, you can use the PerformanceProfile custom resource (CR) to configure worker nodes on machines equipped with AMD EPYC Zen 4 CPUs (Genoa and Bergamo). These CPUs are fully supported.

The per pod power management feature is not functional on AMD EPYC Zen 4 CPUs.

Storage

Over-provisioning ratio update after LVMCluster custom resource creation

Previously, the thinPoolConfig.overprovisionRatio field in the LVMCluster custom resource (CR) could be configured only during the creation of the LVMCluster CR. With this release, you can now update the thinPoolConfig.overprovisionRatio field even after creating the LVMCluster CR.

Support for configuring metadata size for the thin pool

This feature provides the following new optional fields in the LVMCluster custom resource (CR):

  • thinPoolConfig.metadataSizeCalculationPolicy: Specifies the policy to calculate the metadata size for the underlying volume group. You can set this field to either Static or Host. By default, this field is set to Host.

  • thinPoolConfig.metadataSize: Specifies the metadata size for the thin pool. You can configure this field only when the MetadataSizeCalculationPolicy field is set to Static.

For more information, see About the LVMCluster custom resource.
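
A sketch of how these fields might be set in an LVMCluster CR (the device class and thin pool names are hypothetical):

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
          metadataSizeCalculationPolicy: Static   # use the explicit size below instead of host-based calculation
          metadataSize: 1Gi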

Persistent storage using CIFS/SMB CSI Driver Operator is generally available

OpenShift Container Platform is capable of provisioning persistent volumes (PVs) with a Container Storage Interface (CSI) driver for the Common Internet File System (CIFS) dialect/Server Message Block (SMB) protocol. The CIFS/SMB CSI Driver Operator that manages this driver was introduced in OpenShift Container Platform 4.16 with Technology Preview status. In OpenShift Container Platform 4.18, it is now generally available.

For more information, see CIFS/SMB CSI Driver Operator.

Secret Store CSI Driver Operator is generally available

The Secrets Store Container Storage Interface (CSI) Driver Operator, secrets-store.csi.k8s.io, allows OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as an inline ephemeral volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container’s file system. The Secrets Store CSI Driver Operator was available in OpenShift Container Platform 4.14 as a Technology Preview feature. OpenShift Container Platform 4.18 introduces this feature as generally available.

For more information about the Secrets Store CSI driver, see Secrets Store CSI Driver Operator.

For information about using the Secrets Store CSI Driver Operator to mount secrets from an external secrets store to a CSI volume, see Providing sensitive data to pods by using an external secrets store.

Persistent volume last phase transition time parameter is generally available

OpenShift Container Platform 4.16 introduced a new parameter, LastPhaseTransitionTime, which has a timestamp that is updated every time a persistent volume (PV) transitions to a different phase (pv.Status.Phase). For OpenShift Container Platform 4.18, this feature is generally available.

For more information about using the persistent volume last phase transition time parameter, see Last phase transition time.

Multiple vCenter support for vSphere CSI is generally available

OpenShift Container Platform 4.17 introduced the ability to deploy OpenShift Container Platform across multiple vSphere clusters (vCenters) as a Technology Preview feature. In OpenShift Container Platform 4.18, Multiple vCenter support is now generally available.

Always honor persistent volume reclaim policy (Technology Preview)

Prior to OpenShift Container Platform 4.18, the persistent volume (PV) reclaim policy was not always applied.

For a bound PV and persistent volume claim (PVC) pair, the ordering of PV-PVC deletion determined whether the PV delete reclaim policy was applied or not. The PV applied the reclaim policy if the PVC was deleted prior to deleting the PV. However, if the PV was deleted prior to deleting the PVC, then the reclaim policy was not applied. As a result of that behavior, the associated storage asset in the external infrastructure was not removed.

With OpenShift Container Platform 4.18, the PV reclaim policy is now consistently applied. This feature has Technology Preview status.

For more information, see Reclaim policy for persistent volumes.

Improved ability to easily remove LVs or LVSs for LSO is generally available

For the Local Storage Operator (LSO), OpenShift Container Platform 4.18 improves the ability to remove Local Volumes (LVs) and Local Volume Sets (LVSs) by automatically removing artifacts, thus reducing the number of steps required.

For more information, see Removing a local volume or local volume set.

CSI volume group snapshots (Technology Preview)

OpenShift Container Platform 4.18 introduces Container Storage Interface (CSI) volume group snapshots as a Technology Preview feature. This feature needs to be supported by the CSI driver. CSI volume group snapshots use a label selector to group multiple persistent volume claims (PVCs) for snapshotting. A volume group snapshot represents copies from multiple volumes that are taken at the same point-in-time. This can be useful for applications that contain multiple volumes.

OpenShift Data Foundation supports volume group snapshots.

For more information about CSI volume group snapshots, see CSI volume group snapshots.
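
For illustration, a VolumeGroupSnapshot that groups PVCs by label might look like the following sketch; the group snapshot class and labels are hypothetical, and the API version may differ depending on the snapshot controller shipped with the cluster.

apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: app-group-snapshot
  namespace: example-app
spec:
  volumeGroupSnapshotClassName: csi-group-snapclass   # class backed by a CSI driver that supports group snapshots
  source:
    selector:
      matchLabels:
        app: example-app        # all PVCs in the namespace with this label are snapshotted together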

GCP PD CSI driver support for the C3 instance type for bare metal and the N4 machine series is generally available

The Google Cloud Platform Persistent Disk (GCP PD) Container Storage Interface (CSI) driver supports the C3 instance type for bare metal and the N4 machine series. The C3 instance type and N4 machine series support hyperdisk-balanced disks.

Additionally, hyperdisk storage pools are supported for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed.

For OpenShift Container Platform 4.18, this feature is generally available.

OpenStack Manila expanding persistent volumes is generally available

In OpenShift Container Platform 4.18, OpenStack Manila supports expanding Container Storage Interface (CSI) persistent volumes (PVs). This feature is generally available.

GCP Filestore supporting Workload Identity is generally available

In OpenShift Container Platform 4.18, Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) storage supports Workload Identity. This allows users to access Google Cloud resources by using federated identities instead of a service account key. For OpenShift Container Platform 4.18, this feature is generally available.

Web console

Administrator perspective

This release introduces the following updates to the Administrator perspective of the web console:

  • A new setting for hiding the Getting started resources card on the Overview page, allowing for maximum use of the dashboard.

  • A Start Job option was added to the CronJob List and Details pages, so you can manually start individual CronJobs directly in the web console without having to use the oc CLI.

  • The Import YAML button in the masthead is now a Quick Create button that you can use for the rapid deployment of workloads by importing from YAML or Git, or by using container images.

  • You can build your own generative-AI chat bot with a chat bot sample. The generative-AI chat bot sample is deployed with Helm and includes a full CI/CD pipeline. You can also run this sample on your cluster with no GPUs.

  • You can import YAML into the console using OpenShift Lightspeed.

Content Security Policy (CSP)

With this release, the console Content Security Policy (CSP) is deployed in report-only mode. CSP violations will be logged in the browser console, but the associated CSP directives will not be enforced. Dynamic plugin creators can add their own policies.

Additionally, you can report any plugins that break security policies, and administrators can disable any plugin that breaks those policies. This feature is behind a feature gate, so you must enable it manually.

Developer Perspective

This release introduces the following updates to the Developer perspective of the web console:

  • Added an OpenShift Container Platform toolkit, Quarkus tools and JBoss EAP, and a Language Server Protocol plugin for Visual Studio Code and IntelliJ.

  • Previously, when moving from light mode to dark mode in the Monaco editor, the console remained in dark mode. With this update, the Monaco code editor will match the selected theme.

Notable technical changes

Uninstalling the SR-IOV Network Operator changed

Starting with OpenShift Container Platform 4.18, to successfully uninstall the SR-IOV Network Operator, you must also delete the sriovoperatorconfigs custom resource and its custom resource definition.

For more information, see Uninstalling the SR-IOV Network Operator.
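
The following is a minimal sketch of the additional cleanup, assuming the Operator is installed in the default openshift-sriov-network-operator namespace and the CRD uses the sriovnetwork.openshift.io group; verify the names on your cluster before deleting:

    $ oc delete sriovoperatorconfigs --all -n openshift-sriov-network-operator
    $ oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io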

Changes to the iSCSI initiator name and service

Previously, the /etc/iscsi/initiatorname.iscsi file was present by default on RHCOS images. With this release, the initiatorname.iscsi file is no longer present by default. Instead, it is created at run time when the iscsi.service and subsequent iscsi-init.service services start. This service is not enabled by default and might affect any CSI drivers that rely on reading the contents of the initiatorname.iscsi file prior to starting the service.
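
If a CSI driver in your environment depends on the initiator name being present before it starts, one possible approach is to enable iscsi.service through a MachineConfig object. This is only a sketch under that assumption, not a required step from this release:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-enable-iscsi
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
            - name: iscsi.service
              enabled: true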

Operator SDK 1.38.0

OpenShift Container Platform 4.18 supports Operator SDK 1.38.0. See Installing the Operator SDK CLI to install or update to this latest version.

Operator SDK 1.38.0 now supports Kubernetes 1.30 and uses Kubebuilder v4.

Metrics endpoints are now secured using native Kubebuilder metrics configuration instead of kube-rbac-proxy, which is now removed.

The following support has also been removed from Operator SDK:

  • Scaffolding tools for Hybrid Helm-based Operator projects

  • Scaffolding tools for Java-based Operator projects

If you have Operator projects that were previously created or maintained with Operator SDK 1.36.1, update your projects to maintain compatibility with Operator SDK 1.38.0.

Deprecated and removed features

Some features available in previous releases have been deprecated or removed.

Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.18, refer to the tables below. Additional details for more functionality that has been deprecated and removed are listed after the tables.

In the following tables, features are marked with the following statuses:

  • Not Available

  • Technology Preview

  • General Availability

  • Deprecated

  • Removed

Bare metal monitoring deprecated and removed features

Table 6. Bare Metal Event Relay Operator tracker
Feature | 4.16 | 4.17 | 4.18
Bare Metal Event Relay Operator | Deprecated | Removed | Removed

Images deprecated and removed features

Table 7. Images deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
Cluster Samples Operator | Deprecated | Deprecated | Deprecated

Installation deprecated and removed features

Table 8. Installation deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
--cloud parameter for oc adm release extract | Deprecated | Deprecated | Deprecated
CoreDNS wildcard queries for the cluster.local domain | Deprecated | Deprecated | Deprecated
compute.platform.openstack.rootVolume.type for RHOSP | Deprecated | Deprecated | Deprecated
controlPlane.platform.openstack.rootVolume.type for RHOSP | Deprecated | Deprecated | Deprecated
ingressVIP and apiVIP settings in the install-config.yaml file for installer-provisioned infrastructure clusters | Deprecated | Deprecated | Deprecated
Package-based RHEL compute machines | Deprecated | Deprecated | Deprecated
Managing machines with the Cluster API for Microsoft Azure | Not Available | Not Available | Technology Preview
platform.aws.preserveBootstrapIgnition parameter for Amazon Web Services (AWS) | Deprecated | Deprecated | Deprecated
Installing a cluster on AWS with compute nodes in AWS Outposts | Deprecated | Deprecated | Deprecated

Machine management deprecated and removed features

Table 9. Machine management deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
Managing machines with the Machine API for Alibaba Cloud | Removed | Removed | Removed
Cloud controller manager for Alibaba Cloud | Removed | Removed | Removed

Networking deprecated and removed features

Table 10. Networking deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
OpenShift SDN network plugin | Deprecated | Removed | Removed
iptables | Deprecated | Deprecated | Deprecated

Node deprecated and removed features

Table 11. Node deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
ImageContentSourcePolicy (ICSP) objects | Deprecated | Deprecated | Deprecated
Kubernetes topology label failure-domain.beta.kubernetes.io/zone | Deprecated | Deprecated | Deprecated
Kubernetes topology label failure-domain.beta.kubernetes.io/region | Deprecated | Deprecated | Deprecated
cgroup v1 | Deprecated | Deprecated | Deprecated

OpenShift CLI (oc) deprecated and removed features

Table 12. OpenShift CLI (oc) deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
oc-mirror plugin v1 | General Availability | General Availability | Deprecated

Operator lifecycle and development deprecated and removed features

Table 13. Operator lifecycle and development deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
Operator SDK | Deprecated | Deprecated | Deprecated
Scaffolding tools for Ansible-based Operator projects | Deprecated | Deprecated | Deprecated
Scaffolding tools for Helm-based Operator projects | Deprecated | Deprecated | Deprecated
Scaffolding tools for Go-based Operator projects | Deprecated | Deprecated | Deprecated
Scaffolding tools for Hybrid Helm-based Operator projects | Deprecated | Deprecated | Removed
Scaffolding tools for Java-based Operator projects | Deprecated | Deprecated | Removed
SQLite database format for Operator catalogs | Deprecated | Deprecated | Deprecated

Specialized hardware and driver enablement deprecated and removed features

Table 14. Specialized hardware and driver enablement deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18

Storage deprecated and removed features

Table 15. Storage deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
AliCloud Disk CSI Driver Operator | General Availability | Removed | Removed
Shared Resources CSI Driver Operator | Technology Preview | Deprecated | Removed

Web console deprecated and removed features

Table 16. Web console deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
PatternFly 4 | Deprecated | Deprecated | Deprecated
React Router 5 | Deprecated | Deprecated | Deprecated

Workloads deprecated and removed features

Table 17. Workloads deprecated and removed tracker
Feature | 4.16 | 4.17 | 4.18
DeploymentConfig objects | Deprecated | Deprecated | Deprecated

Removed features

The Shared Resource CSI Driver is removed

The Shared Resource CSI Driver feature was deprecated in OpenShift Container Platform 4.17, and is now removed from OpenShift Container Platform 4.18. This feature is now generally available in Builds for Red Hat OpenShift 1.1. To use this feature, ensure you are using Builds for Red Hat OpenShift 1.1 or later.

The selected bundles feature is removed in oc-mirror v2

The selected bundles feature is removed from the oc-mirror v2 Generally Available release. This change prevents issues where specifying the wrong Operator bundle version could break the Operators in a cluster. (OCPBUGS-49419)

Notice of future deprecation

Future Kubernetes API removals

The next minor release of OpenShift Container Platform is expected to use Kubernetes 1.32. Kubernetes 1.32 removed a deprecated API.

See the Deprecated API Migration Guide in the upstream Kubernetes documentation for the list of planned Kubernetes API removals.

See Navigating Kubernetes API deprecations and removals for information about how to check your cluster for Kubernetes APIs that are planned for removal.

Bug fixes

API Server and Authentication

  • Previously, API validation did not prevent an authorized client from decreasing the current revision of a static pod operand, such as kube-apiserver, or prevent the operand from progressing concurrently on two nodes. With this release, requests that attempt to do either are now rejected. (OCPBUGS-48502)

  • Previously, the oauth-server would crash when configuring an OAuth identity provider (IDP) with a callback path that contained spaces. With this release, the issue is resolved. (OCPBUGS-44099)

Bare Metal Hardware Provisioning

  • Previously, the Bare Metal Operator (BMO) created the HostFirmwareComponents custom resource for all Bare Metal hosts (BMH), including ones based on the intelligent platform management interface (IPMI), which did not support it. With this release, HostFirmwareComponents custom resources are only created for BMH that support it. (OCPBUGS-49699)

  • Previously, in bare-metal configurations where the provisioning network is disabled but the bootstrapProvisioningIP field is set, the bare-metal provisioning components might fail to start. These failures occur when the provisioning process reconfigures the external network interface on the bootstrap VM during the process of pulling container images. With this release, dependencies were added to ensure that interface reconfiguration only occurs when the network is idle, preventing conflicts with other processes. As a result, the bare-metal provisioning components now start reliably, even when the bootstrapProvisioningIP field is set and the provisioning network is disabled. (OCPBUGS-36869)

  • Previously, Ironic inspection failed if special or invalid characters existed in the serial number of a block device. This occurred because the lsblk command failed to escape the characters. With this release, the command now escapes the characters so this issue no longer persists. (OCPBUGS-36492)

  • Previously, a check for unexpected IP addresses on the provisioning interface during metal3 pod startup was triggered. This issue occurred because of the presence of an IP address supplied by DHCP from a previous version of the pod that existed on another node. With this release, the pod startup check looks only for IP addresses that exist outside the provisioning network subnet, so that a metal3 pod starts immediately, even when the pod has moved to a different node. (OCPBUGS-38507)

Cloud Compute

  • Previously, the availability set fault domain count was hardcoded to 2. This value works in most regions in Microsoft Azure because the fault domain counts are typically at least 2, but failed in the centraluseuap and eastusstg regions. With this release, the availability set fault domain count in a region is set dynamically. (OCPBUGS-48659)

  • Previously, an updated zone API error message from Google Cloud Platform (GCP) with increased granularity caused the machine controller to mistakenly mark the machine as valid with a temporary cloud error instead of recognizing it as an invalid machine configuration error. This prevented the invalid machine from transitioning to a failed state. With this update, the machine controller handles the new error messages correctly, and machines with an invalid zone or project ID now transition properly to a failed state. (OCPBUGS-47790)

  • Previously, the certificate signing request (CSR) approver included certificates from other systems within its calculations for whether it was overwhelmed and should stop approving certificates. In larger clusters, with other subsystems using CSRs, the CSR approver counted unrelated unapproved CSRs towards its total and prevented further approvals. With this release, the CSR approver only includes CSRs that it can approve, by using the signerName property as a filter. As a result, the CSR approver only prevents new approvals when there are a large number of unapproved CSRs for the relevant signerName values. (OCPBUGS-46425)

  • Previously, some cluster autoscaler metrics were not initialized, and therefore were not available. With this release, these metrics are initialized and available. (OCPBUGS-46416)

  • Previously, if an informer watch stream missed an event because of a temporary disconnection, the informer might return a special signal type after it reconnected to the network, especially when the informer recognized that an EndpointSlice object was deleted during the temporary disconnection. The returned signal type indicated that the state of the event had stalled and that the object was deleted. The returned signal type was not accurate and might have caused confusion for an OpenShift Container Platform user. With this release, the Cloud Controller Manager (CCM) handles unexpected signal types so that OpenShift Container Platform users do not receive confusing information from returned types. (OCPBUGS-45972)

  • Previously, when the AWS DHCP option set was configured to use a custom domain name that contains a trailing period (.), OpenShift Container Platform installation failed. With this release, the logic that extracts the hostname of EC2 instances and turns them into Kubelet node names is updated to trim trailing periods so that the resulting Kubernetes object name is valid. Trailing periods in the DHCP option set no longer cause installation to fail. (OCPBUGS-45889)

  • Previously, installation of an AWS cluster failed in certain environments on existing subnets when the publicIp parameter for the MachineSet object was explicitly set to false. With this release, a configuration value set for publicIp no longer causes issues when the installation program provisions machines for your AWS cluster in certain environments. (OCPBUGS-45130)

  • Previously, enabling a provisioning network by editing the cluster-wide Provisioning resource was only possible on clusters with platform type baremetal, such as ones created by the installer-provisioned infrastructure (IPI) installer. On bare-metal single-node OpenShift (SNO) and user-provisioned infrastructure (UPI) clusters, this resulted in a validation error. The excessive validation has been removed, and enabling a provisioning network is now possible on bare-metal clusters with platform type none. As with IPI, users are responsible for making sure that all networking requirements are met for this operation. (OCPBUGS-43371)

  • Previously, the installation program populated the network.devices, template and workspace fields in the spec.template.spec.providerSpec.value section of the VMware vSphere control plane machine set custom resource (CR). These fields should be set in the vSphere failure domain, and the installation program populating them caused unintended behaviors. Updating these fields did not trigger an update to the control plane machines, and these fields were cleared when the control plane machine set was deleted. With this release, the installation program is updated to no longer populate values that are included in the failure domain configuration. If these values are not defined in a failure domain configuration, for instance on a cluster that is updated to OpenShift Container Platform 4.18 from an earlier version, the values defined by the installation program are used. (OCPBUGS-32947)

  • Previously, the cluster autoscaler would occasionally leave a node with a PreferNoSchedule taint during deletion. With this release, the maximum bulk deletion limit is disabled so that nodes with this taint no longer remain after deletion. (OCPBUGS-42132)

  • Previously, the Cloud Controller Manager (CCM) liveness probe used on IBM Cloud cluster installations could not use loopback, and this caused the probe to continuously restart. With this release, the probe can use loopback so that this issue no longer occurs. (OCPBUGS-41936)

  • Previously, the approval mechanism for certificate signing requests (CSRs) failed because the node name and internal DNS entry for a CSR did not match in terms of character case differences. With this release, an update to the approval mechanism for CSRs skips case-sensitive checks so that a CSR with a matching node name and internal DNS entry does not fail the check because of character case differences. (OCPBUGS-36871)

  • Previously, the cloud node manager had permission to update any node object when it needed to update only the node on which it was running. With this release, restrictions have been put in place to prevent the node manager on one node from updating the node object of another node. (OCPBUGS-22190)

Cloud Credential Operator

  • Previously, the aws-sdk-go-v2 software development kit (SDK) failed to authenticate an AssumeRoleWithWebIdentity API operation on an Amazon Web Services (AWS) Security Token Service (STS) cluster. With this release, pod-identity-webhook now includes a default region so that this issue no longer persists. (OCPBUGS-45937)

  • Previously, secrets in the cluster were fetched in a single call. When there were a large number of secrets, this caused the API to time out. With this release, the Cloud Credential Operator fetches secrets in batches limited to 100 secrets. This change prevents timeouts when there are a large number of secrets in the cluster. (OCPBUGS-39531)

Cluster Resource Override Admission Operator

  • Previously, if you specified the forceSelinuxRelabel field in a ClusterResourceOverride custom resource (CR), and then modified it afterwards, the change would not be reflected in the clusterresourceoverride-configuration config map, which is used to apply the SELinux re-labeling workaround feature. With this update, the Cluster Resource Override Operator can track the change to the forceSelinuxRelabel feature in order to reconcile the config map object. As a result, the config map object is correctly updated when you change the ClusterResourceOverride CR field. (OCPBUGS-48692)

Cluster Version Operator

  • Previously, a custom security context constraint (SCC) could prevent a pod that was generated by the Cluster Version Operator from completing a cluster version update. With this release, OpenShift Container Platform now sets a default SCC on each of these pods, so that a custom SCC does not impact them. (OCPBUGS-46410)

  • Previously, the Cluster Version Operator (CVO) did not filter internal errors that were propagated to the ClusterVersion Failing condition message. As a result, errors that did not negatively impact the update were shown in the ClusterVersion Failing condition message. With this release, the errors that are propagated to the ClusterVersion Failing condition message are filtered. (OCPBUGS-15200)

Developer Console

  • Previously, if a PipelineRun was using a resolver, rerunning that PipelineRun resulted in an error. With this fix, you can rerun a PipelineRun that uses a resolver. (OCPBUGS-45228)

  • Previously, if you edited a deployment config in Form view, the ImagePullSecrets values were duplicated. With this update, editing the form does not add duplicate entries. (OCPBUGS-45227)

  • Previously, when you searched on the OperatorHub or another catalog, you would experience periods of latency between each key press. With this update, the input on the catalog search bars is debounced. (OCPBUGS-43799)

  • Previously, no option existed to close the Getting started resources section in the Administrator perspective. With this change, users can close the Getting started resources section. (OCPBUGS-38860)

  • Previously, when cron jobs were created, pods were created too quickly, causing the component that fetches new pods from the cron job to fail. With this update, a 3-second delay was added before fetching the pods of the cron job. (OCPBUGS-37584)

  • Previously, resources that were created when a new user was created were not removed automatically when the user was deleted. This cluttered the cluster with config maps, roles, and role bindings. With this update, ownerRefs was added to these resources so that they are removed when the user is deleted and no longer clutter the cluster. (OCPBUGS-37560)

  • Previously, when importing a Git repository using the serverless import strategy, the environment variables from the func.yaml were not automatically loaded into the form. With this update, the environment variables are now loaded upon import. (OCPBUGS-34764)

  • Previously, users would erroneously see an option to import a repository using the pipeline build strategy when the devfile import strategy was selected; however, this was not possible. With this update, the pipeline strategy has been removed when the devfile import strategy is selected. (OCPBUGS-32526)

  • Previously, when using a custom template, you could not enter multi-line parameters, such as private keys. With this release, you can switch between single-line and multi-line modes so you can fill out template fields with multi-line inputs. (OCPBUGS-23080)

Image Registry

  • Previously, you could not install a cluster on AWS in the ap-southeast-5 region or other regions because the OpenShift Container Platform internal registry did not support these regions. With this release, the internal registry is updated to include the following regions so that this issue no longer occurs:

    • ap-southeast-5

    • ap-southeast-7

    • ca-west-1

    • il-central-1

    • mx-central-1

  • Previously, when the Image Registry Operator was configured with networkAccess: Internal in Microsoft Azure, it would not be possible to successfully set managementState to Removed in the Operator configuration. This occurred because of an authorization error when the Operator tried to delete the storage container. With this update, the Image Registry Operator continues with the deletion of the storage account, which automatically deletes the storage container, resulting in a successful change into the Removed state. (OCPBUGS-42732)

  • Previously, when configuring the image registry to use a Microsoft Azure storage account located in a resource group other than the cluster’s resource group, the Image Registry Operator would become degraded due to a validation error. This update changes the Image Registry Operator to allow for authentication by only the storage account key without validating for other authentication requirements. (OCPBUGS-42514)

  • Previously, installation with the OpenShift installer used the cluster API. Virtual networks created by the cluster API use a different tag template. Consequently, setting .spec.storage.azure.networkAccess.type: Internal in the Image Registry Operator’s config.yaml file resulted in the Image Registry Operator unable to discover the virtual network. With this update, the Image Registry Operator searches for both new and old tag templates, resolving the issue. (OCPBUGS-42196)

  • Previously, the image registry would, in some cases, panic when attempting to purge failed uploads from s3-compatible storage providers. This was caused by the image registry’s s3 driver mishandling empty directory paths. With this update, the image registry properly handles empty directory paths, fixing the panic. (OCPBUGS-39108)

Installer

Insights Operator

  • Previously, during entitled builds on an OpenShift Container Platform cluster running on IBM Z hardware, repositories were not enabled. With this release, the issue is resolved and you can enable repositories during entitled builds on an OpenShift Container Platform cluster running on IBM Z hardware. (OCPBUGS-32233)

Machine Config Operator

  • Previously, Red Hat Enterprise Linux (RHEL) CoreOS templates that were shipped by the Machine Config Operator (MCO) caused node scaling to fail on Red Hat OpenStack Platform (RHOSP). This issue happened because of an issue with systemd and the presence of a legacy boot image from older versions of OpenShift Container Platform. With this release, a patch fixes the issue with systemd and removes the legacy boot image, so that node scaling can continue as expected. (OCPBUGS-42324)

  • Previously, if you enabled on-cluster layering (OCL) for your cluster and you attempted to configure kernel arguments in the machine configuration, machine config pools (MCPs) and nodes entered a degraded state. This happened because of a configuration mismatch. With this release, a check for kernel arguments on an OCL-enabled cluster ensures that the arguments are configured and applied to nodes in the cluster. This update prevents any mismatch that previously occurred between the machine configuration and the node configuration. (OCPBUGS-34647)

Management Console

  • Previously, clicking the "Don’t show again" link in the Lightspeed modal dialog did not correctly navigate to the general User Preference tab when one of the other User Preference tabs was displayed. After this update, clicking the "Don’t show again" link correctly navigates to the general User Preference tab. (OCPBUGS-48106)

  • Previously, multiple external link icons might show in the primary action button of the OperatorHub modal. With this update, only a single external link icon appears. (OCPBUGS-47742)

  • Previously, the web console was disabled when the authorization type was set to None in the cluster authentication configuration. With this update, the web console is no longer disabled when the authorization type is set to None. (OCPBUGS-46068)

  • Previously, the MachineConfig Details tab displayed an error when one or more spec.config.storage.file did not include optional data. With this update, the error no longer occurs and the Details tab renders as expected. (OCPBUGS-44049)

  • Previously, an extra name property was passed into resource list page extensions used to list related Operands on the CSV details page. As a result, the Operand list was filtered by the cluster service version (CSV) name and often returned an empty list. With this update, Operands are listed as expected. (OCPBUGS-42796)

  • Previously, the Sample tab did not show when creating a new ConfigMap with one or more ConfigMap ConsoleYAMLSamples present on the cluster. After this update, the Sample tab shows with one or more ConfigMap ConsoleYAMLSamples present. (OCPBUGS-41492)

  • Previously, the Events page resource type filter incorrectly reported the number of resources when three or more resources were selected. With this update, the filter always reports the correct number of resources. (OCPBUGS-38701)

  • Previously, the version number text in the updates graph on the Cluster Settings page appeared as black text on a dark background while viewing the page using Firefox in dark mode. With this update, the text appears as white text. (OCPBUGS-37988)

  • Previously, Alerting pages did not show resource information in their empty state. With this update, resource information is available on the Alerting pages. (OCPBUGS-36921)

  • Previously, the Operator Lifecycle Manager (OLM) CSV annotation contained unexpected JSON, which was successfully parsed, but then threw a runtime error when attempting to use the resulting value. With this update, JSON values from OLM annotations are validated before use, errors are logged, and the console does not fail when unexpected JSON is received in an annotation. (OCPBUGS-35744)

  • Previously, silenced alerts were visible on the Overview page of the OpenShift Container Platform web console. This occurred because the alerts did not include any external labels. With this release, silenced alerts include the external labels so they are filtered out and are not viewable. (OCPBUGS-31367)

Monitoring

  • Previously, if the SMTP smarthost or from fields under the emailConfigs object were not specified at the global or receiver level in the AlertmanagerConfig custom resource (CR), Alertmanager would crash because these fields are required. With this release, the Prometheus Operator fails reconciliation if these fields are not specified. Therefore, the Prometheus Operator no longer pushes invalid configurations to Alertmanager, preventing it from crashing. (OCPBUGS-48050)

  • Previously, the Cluster Monitoring Operator (CMO) did not mark configurations in cluster-monitoring-config and user-workload-monitoring-config config maps as invalid for unknown (for example, no longer supported) or duplicated fields. With this release, stricter validation is added that helps identify such errors. (OCPBUGS-42671)

  • Previously, it was not possible for a user to query the user workload monitoring Thanos API endpoint with POST requests. With this update, a cluster admin can bind a new pod-metrics-reader cluster role with a role binding or cluster role binding to allow POST queries for a user or service account, as sketched in the example after this list. (OCPBUGS-41158)

  • Previously, an invalid config map configuration for core platform monitoring, user workload monitoring, or both caused Cluster Monitoring Operator (CMO) to report an InvalidConfiguration error. With this release, if only the user workload monitoring configuration is invalid, CMO reports UserWorkloadInvalidConfiguration, making it clear where the issue is located. (OCPBUGS-33863)

  • Previously, telemeter-client containers showed a TelemeterClientFailures Warnings message in multiple clusters. With this release, a runbook is added for the TelemeterClientFailures alert to explain the cause of the alert triggering and the alert provides resolution steps. (OCPBUGS-33285)

  • Previously, AlertmanagerConfig objects with invalid child routes generated invalid Alertmanager configuration leading to Alertmanager disruption. With this release, Prometheus Operator rejects such AlertmanagerConfig objects, and users receive a warning about the invalid child routes in logs. (OCPBUGS-30122)

  • Previously, the config-reloader for Prometheus for user-defined projects would fail if unset environment variables were used in the ServiceMonitor configuration, which resulted in Prometheus pods failing. With this release, the reloader no longer fails when an unset environment variable is encountered. Instead, unset environment variables are left as they are, while set environment variables are expanded as usual. Any expansion errors, suppressed or otherwise, can be tracked through the reloader_config_environment_variable_expansion_errors metric. (OCPBUGS-23252)
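
For the pod-metrics-reader cluster role mentioned in the list above, the following is a minimal sketch of granting POST access to a service account; the binding name, namespace, and service account name are placeholders:

    $ oc create clusterrolebinding thanos-post-query \
        --clusterrole=pod-metrics-reader \
        --serviceaccount=<namespace>:<service-account>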

Networking

  • Previously, enabling encapsulated security payload (ESP) hardware offload when using IPsec on Open vSwitch attached interfaces would break connectivity in your cluster. With this release, OpenShift Container Platform disables ESP hardware offload on Open vSwitch attached interfaces by default, which resolves the issue. (OCPBUGS-42987)

  • Previously, if you deleted the default sriovOperatorConfig custom resource (CR), you could not recreate the default sriovOperatorConfig CR, because the ValidatingWebhookConfiguration was not initially deleted. With this release, the Single Root I/O Virtualization (SR-IOV) Network Operator removes validating webhooks when you delete the sriovOperatorConfig CR, so that you can create a new sriovOperatorConfig CR. (OCPBUGS-41897)

  • Previously, if you set custom annotations in a custom resource (CR), the SR-IOV Operator would override all the default annotations in the SriovNetwork CR. With this release, when you define custom annotations in a CR, the SR-IOV Operator does not override the default annotations. (OCPBUGS-41352)

  • Previously, bonds that were configured in active-backup mode would have IPsec Encapsulating Security Payload (ESP) offload active even if underlying links did not support ESP offload. This caused IPsec associations to fail. With this release, ESP offload is disabled for bonds so that IPsec associations pass. (OCPBUGS-39438)

  • Previously, the Machine Config Operator (MCO) vSphere resolve-prepender script used systemd directives that were incompatible with old boot image versions used in OpenShift Container Platform 4. With this release, nodes can scale by using newer boot image versions (4.13 and later), through manual intervention, or by upgrading to a release that includes this fix. (OCPBUGS-38012)

  • Previously, the Ingress Controller status incorrectly displayed as Degraded=False because of a migration time issue with the CanaryRepetitiveFailures condition. With this release, the Ingress Controller status is correctly marked as Degraded=True for the appropriate length of time that the CanaryRepetitiveFailures condition exists. (OCPBUGS-37491)

  • Previously, when a pod was running on a node that was assigned an egress IPv6 address, the pod was not able to communicate with the Kubernetes service in a dual-stack cluster. Traffic from the IP family that the egress IP did not apply to was dropped. With this release, only the source network address translation (SNAT) for the IP family that the egress IPs apply to is deleted, eliminating the risk of traffic being dropped. (OCPBUGS-37193)

  • Previously, the Single-Root I/O Virtualization (SR-IOV) Operator did not expire the acquired lease during the Operator’s shutdown operation. This impacted a new instance of the Operator, because the new instance had to wait for the lease to expire before the new instance was operational. With this release, an update to the Operator shutdown logic ensures that the Operator expires the lease when the Operator is shutting down. (OCPBUGS-23795)

  • Previously, for an Ingress resource with an IngressWithoutClassName alert, the Ingress Controller did not delete the alert along with deletion of the resource. The alert continued to show on the OpenShift Container Platform web console. With this release, the Ingress Controller resets the openshift_ingress_to_route_controller_ingress_without_class_name metric to 0 before the controller deletes the Ingress resource, so that the alert is deleted and no longer shows on the web console. (OCPBUGS-13181)

  • Previously, when either the clusterNetwork or serviceNetwork IP address pools overlapped with the default transit_switch_subnet 100.88.0.0/16 IP address and the custom value of transit_switch_subnet did not take effect, ovnkube-node pods crashed after the live migration operation. With this release, the custom value of transit_switch_subnet can be passed to ovnkube node pods, so that this issue no longer persists. (OCPBUGS-43740)

  • Previously, a change in OVN-Kubernetes that standardized the appProtocol value h2c to kubernetes.io/h2c was not recognized by OpenShift router. Consequently, specifying appProtocol: kubernetes.io/h2c on a service did not cause OpenShift router to use clear-text HTTP/2 to connect to the service endpoints. With this release, OpenShift router was changed to handle appProtocol: kubernetes.io/h2c the same way as it handles appProtocol: h2c, resolving the issue; see the example Service sketched after this list. (OCPBUGS-42972)

  • Previously, instructions that guided the user after changing the LoadBalancer parameter from External to Internal were missing for IBM Power Virtual Server, Alibaba Cloud, and Red Hat OpenStack Platform (RHOSP). This caused the Ingress Controller to be put in a permanent Progressing state. With this release, the message "The IngressController scope was changed from Internal to External" is followed by "To effectuate this change, you must delete the service", which resolves the permanent Progressing state. (OCPBUGS-39151)

  • Previously, no event was logged when an ingress-to-route conversion failed. With this update, this error appears in the event logs. (OCPBUGS-29354)

  • Previously, an ovnkube-node pod on a node that uses cgroup v1 was failing because it could not find the kubelet cgroup path. With this release, an ovnkube-node pod no longer fails if the node uses cgroup v1. However, the OVN-Kubernetes network plugin outputs a UDNKubeletProbesNotSupported event notification. If you enable cgroup v2 for each node, OVN-Kubernetes no longer outputs the event notification. (OCPBUGS-50513)

  • Previously, when you finished the live migration for a kubevirt virtual machine (VM) that uses the Layer 2 topology, an old node still transmits IPv4 egress traffic to the virtual machine. With this release, the OVN-Kubernetes plugin updates the gateway MAC address for a kubevirt virtual machine (VM) during the live migration process so that this issue no longer occurs. (OCPBUGS-49857)

  • Previously, the DNS-based egress firewall incorrectly prevented creation of a firewall rule that contained a DNS name in uppercase characters. With this release, a fix to the egress firewall no longer prevents creation of a firewall rule that contains a DNS name in uppercase characters. (OCPBUGS-49589)

  • Previously, when you attempted to use the Cluster Network Operator (CNO) to upgrade a cluster with existing localnet networks, ovnkube-control-plane pods failed to run. This happened because the ovnkube-cluster-manager container could not process an OVN-Kubernetes localnet topology network that did not have subnets defined. With this release, a fix ensures that the ovnkube-cluster-manager container can process an OVN-Kubernetes localnet topology network that does not have subnets defined. (OCPBUGS-44195)

  • Previously, the SR-IOV Network Operator could not retrieve metadata when cloud-native network function (CNF) workers were deployed with a configuration drive on Red Hat OpenStack Platform (RHOSP). A configuration drive is often unmounted after a boot operation on immutable systems, so now the Operator dynamically mounts a configuration drive when required. The Operator can now retrieve the metadata and then unmount the configuration drive. This means that you no longer need to manually mount and unmount the configuration drive. (OCPBUGS-41829)

  • Previously, when you switched your cluster to use a different load balancer, the Ingress Operator did not remove the values from the classicLoadBalancer and networkLoadBalancer parameters in the IngressController custom resource (CR) status. This situation caused the status of the CR to report wrong information from the classicLoadBalancer and networkLoadBalancer parameters. With this release, after you switch your cluster to use a different load balancer, the Ingress Operator removes values from these parameters so that the CR reports a more accurate and less confusing message status. (OCPBUGS-38217)

  • Previously, a duplicate feature gate, ExternalRouteCertificate, was added to the FeatureGate CR. With this release, ExternalRouteCertificate is removed because an OpenShift Container Platform cluster does not use this feature gate. (OCPBUGS-36479)

  • Previously, after a user created a route, the user needed both create and update permissions on the routes/custom-host sub-resource to edit the .spec.tls.externalCertificate field of a route. With this release, this permission requirement has been fixed, so that a user only needs the create permission to edit the .spec.tls.externalCertificate field of a route. The update permission is now marked as an optional permission. (OCPBUGS-34373)
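
Regarding the kubernetes.io/h2c change described in the list above, the following is a minimal sketch of a Service that requests clear-text HTTP/2 to its endpoints; the names and ports are examples only:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-h2c
    spec:
      selector:
        app: example
      ports:
        - name: http
          port: 80
          targetPort: 8080
          appProtocol: kubernetes.io/h2c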

Node

Node Tuning Operator (NTO)

  • Previously, CPU masks for interrupt and network handling CPU affinity were computed incorrectly on machines with more than 256 CPUs. This issue prevented proper CPU isolation and caused systemd unit failures during internal node configuration. This fix ensures accurate CPU affinity calculations, enabling correct CPU isolation on machines with more than 256 CPUs. (OCPBUGS-36431)

  • Previously, entering an invalid value in any cpuset field under spec.cpu in the PerformanceProfile resource caused the webhook validation to crash. With this release, improved error handling for the PerformanceProfile validation webhook ensures that invalid values for these fields return an informative error. (OCPBUGS-45616)

  • Previously, users could enter an invalid string for any CPU set in the performance profile, resulting in a broken cluster. With this release, the fix ensures that only valid strings can be entered, eliminating the risk of cluster breakage. (OCPBUGS-47678)

  • Previously, configuring the Node Tuning Operator (NTO) using PerformanceProfiles created the ocp-tuned-one-shot systemd service, which ran before kubelet and blocked its execution. The systemd service invoked Podman, which used the NTO image. When the NTO image was not present, Podman tried to fetch the image. With this release, support for cluster-wide proxy environment variables defined in /etc/mco/proxy.env is added. This support allows Podman to pull the NTO image in environments that need to use http(s) proxy for out-of-cluster connections. (OCPBUGS-39005)

Observability

  • Previously, a namespace was passed to a full cluster query on the alerts graph, and this caused the tenancy API path to be used. The API lacked permissions to retrieve data, so no data was shown on the alerts graph. With this release, the namespace is no longer passed to a full cluster query for an alert graph. A non-tenancy API path is now used because this API has the correct permissions to retrieve data. Data is now available on an alert graph. (OCPBUGS-46371)

  • Previously, bounds were based on the first bar in a bar chart. If a bar was larger in size than the first bar, the bar would extend beyond the bar chart boundary. With this release, the bound for a bar chart is based on the largest bar, so no bars extend outside the boundary of a bar chart. (OCPBUGS-46059)

  • Previously, a Red Hat Advanced Cluster Management (RHACM) Alerting UI refactor update caused an isEmpty check to go missing on the Observe → Metrics menu. The missing check inverted the behavior of the Show all Series and Hide all Series states. This release re-adds the isEmpty check so that Show all Series is visible when series are hidden and Hide all Series is visible when series are shown. (OCPBUGS-46047)

  • Previously, on the Observe → Alerting → Silences tab, the DateTime component changed the ordering of an event and its value. Because of this issue, you could not edit the until parameter for a silenced alert in either the Developer or the Administrator perspective. With this release, a fix to the DateTime component means that you can now edit the until parameter for a silenced alert. (OCPBUGS-46021)

  • Previously, when using the Developer perspective with custom editors, pressing the n key caused the Namespace menu to open unexpectedly. The issue happened because the keyboard shortcut did not account for custom editors. With this release, the Namespace menu accounts for custom editors and does not open when you press the n key. (OCPBUGS-38775)

  • Previously, on the Observe → Alerting → Silences tab, the creator field was not autopopulated and was not designated as mandatory. This issue happened because the API left the field empty from OpenShift Container Platform 4.15 onwards. With this update, the field is marked as mandatory and is populated with the current user for correct validation. (OCPBUGS-35048)

oc-mirror

  • Previously, when using the oc-mirror --v2 delete --generate command, the contents of the working-dir/cluster-resources directory were cleared. With this fix, the working-dir/cluster-resources directory is not cleared when the delete feature is used. (OCPBUGS-48430)

  • Previously, release images were signed using a SHA-1 key. On RHEL 9 FIPS STIG-compliant machines, verification of release signatures using the old SHA-1 key failed due to security restrictions on weak keys. With this release, release images are signed using a new SHA-256 trusted key so that the release signatures no longer fail. (OCPBUGS-48314)

  • Previously, when using the --force-cache-delete flag to delete images from a remote registry, the deletion process did not work as expected. With this update, the issue has been resolved, ensuring that images are deleted properly when the flag is used. (OCPBUGS-47690)

  • Previously, oc-mirror plugin v2 could not delete the graph image when the mirroring uses a partially disconnected mirroring workflow (mirror-to-mirror). With this update, graph images can now be deleted regardless of the mirroring workflow used. (OCPBUGS-46145)

  • Previously, if the same image was used by multiple OpenShift Container Platform release components, oc-mirror plugin v2 attempted to delete the image multiple times, but failed after the first attempt. This issue has been resolved by ensuring oc-mirror plugin v2 generates a list of unique images during the delete --generate phase. (OCPBUGS-45299)

  • Previously, oci catalogs on disk were not mirrored correctly in the oc-mirror plugin v2. With this update, oci catalogs are now successfully mirrored. (OCPBUGS-44225)

  • Previously, if you reran the oc-mirror command, the rebuild of the oci catalog failed and an error was generated. With this release, if you rerun the oc-mirror command, the workspace file is deleted so that the failed catalog issue does not happen. (OCPBUGS-45171)

  • Previously, when you ran the oc adm node-image create command, the first attempt sometimes failed with an "image can't be pulled" error message. With this release, a retry mechanism addresses temporary failures when pulling the image from the release payload. (OCPBUGS-44388)

  • Previously, duplicate entries could appear in the signature ConfigMap YAML and JSON files created in the clusterresource object, leading to issues when applying them to the cluster. This update ensures that the generated files do not contain duplicates. (OCPBUGS-42428)

  • Previously, the release signature ConfigMap for oc-mirror plugin v2 was incorrectly stored in an archived TAR file instead of in the cluster-resources folder. This caused mirror2disk to fail. With this release, the release signature ConfigMap for oc-mirror plugin v2, in JSON or YAML format compatible with oc-mirror plugin v1, is stored in the cluster-resources folder. (OCPBUGS-38343) and (OCPBUGS-38233)

  • Previously, using an invalid log-level flag caused oc-mirror plugin v2 to panic. This update ensures that the oc-mirror plugin v2 handles invalid log levels gracefully. Additionally, the loglevel flag has been renamed to log-level to align with tools like Podman for the convenience of the user. (OCPBUGS-37740)

OpenShift CLI (oc)

  • Previously, the oc adm node-image create --pxe command did not generate only the Preboot Execution Environment (PXE) artifacts. Instead, the command created the PXE artifacts with other artifacts from a node-joiner pod and stored them all in the wrong subdirectory. Additionally, the PXE artifacts were incorrectly prefixed with agent instead of node. With this release, generated PXE artifacts are stored in the correct directory and receive the correct prefix. (OCPBUGS-46449)

Operator Lifecycle Manager (OLM)

  • Previously, concurrent reconciliation of the same namespace in Operator Lifecycle Manager (OLM) Classic led to ConstraintsNotSatisfiable errors on subscriptions. This update resolves the issue. (OCPBUGS-48660)

  • Previously, excessive catalog source snapshots caused severe performance regressions. This update fixes the issue. (OCPBUGS-48644)

  • Previously, when the kubelet terminated catalog registry pods with the TerminationByKubelet message, the registry pods were not recreated by the catalog Operator. This update fixes the issue. (OCPBUGS-46474)

  • Previously, OLM (Classic) failed to upgrade Operator cluster service versions (CSVs) due to a TLS validation error. This update fixes the issue. (OCPBUGS-43581)

  • Previously, service account tokens for Operator groups failed to generate automatically in Operator Lifecycle Manager (OLM) Classic. This update fixes the issue. (OCPBUGS-42360)

  • Previously when Operator Lifecycle Manager (OLM) v1 validated custom resource definition (CRD) upgrades, the message output when detecting changed default values was rendered in bytes instead of human-readable language. With this update, related messages are now updated to show human-readable values. (OCPBUGS-41726)

  • Previously, the status update function did not return an error when a connection error occurred in the Catalog Operator. As a result, the Operator might crash because the IP address returned a nil status. This update resolves the issue so that an error message is returned and the Operator no longer crashes. (OCPBUGS-37637)

  • Previously, catalog source registry pods did not recover from cluster node failures. This update fixes the issue. (OCPBUGS-36661)

  • Previously, Operators with many custom resources (CRs) exceeded API server timeouts. As a result, the install plan for the Operator got stuck in a pending state. This update fixes the issue by adding a paged view for listing CRs deployed on the cluster. (OCPBUGS-35358)

Performance Addon Operator

  • Previously, the Performance Profile Creator (PPC) failed to build a performance profile for compute nodes that had different core ID numbering (core per socket) for their logical processors and the nodes existed under the same node pool. For example, the PPC failed in a situation for two compute nodes that have logical processors 2 and 18, where one node groups them as core ID 2 and the other node groups them as core ID 9.

    With this release, PPC no longer fails to create the performance profile because PPC can now build a performance profile for a cluster that has compute nodes that each have different core ID numbering for their logical processors. The PPC now outputs a warning message that indicates to use the generated performance profile with caution, because different core ID numbering might impact system optimization and isolated management of tasks. (OCPBUGS-44644)

  • Previously, if you specified a long string of isolated CPUs in a performance profile, such as 0,1,2,…,512, the tuned, Machine Config Operator, and rpm-ostree components failed to process the string as expected. As a consequence, after you applied the performance profile, the expected kernel arguments were missing. The system failed silently with no reported errors. With this release, the string for isolated CPUs in a performance profile is converted to sequential ranges, such as 0-512. As a result, the kernel arguments are applied as expected in most scenarios. (OCPBUGS-45264)

    The issue might still occur with some combinations of input for isolated CPUs in a performance profile, such as a long list of odd numbers 1,3,5,…,511.
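
As a hedged illustration of the range format described in the previous note, the following PerformanceProfile fragment uses CPU ranges instead of a long comma-separated list; the profile name, CPU layout, and node selector are examples only:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: example-performanceprofile
    spec:
      cpu:
        isolated: "2-511"   # prefer ranges over long enumerations such as 2,3,4,...,511
        reserved: "0-1"
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""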

Red Hat Enterprise Linux CoreOS (RHCOS)

  • Previously, the kdump initramfs would stop responding when trying to open a local encrypted disk. This occurred even when the kdump destination was a remote machine that did not need access to the local disk. With this release, the issue is fixed and the kdump initramfs successfully opens a local encrypted disk. (OCPBUGS-43040)

  • Previously, explicitly disabling FIPS mode with fips=0 caused some systemd services, that assume FIPS mode was requested, to run and consequently fail. This issue resulted in RHCOS failing to boot. With this release, the relevant systemd services now only run if FIPS mode is enabled by specifying fips=1. As a result, RHCOS now correctly boots without FIPS mode enabled when fips=0 is specified. (OCPBUGS-39536)

Scalability and performance

  • Previously, you could configure the NUMA Resources Operator to map a nodeGroup to more than one MachineConfigPool. This implementation is contrary to the intended design of the Operator, which assumed a one-to-one mapping between a nodeGroup and a MachineConfigPool. With this release, if a nodeGroup maps to more than one MachineConfigPool, the Operator accepts the configuration, but the Operator state moves to Degraded. To retain the previous behavior, you can apply the config.node.openshift-kni.io/multiple-pools-per-tree: enabled annotation to the NUMA Resources Operator. However, the ability to assign a nodeGroup to more than one MachineConfigPool will be removed in a future release. (OCPBUGS-42523)
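
To retain the previous behavior described above, the following is a minimal sketch of applying the annotation; it assumes the NUMAResourcesOperator custom resource uses the nodetopology.openshift.io API group, and <cr-name> is a placeholder for the name of your custom resource:

    $ oc annotate numaresourcesoperators.nodetopology.openshift.io <cr-name> \
        config.node.openshift-kni.io/multiple-pools-per-tree=enabled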

Storage

  • Previously, Portworx plugin Container Storage Interface (CSI) migration failed without the inclusion of an upstream patch. With this release, the Portworx plugin CSI translation copies the secret name and namespace in Kubernetes version 1.31, so that an upstream patch is not required. (OCPBUGS-49437)

  • Previously, the VSphere Problem Detector Operator waited up to 24 hours to reflect a change in the clustercsidrivers.managementState parameter from Managed to Removed for a VMware vSphere cluster. With this release, the VSphere Problem Detector Operator now reflects this state change in about 1 hour. (OCPBUGS-39358)

  • Previously, the Azure File Driver attempted to reuse existing storage accounts. With this release, the Azure File Driver creates storage accounts during dynamic provisioning. This means that updated clusters using newly-created Persistent Volumes (PVs) also use a new storage account. PVs that were previously provisioned continue using the same storage account used before the cluster update. (OCPBUGS-38922)

  • Previously, the configuration loader logged YAML unmarshall errors when the INI succeeded. With this release, the unmarshall errors are no longer logged when the INI succeeds. (OCPBUGS-38368)

  • Previously, the Storage Operator counted an incorrect number of control plane nodes that existed in a cluster. This count is needed for the Operator to determine the number of replicas for controllers. With this release, the Storage Operator now counts the correct number of control plane nodes, leading to a more accurate count of replica controllers. (OCPBUGS-36233)

  • Previously, the manila-csi-driver and node registrar pods had missing health checks because of a configuration issue. With this release, the health checks are now added to both of these resources. (OCPBUGS-29240)

Technology Preview features status

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the scope of support for these features on the Red Hat Customer Portal.

In the following tables, features are marked with the following statuses:

  • Not Available

  • Technology Preview

  • General Availability

  • Deprecated

  • Removed

Authentication and authorization Technology Preview features

Table 18. Authentication and authorization Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Pod security admission restricted enforcement | Technology Preview | Technology Preview | Technology Preview

Edge computing Technology Preview features

Table 19. Edge computing Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Accelerated provisioning of GitOps ZTP | Technology Preview | Technology Preview | Technology Preview
Enabling disk encryption with TPM and PCR protection | Not Available | Technology Preview | Technology Preview

Installation Technology Preview features

Table 20. Installation Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Adding kernel modules to nodes with kvc | Technology Preview | Technology Preview | Technology Preview
Enabling NIC partitioning for SR-IOV devices | Technology Preview | General Availability | General Availability
User-defined labels and tags for Google Cloud Platform (GCP) | Technology Preview | General Availability | General Availability
Installing a cluster on Alibaba Cloud by using Assisted Installer | Technology Preview | Technology Preview | Technology Preview
Mount shared entitlements in BuildConfigs in RHEL | Technology Preview | Technology Preview | Technology Preview
OpenShift Container Platform on Oracle® Cloud Infrastructure (OCI) | General Availability | General Availability | General Availability
Selectable Cluster Inventory | Technology Preview | Technology Preview | Technology Preview
Installing a cluster on GCP using the Cluster API implementation | Technology Preview | General Availability | General Availability
OpenShift Container Platform on Oracle Compute Cloud@Customer (C3) | Not Available | Not Available | General Availability
OpenShift Container Platform on Oracle Private Cloud Appliance (PCA) | Not Available | Not Available | General Availability
Installing a cluster on VMware vSphere with multiple network interface controllers | Not Available | Not Available | Technology Preview

Machine Config Operator Technology Preview features

Table 21. Machine Config Operator Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Improved MCO state reporting (oc get machineconfigpool) | Technology Preview | Technology Preview | Technology Preview
On-cluster RHCOS image layering | Technology Preview | Technology Preview | Technology Preview
Node disruption policies | Technology Preview | General Availability | General Availability
Updating boot images for GCP clusters | Technology Preview | General Availability | General Availability
Updating boot images for AWS clusters | Technology Preview | Technology Preview | General Availability

Machine management Technology Preview features

Table 22. Machine management Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Managing machines with the Cluster API for Amazon Web Services | Technology Preview | Technology Preview | Technology Preview
Managing machines with the Cluster API for Google Cloud Platform | Technology Preview | Technology Preview | Technology Preview
Managing machines with the Cluster API for VMware vSphere | Technology Preview | Technology Preview | Technology Preview
Cloud controller manager for IBM Power® Virtual Server | Technology Preview | Technology Preview | Technology Preview
Defining a vSphere failure domain for a control plane machine set | General Availability | General Availability | General Availability
Cloud controller manager for Alibaba Cloud | Removed | Removed | Removed
Adding multiple subnets to an existing VMware vSphere cluster by using compute machine sets | Not Available | Not Available | Technology Preview

Monitoring Technology Preview features

Table 23. Monitoring Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Metrics Collection Profiles | Technology Preview | Technology Preview | Technology Preview

Web console Technology Preview features

Table 24. Web console Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Red Hat OpenShift Lightspeed in the OpenShift Container Platform web console | Technology Preview | Technology Preview | Technology Preview

Multi-Architecture Technology Preview features

Table 25. Multi-Architecture Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
kdump on arm64 architecture | Technology Preview | Technology Preview | Technology Preview
kdump on s390x architecture | Technology Preview | Technology Preview | Technology Preview
kdump on ppc64le architecture | Technology Preview | Technology Preview | Technology Preview
Multiarch Tuning Operator | General Availability | General Availability | General Availability
Support for configuring the image stream import mode behavior | Not Available | Not Available | Technology Preview

Networking Technology Preview features

Table 26. Networking Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
eBPF manager Operator | Not Available | Technology Preview | Technology Preview
Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses | Technology Preview | Technology Preview | Technology Preview
Updating the interface-specific safe sysctls list | Technology Preview | Technology Preview | Technology Preview
Egress service custom resource | Technology Preview | Technology Preview | Technology Preview
VRF specification in BGPPeer custom resource | Technology Preview | Technology Preview | Technology Preview
VRF specification in NodeNetworkConfigurationPolicy custom resource | Technology Preview | Technology Preview | Technology Preview
Host network settings for SR-IOV VFs | Technology Preview | General Availability | General Availability
Integration of MetalLB and FRR-K8s | Technology Preview | General Availability | General Availability
Automatic leap seconds handling for PTP grandmaster clocks | Not Available | General Availability | General Availability
PTP events REST API v2 | Not Available | General Availability | General Availability
Customized br-ex bridge needed by OVN-Kubernetes to use NMState | Technology Preview | Technology Preview | General Availability
Live migration to OVN-Kubernetes from OpenShift SDN | Not Available | General Availability | Not Available
User-defined network segmentation | Not Available | Technology Preview | General Availability
Dynamic configuration manager | Not Available | Not Available | Technology Preview
SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset | Not Available | Not Available | Technology Preview

Node Technology Preview features

Table 27. Nodes Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
MaxUnavailableStatefulSet featureset | Technology Preview | Technology Preview | Technology Preview
sigstore support | Not Available | Technology Preview | Technology Preview

OpenShift CLI (oc) Technology Preview features

Table 28. OpenShift CLI (oc) Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
oc-mirror plugin v2 | Technology Preview | Technology Preview | General Availability
oc-mirror plugin v2 enclave support | Technology Preview | Technology Preview | General Availability
oc-mirror plugin v2 delete functionality | Technology Preview | Technology Preview | General Availability

Extensions Technology Preview features

Table 29. Extensions Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Operator Lifecycle Manager (OLM) v1 | Technology Preview | Technology Preview | General Availability
OLM v1 runtime validation of container images using sigstore signatures | Not Available | Not Available | Technology Preview

Operator lifecycle and development Technology Preview features

Table 30. Operator lifecycle and development Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
Operator Lifecycle Manager (OLM) v1 | Technology Preview | Technology Preview | General Availability
Scaffolding tools for Hybrid Helm-based Operator projects | Deprecated | Deprecated | Removed
Scaffolding tools for Java-based Operator projects | Deprecated | Deprecated | Removed

Red Hat OpenStack Platform (RHOSP) Technology Preview features

Table 31. RHOSP Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
RHOSP integration into the Cluster CAPI Operator | Technology Preview | Technology Preview | Technology Preview
Control Plane with rootVolumes and etcd on local disk | Technology Preview | General Availability | General Availability

Scalability and performance Technology Preview features

Table 32. Scalability and performance Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
factory-precaching-cli tool | Technology Preview | Technology Preview | Technology Preview
Hyperthreading-aware CPU manager policy | Technology Preview | Technology Preview | Technology Preview
Mount namespace encapsulation | Technology Preview | Technology Preview | Technology Preview
Node Observability Operator | Technology Preview | Technology Preview | Technology Preview
Increasing the etcd database size | Technology Preview | Technology Preview | Technology Preview
Using RHACM PolicyGenerator resources to manage GitOps ZTP cluster policies | Technology Preview | Technology Preview | Technology Preview
Pinned Image Sets | Technology Preview | Technology Preview | Technology Preview

Storage Technology Preview features

Table 33. Storage Technology Preview tracker
Feature | 4.16 | 4.17 | 4.18
AWS EFS storage CSI usage metrics | Not Available | General Availability | General Availability
Automatic device discovery and provisioning with Local Storage Operator | Technology Preview | Technology Preview | Technology Preview
Azure File CSI snapshot support | Not Available | Technology Preview | Technology Preview
Read Write Once Pod access mode | General Availability | General Availability | General Availability
Shared Resources CSI Driver in OpenShift Builds | Technology Preview | Technology Preview | Technology Preview
Secrets Store CSI Driver Operator | Technology Preview | Technology Preview | General Availability
CIFS/SMB CSI Driver Operator | Technology Preview | Technology Preview | General Availability
VMware vSphere multiple vCenter support | Not Available | Technology Preview | General Availability
Disabling/enabling storage on vSphere | Not Available | Technology Preview | Technology Preview
RWX/RWO SELinux Mount | Not Available | Developer Preview | Developer Preview
Migrating CNS Volumes Between Datastores | Not Available | Developer Preview | Developer Preview
CSI volume group snapshots | Not Available | Not Available | Technology Preview
GCP PD supports C3/N4 instance types and hyperdisk-balanced disks | Not Available | Not Available | General Availability
GCP Filestore supports Workload Identity | Not Available | General Availability | General Availability
OpenStack Manila support for CSI resize | Not Available | Not Available | General Availability

Known issues

  • oc-mirror plugin v2 currently returns an exit status of 0, meaning "success", even when mirroring errors occur. As a result, do not rely on the exit status in automated workflows. Until this issue is resolved, manually check the mirroring_errors_XXX_XXX.txt file generated by oc-mirror for errors. (OCPBUGS-49880)
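
    As an interim check in automated workflows, you can fail the run when the errors file exists and is not empty. The following is a minimal sketch only; the exact file name pattern and location depend on how you invoke oc-mirror:

    $ if ls mirroring_errors_*.txt >/dev/null 2>&1; then echo "oc-mirror reported mirroring errors"; exit 1; fi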

  • The DNF package manager included in Red Hat Enterprise Linux CoreOS (RHCOS) images cannot be used at runtime, because DNF relies on additional packages to access entitled nodes in a cluster that are under a Red Hat subscription. As a workaround, use the rpm-ostree command instead. (OCPBUGS-35247)
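
    For example, to layer a package onto an RHCOS node instead of using DNF, you can run a command similar to the following from a root shell on the node; the package name is illustrative, and the change takes effect after a reboot unless you apply it live:

    # rpm-ostree install <package_name>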

  • A regression in the behavior of libreswan causes some nodes with IPsec enabled to lose communication with pods on other nodes in the same cluster. As a workaround, consider disabling IPsec for your cluster. (OCPBUGS-43713)

  • There is a known issue in OpenShift Container Platform version 4.18 that prevents configuring multiple subnets in the failure domain of a Nutanix cluster during installation. There is no workaround for this issue. (OCPBUGS-49885)

  • The following known issues exist for configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set:

    • Adding subnets above the existing subnet in the subnets stanza causes a control plane node to become stuck in the Deleting state. As a workaround, only add subnets below the existing subnet in the subnets stanza.

    • Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the OpenShift Container Platform cluster is unreachable. There is no workaround for this issue.

    These issues occur on clusters that use a control plane machine set to configure subnets regardless of whether subnets are specified in a failure domain or the provider specification. (OCPBUGS-50904)

  • There is a known issue with nodes that use cgroupv1 Linux Control Groups (cgroup). The following is an example of the error message displayed for impacted nodes: UDN are not supported on the node ip-10-0-51-120.us-east-2.compute.internal as it uses cgroup v1. An event of type Warning UDNKubeletProbesNotSupported is triggered for impacted nodes. As a workaround, users must reconfigure their nodes from cgroupv1 to cgroupv2 before creating a user-defined network. For more information, see Configuring Linux cgroup. (OCPBUGS-49933)
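
    One possible way to move nodes to cgroup v2 is to update the cluster-wide node configuration object, as in the following sketch; verify the exact procedure in Configuring Linux cgroup before applying it, because the change triggers a rolling reboot of nodes:

    $ oc patch nodes.config cluster --type merge -p '{"spec":{"cgroupMode":"v2"}}'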

  • There is a known issue with RHEL 8 worker nodes that use cgroupv1 Linux Control Groups (cgroup). The following is an example of the error message displayed for impacted nodes: UDN are not supported on the node ip-10-0-51-120.us-east-2.compute.internal as it uses cgroup v1. As a workaround, users should migrate worker nodes from cgroupv1 to cgroupv2. (OCPBUGS-49933)

  • The current PTP grandmaster clock (T-GM) implementation has a single National Marine Electronics Association (NMEA) sentence generator sourced from the GNSS without a backup NMEA sentence generator. If NMEA sentences are lost before reaching the e810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error. A proposed fix is to report a FREERUN event when the NMEA string is lost. Until this limitation is addressed, T-GM does not support PTP clock holdover state. (OCPBUGS-19838)

  • There is a known issue with a Layer 2 network topology on clusters running on Google Cloud Platform (GCP). At this time, egress IP addresses used in a Layer 2 network created by a UserDefinedNetwork (UDN) resource use the wrong source IP address. Consequently, UDN is not supported with a Layer 2 topology on GCP. Currently, there is no fix for this issue. (OCPBUGS-48301)

  • There is a known issue with user-defined networks (UDN) that causes OVN-Kubernetes to delete any routing table with an ID equal to or higher than 1000 that it does not manage. Consequently, any Virtual Routing and Forwarding (VRF) instance created outside OVN-Kubernetes is deleted. This issue impacts users who have created user-defined VRFs with a table ID of 1000 or greater. As a workaround, change your VRFs to use a table ID lower than 1000, because table IDs of 1000 and above are reserved for OpenShift Container Platform. (OCPBUGS-50855)
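
    For example, if the VRF was created manually with iproute2, you can recreate it with a table ID below 1000; the interface name and table ID in the following sketch are illustrative. If the VRF is managed by a NodeNetworkConfigurationPolicy, update the route table ID in that policy instead.

    $ ip link add vrf-blue type vrf table 999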

  • If you attempt to log in to an OpenShift Container Platform 4.17 server by using the OpenShift CLI (oc) that you installed as part of OpenShift Container Platform 4.18, the following warning message appears in your terminal:

    Warning: unknown field "metadata"
    You don't have any projects. You can try to create a new project, by running
    
        oc new-project <projectname>

    This warning message is a known issue but does not indicate any functionality issues with OpenShift Container Platform. You can safely ignore the warning message and continue to use OpenShift Container Platform as intended. (OCPBUGS-44833)

  • There is a known issue in OpenShift Container Platform 4.18 which causes the cluster’s masquerade subnet to be set to 169.254.169.0/29 if the ovnkube-node daemon set is deleted. When the masquerade subnet is set to 169.254.169.0/29, UserDefinedNetwork custom resources (CRs) cannot be created.

    • If your masquerade subnet has been configured at Day 2 by making changes to the network.operator CR, it will not be reverted to 169.254.169.0/29.

    • If a cluster has been upgraded from OpenShift Container Platform 4.16, the masquerade subnet remains 169.254.169.0/29 for backward compatibility. The masquerade subnet should be changed to a subnet with more IPs, for example, 169.254.0.0/17, to use the user-defined networks feature.

    This known issue occurs after performing one of the following actions:

    Action | Consequence
    You have restarted the ovnkube-node DaemonSet object. | The masquerade subnet is set to 169.254.169.0/29, which does not support UserDefinedNetwork CRs.
    You have deleted the ovnkube-node DaemonSet object. | The masquerade subnet is set to 169.254.169.0/29, which does not support UserDefinedNetwork CRs. Additionally, ovnkube-node pods crash and remain in a CrashLoopBackOff state.

    As a temporary workaround, you can delete the UserDefinedNetwork CR and then restart all ovnkube-node pods by running the following command:

    $ oc delete pod -l app=ovnkube-node -n openshift-ovn-kubernetes

    The ovnkube-node pods automatically restart, which re-stabilizes the cluster. Then, you can set the masquerade subnet to a larger IP address, for example, 169.254.0.0/17 for IPv4. As a result, NetworkAttachmentDefinition or UserDefinedNetwork CRs can be created.

    Do not delete the ovnkube-node DaemonSet object when deleting ovnkube-node pods. Doing so sets the masquerade subnet to 169.254.169.0/29.
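
    After the pods restart, one possible way to change the IPv4 masquerade subnet through the network.operator CR is sketched below; the field path shown is an assumption to verify against your cluster's Network operator configuration before use:

    $ oc patch network.operator cluster --type=merge \
        -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipv4":{"internalMasqueradeSubnet":"169.254.0.0/17"}}}}}}'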

  • Adding or removing nodes from the cluster can cause ownership contention over the node status. This can cause new nodes to take an extended period of time to appear. As a workaround, you can restart the kube-apiserver-operator pod in the openshift-kube-apiserver-operator namespace to expedite the process. (OCPBUGS-50587)
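
    For example, you can delete the Operator pod and let its deployment re-create it:

    $ oc delete pod --all -n openshift-kube-apiserver-operator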

  • For dual-stack networking clusters that run on RHOSP, when a Virtual IP (VIP) that is attached to a Floating IP (FIP) moves between master nodes, the association between VIP and FIP might stop working if the new master is on a different compute node. This issue occurs because OVN assumes that both IPv4 and IPv6 addresses on a shared Neutron port belong to the same node. (OCPBUGS-50599)

  • When you run Cloud-native Network Functions (CNF) latency tests on an OpenShift Container Platform cluster, the test can sometimes return results greater than the latency threshold for the test; for example, 20 microseconds for cyclictest testing. This results in a test failure. (OCPBUGS-42328)

  • There is a known issue when the grandmaster clock (T-GM) transitions to the Locked state too soon. This happens before the Digital Phase-Locked Loop (DPLL) completes its transition to the Locked-HO-Acquired state, and after the Global Navigation Satellite Systems (GNSS) time source is restored. (OCPBUGS-49826)

  • Due to an issue with Kubernetes, the CPU Manager is unable to return CPU resources from the last pod admitted to a node to the pool of available CPU resources. These resources are allocatable if a subsequent pod is admitted to the node. However, this pod then becomes the last pod, and again, the CPU manager cannot return this pod’s resources to the available pool.

    This issue affects CPU load-balancing features, which depend on the CPU Manager releasing CPUs to the available pool. Consequently, non-guaranteed pods might run with a reduced number of CPUs. As a workaround, schedule a pod with a best-effort CPU Manager policy on the affected node. This pod becomes the last admitted pod, which ensures that the resources are correctly released to the available pool. (OCPBUGS-46428)
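
    One possible sketch of that workaround uses oc run to create a pod with no resource requests or limits, so it is admitted with the BestEffort QoS class; the pod name, image, and node name are illustrative:

    $ oc run cpu-release-helper --image=registry.k8s.io/pause:3.9 --restart=Never \
        --overrides='{"apiVersion":"v1","spec":{"nodeName":"<affected_node>"}}'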

  • When a pod uses the CNI plugin for DHCP address assignment in conjunction with other CNI plugins, the network interface for the pod might be unexpectedly deleted. As a result, when the DHCP lease for the pod expires, the DHCP proxy enters a loop when trying to re-create a new lease, leading to the node becoming unresponsive. There is currently no workaround. (OCPBUGS-45272)

  • The GCP PD CSI driver does not support hyperdisk-balanced volumes with RWX mode. Attempting to provision hyperdisk-balanced volumes with RWX mode using the GCP PD CSI driver produces errors and does not mount the volumes with the desired access mode. (OCPBUGS-44769)

  • Currently, a GCP PD cluster with c3-standard-2, c3-standard-4, n4-standard-2, and n4-standard-4 nodes can erroneously exceed the maximum attachable disk number, which should be 16. This issue may prevent you from successfully creating or attaching volumes to your pods. (OCPBUGS-39258)

Asynchronous errata updates

Security, bug fix, and enhancement updates for OpenShift Container Platform 4.18 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.18 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata.

Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.

Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.

This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.18. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.18.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.

For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.

RHSA-2024:XXXX - OpenShift Container Platform 4.18.0 image release, bug fix, and security update advisory

Issued: TBD

OpenShift Container Platform release 4.18.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2024:XXXX advisory. The RPM packages that are included in the update are provided by the RHSA-2024:XXXX advisory.

Space precluded documenting all of the container images for this release in the advisory.

You can view the container images in this release by running the following command:

$ oc adm release info 4.18.0 --pullspecs

Updating

To update an OpenShift Container Platform 4.17 cluster to this latest release, see Updating a cluster using the CLI.
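
For reference, a CLI-driven update to a specific version can look similar to the following sketch; always check the recommended update paths with oc adm upgrade before starting:

$ oc adm upgrade --to=4.18.0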