Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
OpenShift Container Platform 4.10 (RHSA-2022:0056) is now available. This release uses Kubernetes 1.23 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.10 are included in this topic.
Red Hat did not publicly release OpenShift Container Platform 4.10.0 as the GA version and, instead, is releasing OpenShift Container Platform 4.10.3 as the GA version.
OpenShift Container Platform 4.10 clusters are available at https://console.redhat.com/openshift. The Red Hat OpenShift Cluster Manager application for OpenShift Container Platform allows you to deploy OpenShift clusters to either on-premises or cloud environments.
OpenShift Container Platform 4.10 is supported on Red Hat Enterprise Linux (RHEL) 8.4 through 8.7, as well as on Red Hat Enterprise Linux CoreOS (RHCOS) 4.10.
You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines.
The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.
This release adds improvements related to the following components and concepts.
OpenShift Container Platform 4.10 now includes a getting started guide. Getting Started with OpenShift Container Platform defines basic terminology and provides role-based next steps for developers and administrators.
The tutorials walk new users through the web console and the OpenShift CLI (oc) interfaces. New users can accomplish the following tasks by using the getting started guide:
Create a project
Grant view permissions
Deploy a container image from Quay
Examine and scale an application
Deploy a Python application from GitHub
Connect to a database from Quay
Create a secret
Load and view your application
For more information, see Getting Started with OpenShift Container Platform.
The coreos-installer utility now has iso customize and pxe customize subcommands for more flexible customization when installing RHCOS on bare metal from the live ISO and PXE images.
This includes the ability to customize the installation to fetch Ignition configs from HTTPS servers that use a custom certificate authority or self-signed certificate.
The OpenShift Container Platform 4.10 installer uses new default component types for installations on AWS. The installation program uses the following components by default:
AWS EC2 M6i instances for both control plane and compute nodes, where available
AWS EBS gp3 storage
Previously, when a user installed OpenShift Container Platform on bare-metal installer-provisioned infrastructure, there was no place in the install-config.yaml file to configure custom network interfaces, such as static IPs or VLANs, to communicate with the Ironic server. When configuring a Day 1 installation on bare metal only, users can now use the networkConfig parameter in the install-config.yaml file to customize the network configuration. This configuration is set during the installation and provisioning process and includes advanced options, such as setting static IPs per host.
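For illustration, a minimal sketch of a per-host networkConfig section in the install-config.yaml file follows; the host name, interface name, and IP address are placeholders, and the syntax follows NMState:
platform:
  baremetal:
    hosts:
      - name: openshift-worker-0        # placeholder host entry; BMC details omitted
        networkConfig:                  # NMState-style network configuration
          interfaces:
            - name: enp2s0              # placeholder interface name
              type: ethernet
              state: up
              ipv4:
                enabled: true
                dhcp: false
                address:
                  - ip: 192.168.122.21  # placeholder static IP
                    prefix-length: 24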
OpenShift Container Platform 4.10 is now supported on ARM-based AWS EC2 and bare-metal platforms. Instance availability and installation documentation can be found in Supported installation methods for different platforms.
The following features are supported for OpenShift Container Platform on ARM:
OpenShift Cluster Monitoring
RHEL 8 Application Streams
OVNKube
Elastic Block Store (EBS) for AWS
AWS .NET applications
NFS storage on bare metal
The following Operators are supported for OpenShift Container Platform on ARM:
Node Tuning Operator
Node Feature Discovery Operator
Cluster Samples Operator
Cluster Logging Operator
Elasticsearch Operator
Service Binding Operator
OpenShift Container Platform 4.10 introduces support for installing a cluster on IBM Cloud using installer-provisioned infrastructure in Technology Preview.
The following limitations apply for IBM Cloud using IPI:
Deploying IBM Cloud using IPI on a previously existing network is not supported.
The Cloud Credential Operator (CCO) can use only Manual mode. Neither mint mode nor STS is supported.
IBM Cloud DNS Services is not supported. An instance of IBM Cloud Internet Services is required.
Private or disconnected deployments are not supported.
For more information, see Preparing to install on IBM Cloud.
OpenShift Container Platform 4.10 introduces support for thin-provisioned disks when you install a cluster on VMware vSphere by using installer-provisioned infrastructure. You can provision disks as thin, thick, or eagerZeroedThick.
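For example, the disk provisioning mode can be set cluster-wide in the install-config.yaml file; the following sketch shows only the relevant vSphere lines:
platform:
  vsphere:
    # ...other required vSphere parameters omitted...
    diskType: thin   # valid values: thin, thick, eagerZeroedThick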
For more information about disk provisioning modes in VMware vSphere, see Installation configuration parameters.
Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMIs) are now available for AWS GovCloud regions. The availability of these AMIs improves the installation process because you are no longer required to upload a custom RHCOS AMI to deploy a cluster.
For more information, see Installing a cluster on AWS into a government region.
Beginning with OpenShift Container Platform 4.10, if you configure a cluster with an existing IAM role, the installation program no longer adds the shared tag to the role when deploying the cluster. This enhancement improves the installation process for organizations that want to use a custom IAM role but whose security policies prevent the use of the shared tag.
To install a CSI driver on a cluster running on vSphere, you must have the following components installed:
Virtual hardware version 15 or later
vSphere version 6.7 Update 3 or later
VMware ESXi version 6.7 Update 3 or later
Components with versions earlier than those listed above are deprecated but still fully supported. However, OpenShift Container Platform 4.11 will require vSphere virtual hardware version 15 or later.
If your cluster is deployed on vSphere and the preceding components are at versions earlier than those listed above, upgrading from OpenShift Container Platform 4.9 to 4.10 is supported, but no vSphere CSI driver is installed. Bug fixes and other updates to 4.10 are still supported; however, upgrading to 4.11 will be unavailable.
OpenShift Container Platform 4.10 introduces support for installing a cluster on Alibaba Cloud using installer-provisioned infrastructure in Technology Preview. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.
OpenShift Container Platform 4.10 introduces support for installing a cluster on Azure Stack Hub using installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.
Beginning with OpenShift Container Platform 4.10.14, you can deploy control plane and compute nodes with the
For more information, see Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure.
OpenShift Container Platform 4.10 adds support for consuming conditional update paths provided by the OpenShift Update Service. Conditional update paths convey identified risks and the conditions under which those risks apply to clusters. The Administrator perspective in the web console offers only the recommended upgrade paths for which the cluster does not match known risks. However, you can use the OpenShift CLI (oc) 4.10 or later to display additional upgrade paths for OpenShift Container Platform 4.10 clusters. Associated risk information, including supporting documentation references, is displayed with the paths. The administrator can review the referenced materials and choose to perform the supported, but no longer recommended, upgrade.
For more information, see Conditional updates and Updating along a conditional upgrade path.
This release introduces the oc-mirror OpenShift CLI (oc) plugin as a Technology Preview. You can use the oc-mirror plugin to mirror images in a disconnected environment.
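Mirroring with the plugin is driven by an ImageSetConfiguration file. The following is a rough sketch; the apiVersion and exact schema for the Technology Preview release are assumptions and should be verified against your oc-mirror version:
apiVersion: mirror.openshift.io/v1alpha2   # assumed; verify for your oc-mirror release
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./mirror-metadata                # placeholder local metadata path
mirror:
  platform:
    channels:
      - name: stable-4.10                  # mirror the OpenShift 4.10 release images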
For more information, see Mirroring images for a disconnected installation using the oc-mirror plugin.
You can now install a cluster on Red Hat OpenStack Platform (RHOSP) for which compute machines run on Open vSwitch with the Data Plane Development Kit (OVS-DPDK) networks. Workloads that run on these machines can benefit from the performance and latency improvements of OVS-DPDK.
For more information, see Installing a cluster on RHOSP that supports DPDK-connected compute machines.
With this update, you can specify the name of a service binding connector in the Topology view while making a binding connection.
With this update, the workflow for creating pipelines has been enhanced:
You can now choose a user-defined pipeline from a drop-down list while importing your application from the Import from Git pipeline workflow.
Default webhooks are added for the pipelines that are created by using the Import from Git workflow, and the URL is visible in the side panel of the selected resources in the Topology view.
You can now opt out of the default Tekton Hub integration by setting the enable-devconsole-integration parameter to false in the TektonConfig custom resource.
Example TektonConfig CR to opt out of Tekton Hub integration:
...
hub:
params:
- name: enable-devconsole-integration
value: 'false'
...
The pipeline builder contains only the Tekton Hub tasks that are supported by the cluster; all unsupported tasks are excluded from the list.
With this update, the application export workflow now displays the export logs dialog or alert while the export is in progress. You can use the dialog to cancel or restart the exporting process.
With this update, you can add your new Helm Chart Repository to the Developer Catalog by creating a custom resource. Refer to the quick start guides in the Developer perspective to add a new ProjectHelmChartRepository.
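A minimal sketch of such a custom resource follows; the repository name, namespace, and URL are placeholders:
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: my-chart-repo                            # placeholder name
  namespace: my-project                          # namespace whose Developer Catalog gains the repository
spec:
  connectionConfig:
    url: https://my.chart-repo.example/stable    # placeholder repository URL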
With this update, you can now access community devfiles samples using the Developer Catalog.
Starting with OpenShift Container Platform 4.10, the ability to create OpenShift console dynamic plugins is now available as a Technology Preview feature. You can use this feature to customize your interface at runtime in many ways, including:
Adding custom pages
Adding perspectives and updating navigation items
Adding tabs and actions to resource pages
For more information about the dynamic plugin, see Adding a dynamic plugin to the OpenShift Container Platform web console.
Starting with OpenShift Container Platform 4.10, the multicluster console is now available as a Technology Preview feature. By enabling this feature, you can connect to remote clusters' API servers from a single OpenShift Container Platform console. You must have Red Hat Advanced Cluster Management (ACM) or the multicluster engine (MCE) Operator installed to enable this feature.
With this update, you can now view debug terminals in the web console. When a pod has a container that is in a CrashLoopBackOff state, a debug pod can be launched. A terminal interface is displayed and can be used to debug the crash-looping container.
This feature can be accessed from the pod status pop-up window, which opens when you click the status of a pod and provides links to debug terminals for each crash-looping container within that pod.
You can also access this feature on the Logs tab of the pod details page; a debug terminal link is displayed above the log window when a crash-looping container is selected.
Additionally, the pod status pop-up window now provides links to the Logs and Events tabs of the pod details page.
With this update, you can customize workload notifications on the User Preferences page. User workload notifications under the Notifications tab allow you to hide user workload notifications that appear on the Cluster Overview page or in your notification drawer.
With this update, non-admin users can now view their usage of the AppliedClusterResourceQuota on the Project Overview, ResourceQuotas, and API Explorer pages to determine the cluster-scoped quota available for use. Additionally, AppliedClusterResourceQuota details can now be found on the Search page.
OpenShift Container Platform now enables you to view support level information about your cluster on the Overview → Details card, in the Cluster Settings page, and in the About modal, and it adds a notification to your notification drawer when your cluster is unsupported. From the Overview page, you can manage subscription settings under the Service Level Agreement (SLA).
With this release, IBM Z and LinuxONE are now compatible with OpenShift Container Platform 4.10. The installation can be performed with z/VM or RHEL KVM. For installation instructions, see the following documentation:
The following new features are supported on IBM Z and LinuxONE with OpenShift Container Platform 4.10:
Horizontal pod autoscaling
The following Multus CNI plugins are supported:
Bridge
Host-device
IPAM
IPVLAN
Compliance Operator 0.1.49
NMState Operator
OVN-Kubernetes IPsec encryption
Vertical Pod Autoscaler Operator
The following features are also supported on IBM Z and LinuxONE:
Currently, the following Operators are supported:
Cluster Logging Operator
Compliance Operator 0.1.49
Local Storage Operator
NFD Operator
NMState Operator
OpenShift Elasticsearch Operator
Service Binding Operator
Vertical Pod Autoscaler Operator
Encrypting data stored in etcd
Helm
Multipathing
Persistent storage using iSCSI
Persistent storage using local volumes (Local Storage Operator)
Persistent storage using hostPath
Persistent storage using Fibre Channel
Persistent storage using Raw Block
OVN-Kubernetes
Support for multiple network interfaces
Three-node cluster support
z/VM Emulated FBA devices on SCSI disks
4K FCP block device
These features are available only for OpenShift Container Platform on IBM Z and LinuxONE for 4.10:
HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD storage
The following restrictions impact OpenShift Container Platform on IBM Z and LinuxONE:
The following OpenShift Container Platform Technology Preview features are unsupported:
Precision Time Protocol (PTP) hardware
The following OpenShift Container Platform features are unsupported:
Automatic repair of damaged machines with machine health checking
CodeReady Containers (CRC)
Controlling overcommit and managing container density on nodes
CSI volume cloning
CSI volume snapshots
FIPS cryptography
NVMe
OpenShift Metering
OpenShift Virtualization
Tang mode disk encryption during OpenShift Container Platform deployment
Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS)
Persistent shared storage must be provisioned by using either OpenShift Data Foundation or other supported storage protocols
Persistent non-shared storage must be provisioned using local storage, like iSCSI, FC, or using LSO with DASD, FCP, or EDEV/FBA
With this release, IBM Power is now compatible with OpenShift Container Platform 4.10. For installation instructions, see the following documentation:
The following new features are supported on IBM Power with OpenShift Container Platform 4.10:
Horizontal pod autoscaling
The following Multus CNI plugins are supported:
Bridge
Host-device
IPAM
IPVLAN
Compliance Operator 0.1.49
NMState Operator
OVN-Kubernetes IPsec encryption
Vertical Pod Autoscaler Operator
The following features are also supported on IBM Power:
Currently, the following Operators are supported:
Cluster Logging Operator
Compliance Operator 0.1.49
Local Storage Operator
NFD Operator
NMState Operator
OpenShift Elasticsearch Operator
SR-IOV Network Operator
Service Binding Operator
Vertical Pod Autoscaler Operator
Encrypting data stored in etcd
Helm
Multipathing
Multus SR-IOV
NVMe
OVN-Kubernetes
Persistent storage using iSCSI
Persistent storage using local volumes (Local Storage Operator)
Persistent storage using hostPath
Persistent storage using Fibre Channel
Persistent storage using Raw Block
Support for multiple network interfaces
Support for Power10
Three-node cluster support
4K Disk Support
The following restrictions impact OpenShift Container Platform on IBM Power:
The following OpenShift Container Platform Technology Preview features are unsupported:
Precision Time Protocol (PTP) hardware
The following OpenShift Container Platform features are unsupported:
Automatic repair of damaged machines with machine health checking
CodeReady Containers (CRC)
Controlling overcommit and managing container density on nodes
FIPS cryptography
OpenShift Metering
OpenShift Virtualization
Tang mode disk encryption during OpenShift Container Platform deployment
Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS)
Persistent storage must be of the Filesystem type that uses local volumes, OpenShift Data Foundation, Network File System (NFS), or Container Storage Interface (CSI)
Information regarding new features, enhancements, and bug fixes for security and compliance components can be found in the Compliance Operator and File Integrity Operator release notes.
For more information about security and compliance, see OpenShift Container Platform security and compliance.
When you create a service that uses multiple IP address families, you must explicitly specify ipFamilyPolicy: PreferDualStack or ipFamilyPolicy: RequireDualStack in your Service object definition. This change breaks backward compatibility with earlier releases of OpenShift Container Platform.
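A minimal Service sketch with the field set explicitly; the name, selector, and port are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80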
For more information, see BZ#2045576.
After cluster installation, if you are using the OpenShift SDN cluster network provider or the OVN-Kubernetes cluster network provider, you can change your hardware MTU and your cluster network MTU values. Changing the MTU across the cluster is disruptive and requires that each node is rebooted several times. For more information, see Changing the cluster network MTU.
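The migration is initiated by updating the cluster Network operator configuration. A hedged sketch of the relevant spec, where the MTU values are placeholders for your current and target MTUs:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  migration:
    mtu:
      network:
        from: 1400   # current cluster network MTU (placeholder)
        to: 9000     # target cluster network MTU (placeholder)
      machine:
        to: 9100     # target hardware MTU (placeholder)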
The OVN-Kubernetes CNI network provider adds support for configuring how egress traffic is sent to the node gateway. By default, egress traffic is processed in OVN to exit the cluster, and traffic is not affected by specialized routes in the kernel routing table. This enhancement adds a gatewayConfig.routingViaHost field. The field can be set at runtime as a post-installation activity; when it is set to true, egress traffic is sent from pods to the host networking stack. This update benefits highly specialized installations and applications that rely on manually configured routes in the kernel routing table.
This enhancement interacts with the Open vSwitch hardware offloading feature. When the gatewayConfig.routingViaHost field is set to true, you do not receive the performance benefits of offloading, because egress traffic is processed by the host networking stack.
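The field lives in the cluster Network operator configuration. A minimal sketch:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true   # send pod egress traffic through the host networking stack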
For configuration information, see Configuration for the OVN-Kubernetes CNI cluster network provider.
The following metrics are now available for clusters. The metric names that start with sdn_controller are unique to the OpenShift SDN CNI network provider. The metric names that start with ovn are unique to the OVN-Kubernetes CNI network provider:
network_attachment_definition_instances{networks="egress-router"}
openshift_unidle_events_total
ovn_controller_bfd_run
ovn_controller_ct_zone_commit
ovn_controller_flow_generation
ovn_controller_flow_installation
ovn_controller_if_status_mgr
ovn_controller_if_status_mgr_run
ovn_controller_if_status_mgr_update
ovn_controller_integration_bridge_openflow_total
ovn_controller_ofctrl_seqno_run
ovn_controller_patch_run
ovn_controller_pinctrl_run
ovnkube_master_ipsec_enabled
ovnkube_master_num_egress_firewall_rules
ovnkube_master_num_egress_firewalls
ovnkube_master_num_egress_ips
ovnkube_master_pod_first_seen_lsp_created_duration_seconds
ovnkube_master_pod_lsp_created_port_binding_duration_seconds
ovnkube_master_pod_port_binding_chassis_port_binding_up_duration_seconds
ovnkube_master_pod_port_binding_port_binding_chassis_duration_seconds
sdn_controller_num_egress_firewall_rules
sdn_controller_num_egress_firewalls
sdn_controller_num_egress_ips
The ovnkube_master_resource_update_total metric is removed for the 4.10 release.
Previously, changes were not retained when switching between YAML view and Form view on the web console. Additionally, after switching to YAML view, you could not return to Form view. With this update, you can now easily switch between YAML view and Form view on the web console without losing changes.
When using the network policy functionality in the OpenShift Container Platform web console, the pods affected by a policy are listed. The list changes as the combined namespace and pod selectors in these policy sections are modified:
Peer definition
Rule definition
Ingress
Egress
The list of impacted pods includes only those pods accessible by the user.
The oc adm must-gather command is enhanced to simplify collecting network packet captures. Previously, oc adm must-gather could start only a single debug pod. With this enhancement, you can start a debug pod on multiple nodes at the same time, which means you can run packet captures on multiple nodes simultaneously to simplify troubleshooting network communication issues. A new --node-selector argument provides a way to identify the nodes that you are collecting packet captures for.
For more information, see Network trace methods and Collecting a host network trace.
Bonding at the pod level is vital to enable workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface from multiple single root I/O virtualization (SR-IOV) virtual function interfaces in kernel mode. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver.
Scenarios where pod-level bonding is required include creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host can be used to achieve high availability at the pod level.
The current functionality of Bond CNI is available only in active-backup mode. For further details, see BZ#2037214.
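As a sketch, a bond interface can be declared as a NetworkAttachmentDefinition that references two SR-IOV interfaces already attached to the pod; the names, namespace, and subnet are placeholders:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bond-net1
  namespace: demo
spec:
  config: '{
    "type": "bond",
    "cniVersion": "0.3.1",
    "name": "bond-net1",
    "mode": "active-backup",
    "failOverMac": 1,
    "linksInContainer": true,
    "miimon": "100",
    "links": [ {"name": "net1"}, {"name": "net2"} ],
    "ipam": { "type": "host-local", "subnet": "10.56.217.0/24" }
  }'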
As a cluster administrator, you can associate one or more egress IP addresses with a namespace. An egress IP address ensures that a consistent source IP address is associated with traffic from a particular namespace that is leaving the cluster.
For the OVN-Kubernetes and OpenShift SDN cluster network providers, you can configure an egress IP address on the following public cloud providers:
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
To learn more, refer to the respective documentation for your cluster network provider.
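As one illustration, with the OVN-Kubernetes provider the assignment is expressed as an EgressIP object; the IP address and namespace label below are placeholders:
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-prod
spec:
  egressIPs:
    - 192.168.126.10        # placeholder egress IP address
  namespaceSelector:
    matchLabels:
      env: production       # namespaces labeled env=production use this egress IP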
If you use the OpenShift SDN cluster network provider, you can now use egress rules in network policy with ipBlock and ipBlock.except. You define egress policies in the egress array of the NetworkPolicy object.
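A minimal sketch of an egress rule that allows traffic to a subnet while excluding one of its CIDR blocks; the addresses are placeholders:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-with-exception
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16
            except:
              - 10.0.9.0/24  # exclude this block within the allowed CIDR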
For more information, refer to About network policy.
This enhancement adds the ability to configure global HTTP traffic compression on the HAProxy Ingress Controller for specific MIME types. This update enables gzip compression of your ingress workloads when there are large amounts of compressible routed traffic.
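A sketch of the relevant IngressController fields; the MIME type list is illustrative:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpCompression:
    mimeTypes:
      - "text/html"
      - "application/json"   # compress responses with only these MIME types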
For more information, see Using router compression.
A cluster administrator can now configure DNS servers to allow DNS name resolution through the configured servers for the default domain. A DNS forwarding configuration can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.
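A hedged sketch of a DNS operator configuration that resolves the default domain through both the /etc/resolv.conf servers and an upstream server; the address is a placeholder:
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  upstreamResolvers:
    upstreams:
      - type: SystemResolvConf   # use the default servers from /etc/resolv.conf
      - type: Network
        address: 203.0.113.10    # placeholder upstream DNS server
        port: 53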
For more information, see Using DNS forwarding.
This enhancement adds the ability to manually change the log level for an Operator individually or for the cluster as a whole.
For more information, see Setting the CoreDNS log level.
You can now set the maximum length of the syslog message in the Ingress Controller to any value between 480 and 4096 bytes.
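A sketch of the relevant logging fields on the IngressController; the syslog endpoint is a placeholder:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 203.0.113.20   # placeholder syslog receiver
          port: 10514
          maxLength: 4096         # any value between 480 and 4096 bytes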
For more information, see Ingress Controller configuration parameters.
You can now set the CoreDNS forwarding policy through the DNS Operator. The default value is Random, and you can also set the value to RoundRobin or Sequential.
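A minimal sketch setting the policy on the upstream resolvers:
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  upstreamResolvers:
    policy: RoundRobin           # default is Random; Sequential is also valid
    upstreams:
      - type: SystemResolvConf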
For more information, see Using DNS forwarding.
You can now configure Open vSwitch hardware offloading to increase data processing performance on compatible bare metal nodes. Hardware offloading is a method for processing data that removes data processing tasks from the CPU and transfers them to the dedicated data processing unit of a network interface controller. Benefits of this feature include faster data processing, reduced CPU workloads, and lower computing costs.
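Hardware offloading is enabled per node pool through the SR-IOV Network Operator. A hedged sketch of an SriovNetworkNodePolicy, where the NIC name, VF count, and resource name are placeholders:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-hwoffload
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice
  eSwitchMode: switchdev          # switch the NIC eSwitch to switchdev mode for offloading
  nicSelector:
    pfNames:
      - ens1f0                    # placeholder physical function name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8                       # placeholder virtual function count
  resourceName: hwoffload         # placeholder resource name
  isRdma: true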
For more information, see Configuring hardware offloading.
You can now create DNS records by using the Red Hat External DNS Operator on cloud providers such as AWS, Azure, and GCP. You can install the External DNS Operator by using OperatorHub. You can use parameters to configure ExternalDNS as required.
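A rough sketch of an ExternalDNS custom resource; the apiVersion and field layout are assumptions for the operator's early releases, and the name and zone ID are placeholders:
apiVersion: externaldns.olm.openshift.io/v1alpha1   # assumed API version; verify for your operator release
kind: ExternalDNS
metadata:
  name: sample-aws
spec:
  provider:
    type: AWS            # Azure and GCP are also supported providers
  source:
    type: Service        # create records for Service resources
  zones:
    - Z05342621FOO4BAR   # placeholder hosted zone ID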
For more information, see Understanding the External DNS Operator.