Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
Learn more about OpenShift Virtualization architecture and deployments.
Prepare your cluster for OpenShift Virtualization.
OpenShift Virtualization 4.13 is supported for use on OpenShift Container Platform 4.13 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
Updating to OpenShift Virtualization 4.13 from OpenShift Virtualization 4.12.2 is not supported.
To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
OpenShift Virtualization is FIPS ready. However, OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. Red Hat expects, though cannot commit to a specific timeframe, to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even-numbered minor releases of RHEL 9.x. Updates will be available in Compliance Activities and Government Standards.
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9.
Intel and AMD CPUs.
OpenShift Virtualization now adheres to the restricted Kubernetes pod security standards profile. To learn more, see the OpenShift Virtualization security policies documentation.
OpenShift Virtualization is now based on Red Hat Enterprise Linux (RHEL) 9.
There is a new RHEL 9 machine type for VMs: machineType: pc-q35-rhel9.2.0.
All VM templates that are included with OpenShift Virtualization now use this machine type by default.
For more information, see OpenShift Virtualization on RHEL 9.
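As an illustration only (the VM name and namespace are placeholders), you can check which machine type an existing VM uses by querying the field path in the VirtualMachine spec:

```shell
# Inspect the machine type of a VM; VMs created from the updated templates
# are expected to report pc-q35-rhel9.2.0.
oc get vm <vm_name> -n <namespace> \
  -o jsonpath='{.spec.template.spec.domain.machine.type}'
```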
You can now obtain the VirtualMachine, ConfigMap, and Secret manifests from the export server after you export a VM or snapshot. For more information, see accessing exported VM manifests.
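For example, assuming a VirtualMachineExport object named example-export already exists, a command along the following lines retrieves the manifests; the flags are based on the virtctl vmexport subcommand and should be verified against your virtctl version:

```shell
# Download the VirtualMachine, ConfigMap, and Secret manifests from the
# export server (object name and output path are placeholders).
virtctl vmexport download example-export --manifest --include-secret \
  --output=manifest.yaml
```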
The "Logging, events, and monitoring" documentation is now called Support. The monitoring tools documentation has been moved to Monitoring.
You can view and filter aggregated OpenShift Virtualization logs in the web console by using the LokiStack.
Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
You can now send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network when you use the OVN-Kubernetes CNI plugin.
OpenShift Virtualization storage resources now migrate automatically to the beta API versions. Alpha API versions are no longer supported.
On the VirtualMachine details page, the Scheduling, Environment, Network interfaces, Disks, and Scripts tabs are displayed on the new Configuration tab.
You can now paste a string from your client’s clipboard into the guest when using the VNC console.
The VirtualMachine details → Details tab now provides a new SSH service type SSH over LoadBalancer to expose the SSH service over a load balancer.
The option to make a hot-plug volume a persistent volume is added to the Disks tab.
There is now a VirtualMachine details → Diagnostics tab where you can view the status conditions of VMs and the snapshot status of volumes.
You can now enable headless mode for high performance VMs in the web console.
Deprecated features are included and supported in the current release. However, they will be removed in a future release and are not recommended for new deployments.
Support for virtctl command line tool installation for Red Hat Enterprise Linux (RHEL) 7 and RHEL 9 by an RPM is deprecated and is planned to be removed in a future release.
Removed features are not supported in the current release.
Red Hat Enterprise Linux 6 is no longer supported on OpenShift Virtualization.
Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.13, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a previous version of OpenShift Virtualization.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
You can now use Prometheus to monitor the following metrics:
kubevirt_vmi_cpu_system_usage_seconds returns the physical system CPU time consumed by the hypervisor.
kubevirt_vmi_cpu_user_usage_seconds returns the physical user CPU time consumed by the hypervisor.
kubevirt_vmi_cpu_usage_seconds returns the total CPU time used in seconds by calculating the sum of the vCPU and the hypervisor usage.
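As a sketch, the counters above can be turned into utilization rates with PromQL queries such as the following; adjust label selectors to your environment:

```promql
# Approximate CPU cores consumed per VMI over the last 5 minutes
rate(kubevirt_vmi_cpu_usage_seconds[5m])

# Hypervisor overhead only: system plus user CPU time
rate(kubevirt_vmi_cpu_system_usage_seconds[5m]) + rate(kubevirt_vmi_cpu_user_usage_seconds[5m])
```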
You can now run a checkup to verify if your OpenShift Container Platform cluster node can run a virtual machine with a Data Plane Development Kit (DPDK) workload with zero packet loss.
You can configure your virtual machine to run DPDK workloads to achieve lower latency and higher throughput for faster packet processing in the user space.
You can now access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN).
You can now create OpenShift Container Platform clusters with worker nodes that are hosted by OpenShift Virtualization VMs. For more information, see Managing hosted control plane clusters on OpenShift Virtualization in the Red Hat Advanced Cluster Management (RHACM) documentation.
You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.13 does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the BitLocker recovery guide.
The virtual machine snapshot restore operation no longer hangs indefinitely due to some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI). (BZ#2070366)
With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension (RFC 7627) is mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected.
Legacy OpenSSL clients that do not support EMS or TLS 1.3 can no longer connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. In practice, this means that these clients cannot connect to servers on RHEL 6, RHEL 7, and non-RHEL legacy operating systems, because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2.
As a workaround, upgrade legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode.
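As a hedged sketch, assuming the HyperConverged custom resource exposes a tlsSecurityProfile field as other OpenShift components do, the Modern profile can be set with a patch along these lines:

```shell
# Configure the Modern TLS security profile (TLS 1.3 only) on the
# HyperConverged CR; field name assumed from the OpenShift tlsSecurityProfile API.
oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type=merge -p '{"spec": {"tlsSecurityProfile": {"type": "Modern", "modern": {}}}}'
```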
If you enabled the DisableMDEVConfiguration feature gate by editing the HyperConverged custom resource in OpenShift Virtualization 4.12.4, you must re-enable the feature gate after you upgrade to version 4.13.0 or 4.13.1 by creating a JSON Patch annotation (BZ#2184439):
$ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged \
kubevirt.kubevirt.io/jsonpatch='[{"op": "add","path": "/spec/configuration/developerConfiguration/featureGates/-", \
"value": "DisableMDEVConfiguration"}]'
OpenShift Virtualization versions 4.12.2 and earlier are not compatible with OpenShift Container Platform 4.13. Updating OpenShift Container Platform to 4.13 is blocked by design in OpenShift Virtualization 4.12.1 and 4.12.2, but this restriction could not be added to OpenShift Virtualization 4.12.0. If you have OpenShift Virtualization 4.12.0, ensure that you do not update OpenShift Container Platform to 4.13.
Your cluster becomes unsupported if you run incompatible versions of OpenShift Container Platform and OpenShift Virtualization.
Enabling descheduler evictions on a virtual machine is a Technology Preview feature and might cause failed migrations and unstable scheduling.
You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)
As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. (BZ#2055595)
As a workaround, you can restart the ceph-mgr to purge the VM clones.
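One way to restart ceph-mgr, assuming a default Red Hat OpenShift Data Foundation deployment where the Rook operator manages the Ceph pods in the openshift-storage namespace, is to delete its pod and let the operator recreate it:

```shell
# Delete the ceph-mgr pod; the Rook operator recreates it automatically.
# The label selector and namespace are assumptions based on a default ODF install.
oc delete pod -l app=rook-ceph-mgr -n openshift-storage
```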
If you stop a node on a cluster and then use the Node Health Check Operator to bring the node back up, connectivity to Multus might be lost. (OCPBUGS-8398)
The TopoLVM provisioner name string has changed in OpenShift Virtualization 4.12. As a result, the automatic import of operating system images might fail with the following error message (BZ#2158521):
DataVolume.storage spec is missing accessMode and volumeMode, cannot get access mode from StorageProfile.
As a workaround:
Update the claimPropertySets array of the storage profile:
$ oc patch storageprofile <storage_profile> --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Block"}, \
{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
Delete the affected data volumes in the openshift-virtualization-os-images namespace. They are recreated with the access mode and volume mode from the updated storage profile.
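For example (the data volume name is a placeholder), the affected data volumes can be listed and then deleted as follows:

```shell
# List the operating system image data volumes, then delete the affected ones.
oc get dv -n openshift-virtualization-os-images
oc delete dv <datavolume_name> -n openshift-virtualization-os-images
```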
When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer, the restored PVCs remain in the Pending state and the restore operation does not progress.
As a workaround, start the restored VM, stop it, and then start it again. The VM will be scheduled, the PVCs will be in the Bound state, and the restore operation will complete. (BZ#2149654)
VMs created from common templates on a Single Node OpenShift (SNO) cluster display a VMCannotBeEvicted alert because the template's default eviction strategy is LiveMigrate. You can ignore this alert or remove the alert by updating the VM's eviction strategy. (BZ#2092412)
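For example, one way to update the eviction strategy is a patch like the following; the VM name is a placeholder, and setting the strategy to None is an assumption appropriate for single-node clusters, where live migration is not possible:

```shell
# Set the eviction strategy to None so the VMCannotBeEvicted alert clears.
oc patch vm <vm_name> -n <namespace> --type=merge \
  -p '{"spec": {"template": {"spec": {"evictionStrategy": "None"}}}}'
```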
Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-22036)
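A minimal sketch for removing the labels, assuming jq is available on the client; a trailing hyphen in oc label removes a label:

```shell
# Remove all feature.node.kubevirt.io labels from every node.
for node in $(oc get nodes -o name); do
  for label in $(oc get "$node" -o json | \
      jq -r '.metadata.labels | keys[] | select(startswith("feature.node.kubevirt.io"))'); do
    oc label "$node" "${label}-"
  done
done
```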
Windows 11 virtual machines do not boot on clusters running in FIPS mode. Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. (BZ#2089301)
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
As a workaround, silence the alerts.
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or do not have the appropriate TSC frequency. (BZ#2151169)
If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
VMs that use logical volume management (LVM) with block storage devices require additional configuration to avoid conflicts with Red Hat Enterprise Linux CoreOS (RHCOS) hosts.
As a workaround, you can create a VM, provision an LVM, and restart the VM. This creates an empty system.lvmdevices file. (OCPBUGS-5223)