Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
Learn more about OpenShift Virtualization architecture and deployments.
Prepare your cluster for OpenShift Virtualization.
OpenShift Virtualization 4.11 is supported for use on OpenShift Container Platform 4.11 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.
You can now deploy OpenShift Virtualization on a three-node cluster with zero compute nodes.
Virtual machines run as unprivileged workloads in session mode by default. This feature improves cluster security by mitigating escalation-of-privilege attacks.
Red Hat Enterprise Linux (RHEL) 9 is now supported as a guest operating system.
The link for installing the Migration Toolkit for Virtualization (MTV) Operator in the OpenShift Container Platform web console has been moved. It is now located in the Related operators section of the Getting started resources card on the Virtualization → Overview page.
You can configure the verbosity level of the virtLauncher, virtHandler, virtController, virtAPI, and virtOperator pod logs to debug specific components by editing the HyperConverged custom resource (CR).
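For example, a minimal sketch that raises the verbosity of the virtHandler and virtLauncher logs. The logVerbosityConfig field path is an assumption; verify it against the HyperConverged CRD schema on your cluster, for example with oc explain hyperconverged.spec, before applying it:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  logVerbosityConfig:      # assumed field path; confirm against your installed CRD
    kubevirt:
      virtHandler: 6       # higher values produce more detailed logs
      virtLauncher: 6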
Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
New metrics are available that provide information about virtual machine snapshots.
You can reduce the number of logs in disconnected environments, or reduce resource usage, by disabling the automatic imports and updates for a boot source.
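A minimal sketch of one way to do this by patching the HyperConverged custom resource; the enableCommonBootImageImport feature gate name is an assumption, so confirm it in the documentation for your release before applying the patch:
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json \
  -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'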
You can set the boot mode of templates and virtual machines to BIOS, UEFI, or UEFI (secure) by using the web console.
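The same boot mode settings can also be expressed in a VirtualMachine manifest. A minimal sketch for UEFI (secure), assuming the standard KubeVirt firmware API; enabling secure boot also requires SMM:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
# ...
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: true   # set to false for UEFI without secure boot
        features:
          smm:
            enabled: true        # SMM is required when secure boot is enabled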
You can now enable and disable the descheduler from the web console on the Scheduling tab of the VirtualMachine details page.
You can access virtual machines by navigating to Virtualization → VirtualMachines in the side menu. Each virtual machine now has an updated Overview tab that provides information about the virtual machine configuration, alerts, snapshots, network interfaces, disks, usage data, and hardware devices.
The Create a Virtual Machine wizard in the web console is now replaced by the Catalog page, which lists available templates that you can use to create a virtual machine. You can use a template with an available boot source to quickly create a virtual machine or you can customize a template to create a virtual machine.
If your Windows virtual machine has a vGPU attached, you can now switch between the default display and the vGPU display by using the web console.
You can access virtual machine templates by navigating to Virtualization → Templates in the side menu. The updated VirtualMachine Templates page now provides useful information about each template, including workload profile, boot source, and CPU and memory configuration.
The Create Template wizard has been removed from the VirtualMachine Templates page. You now create a virtual machine template by editing an example YAML file.
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in OpenShift Virtualization 4.11, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to create a storage class for the CSI driver as part of your migration strategy.
Removed features are not supported in the current release.
OpenShift Virtualization 4.11 removes support for nmstate, including the following objects:
NodeNetworkState
NodeNetworkConfigurationPolicy
NodeNetworkConfigurationEnactment
To preserve and support your existing nmstate configuration, install the Kubernetes NMState Operator before updating to OpenShift Virtualization 4.11. You can install it from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).
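A minimal sketch of a CLI-based installation, assuming the openshift-nmstate namespace and an OperatorGroup for it already exist; the package and channel names are assumptions, so confirm them against the OperatorHub entry:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: stable                      # assumed channel name
  name: kubernetes-nmstate-operator    # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace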
The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).
You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later releases:
Move all nodes out of maintenance mode.
Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR (see the example CR sketch after this list).
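A minimal sketch of the replacement CR, assuming the nodemaintenance.medik8s.io/v1beta1 API served by the standalone NMO; the node name and reason are placeholders:
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-example
spec:
  nodeName: <node_name>                 # node to place into maintenance mode
  reason: "Replacing the legacy kubevirt.io node maintenance CR"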
You can no longer mark virtual machine templates as favorites.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For details about the scope of support for these features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.
You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.11 does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the BitLocker recovery guide.
You can now deploy OpenShift Virtualization on AWS bare metal nodes.
OpenShift Virtualization has critical alerts that inform you when a problem occurs that requires immediate attention. Now, each alert has a corresponding description of the problem, a reason for why the alert is occurring, a troubleshooting process to diagnose the source of the problem, and steps for resolving the alert.
Administrators can now declaratively create and expose mediated devices such as virtual graphics processing units (vGPUs) by editing the HyperConverged CR. Virtual machine owners can then assign these devices to VMs.
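A minimal sketch of the relevant HyperConverged stanzas; the field names reflect the 4.11 API, and the NVIDIA mdev type and resource names are illustrative placeholders rather than recommendations:
spec:
  mediatedDevicesConfiguration:
    mediatedDevicesTypes:
    - nvidia-231                        # example mdev type exposed by the installed vendor driver
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: "GRID T4-2Q"    # example mdev name reported by the driver
      resourceName: nvidia.com/GRID_T4-2Q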
You can transfer the static IP configuration of the NIC attached to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
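A minimal sketch of such a policy; the interface names and address are placeholders:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.0.2.10                # static IP previously assigned to eth1
          prefix-length: 24
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1                    # NIC attached to the bridge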
You can now install OpenShift Virtualization on IBM Cloud bare-metal servers. Bare-metal servers offered by other cloud providers are not supported.
You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles.
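A minimal sketch of a ScanSettingBinding that runs both profiles with the Compliance Operator's default scan setting; verify the profile and ScanSetting names on your cluster:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-moderate
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-moderate-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default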
OpenShift Virtualization now includes a diagnostic framework to run predefined checkups that can be used for cluster maintenance and troubleshooting. You can run a predefined checkup to check network connectivity and latency for virtual machines on a secondary network.
You can create live migration policies with specific parameters, such as bandwidth usage, maximum number of parallel migrations, and timeout, and apply the policies to groups of virtual machines by using virtual machine and namespace labels.
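A minimal sketch of such a policy, assuming the migrations.kubevirt.io/v1alpha1 API; the parameter values and labels are placeholders:
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: example-migration-policy
spec:
  allowAutoConverge: true
  bandwidthPerMigration: 217Ki          # bandwidth cap per migration
  completionTimeoutPerGiB: 23           # timeout in seconds per GiB of VM memory
  allowPostCopy: false
  selectors:
    namespaceSelector:
      app-tier: production              # applies to VMs in namespaces with this label
    virtualMachineInstanceSelector:
      migration-tier: gold              # applies to VMs with this label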
Previously, on a large cluster, the OpenShift Virtualization MAC pool manager would take too much time to boot and OpenShift Virtualization might not become ready. With this update, the pool initialization and startup latency is reduced. As a result, VMs can now be successfully defined. (BZ#2035344)
If a Windows VM crashes or hangs during shutdown, you can now manually issue a force shutdown request to stop the VM. (BZ#2040766)
The YAML examples in the VM wizard have now been updated to contain the latest upstream changes. (BZ#2055492)
The Add Network Interface button on the VM Network Interfaces tab is no longer disabled for non-privileged users. (BZ#2056420)
A non-privileged user can now successfully add disks to a VM without getting an RBAC rule error. (BZ#2056421)
The web console now successfully displays virtual machine templates that are deployed to a custom namespace. (BZ#2054650)
Previously, updating a Single Node OpenShift (SNO) cluster failed if the spec.evictionStrategy field was set to LiveMigrate for a VMI. For live migration to succeed, the cluster must have more than one compute node. With this update, the spec.evictionStrategy field is removed from the virtual machine template in a SNO environment. As a result, cluster update is now successful. (BZ#2073880)
You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp counter (TSC) scaling or that do not have the appropriate TSC frequency. (BZ#2151169)
When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)
As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
Restoring a VM snapshot fails if you update OpenShift Container Platform to version 4.11 without also updating OpenShift Virtualization. This is due to a mismatch between the API versions used for snapshot objects. (BZ#2159442)
As a workaround, update OpenShift Virtualization to the same minor version as OpenShift Container Platform. To ensure that the versions are kept in sync, use the recommended Automatic approval strategy.
Uninstalling OpenShift Virtualization does not remove the node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-22036)
The OVN-Kubernetes cluster network provider crashes from peak RAM and CPU usage if you create a large number of NodePort services. This can happen if you use NodePort services to expose SSH access to a large number of virtual machines (VMs). (OCPBUGS-1940)
As a workaround, use the OpenShift SDN cluster network provider if you want to expose SSH access to a large number of VMs via NodePort services.
Updating to OpenShift Virtualization 4.11 from version 4.10 is blocked until you install the standalone Kubernetes NMState Operator. This occurs even if your cluster configuration does not use any nmstate resources. (BZ#2126537)
As a workaround:
Verify that there are no node network configuration policies defined on the cluster:
$ oc get nncp
Choose the appropriate method to update OpenShift Virtualization:
If the list of node network configuration policies is not empty, exit this procedure and install the Kubernetes NMState Operator to preserve and support your existing nmstate configuration.
If the list is empty, go to step 3.
Annotate the HyperConverged custom resource (CR). The following command overwrites any existing JSON patches:
$ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[{"op": "replace","path": "/spec/nmstate", "value": null}]'
Update OpenShift Virtualization.
After the update completes, remove the annotation by running the following command:
$ oc annotate -n openshift-cnv hco kubevirt-hyperconverged networkaddonsconfigs.kubevirt.io/jsonpatch-
Optional: Add back any previously configured JSON patches that were overwritten.
Some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI) can cause the virtual machine snapshot restore operation to hang indefinitely. (BZ#2070366)
As a workaround, you can remove the annotations manually:
Obtain the VirtualMachineSnapshotContent custom resource (CR) name from the status.virtualMachineSnapshotContentName value in the VirtualMachineSnapshot CR.
Edit the VirtualMachineSnapshotContent CR and remove all lines that contain k8s.io/cloneRequest. (A command sketch for these two steps follows this procedure.)
If you did not specify a value for spec.dataVolumeTemplates in the VirtualMachine object, delete any DataVolume and PersistentVolumeClaim objects in this namespace where both of the following conditions are true:
The object's name begins with restore-.
The object is not referenced by virtual machines.
This step is optional if you specified a value for spec.dataVolumeTemplates.
Repeat the restore operation with the updated VirtualMachineSnapshot CR.
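A minimal command sketch for the first two steps, assuming the snapshot is named <snapshot_name>; the lowercase resource names follow the snapshot.kubevirt.io API, so verify them with oc api-resources before running the commands:
$ oc get virtualmachinesnapshot <snapshot_name> -o jsonpath='{.status.virtualMachineSnapshotContentName}'
$ oc edit virtualmachinesnapshotcontent <content_name>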
Windows 11 virtual machines do not boot on clusters running in FIPS mode. Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. (BZ#2089301)
In a Single Node OpenShift (SNO) cluster, a VMCannotBeEvicted alert occurs on virtual machines that are created from common templates that have the eviction strategy set to LiveMigrate. (BZ#2092412)
The QEMU guest agent on a Fedora 35 virtual machine is blocked by SELinux and does not report data. Other Fedora versions might be affected. (BZ#2028762)
As a workaround, disable SELinux on the virtual machine, run the QEMU guest agent commands, and then re-enable SELinux.
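A minimal sketch of that workaround, run inside the guest with root privileges; setenforce switches SELinux to permissive mode rather than fully disabling it, which is typically sufficient:
$ sudo setenforce 0     # switch SELinux to permissive mode
# run the QEMU guest agent commands here
$ sudo setenforce 1     # restore enforcing mode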
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
If you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage, cloning more than 100 VMs at once might fail. (BZ#1989527)
As a workaround, you can perform a host-assisted copy by setting spec.cloneStrategy: copy in the storage profile manifest. For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
  cloneStrategy: copy (1)
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>
(1) The default cloning method is set to copy.
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
As a workaround, silence alerts.
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
If you configure the HyperConverged custom resource (CR) to enable mediated devices before drivers are installed, the new device configuration does not take effect. This issue can be triggered by updates. For example, if virt-handler is updated before the daemon set that installs the NVIDIA drivers, then nodes cannot provide virtual machine GPUs. (BZ#2046298)
As a workaround:
Remove mediatedDevicesConfiguration and permittedHostDevices from the HyperConverged CR.
Update both the mediatedDevicesConfiguration and permittedHostDevices stanzas with the configuration you want to use.
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)
As a workaround, you can restart the ceph-mgr to purge the VM clones.
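A hedged sketch of one way to restart the manager on a default OpenShift Data Foundation deployment; the namespace and pod label are assumptions, so confirm them before deleting the pod:
$ oc delete pod -n openshift-storage -l app=rook-ceph-mgr    # the owning deployment recreates the mgr pod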