Red Hat OpenShift Virtualization is supported for use on OpenShift Container Platform 4.5 clusters. Previously known as container-native virtualization, OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
OpenShift Virtualization is represented by a new logo.
OpenShift Virtualization is a feature of OpenShift Container Platform that you can use to run and manage virtual machine workloads alongside container workloads.
OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster via Kubernetes custom resources to enable virtualization tasks. These tasks include:
Creating and managing Linux and Windows virtual machines
Connecting to virtual machines through a variety of consoles and CLI tools
Importing and cloning existing virtual machines
Managing network interface controllers and storage disks attached to virtual machines
Live migrating virtual machines between nodes
An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
OpenShift Virtualization is tested with OpenShift Container Storage (OCS) and is designed for use with OCS features for the best experience.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider.
You can now install OpenShift Virtualization by using the CLI to apply manifests to your OpenShift Container Platform cluster.
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
Red Hat Enterprise Linux CoreOS 8 workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
Intel and AMD CPUs.
OpenShift Virtualization rotates and renews TLS certificates at regular intervals. This automatic process does not disrupt any operations.
This release features significant security enhancements. OpenShift Virtualization now supports SELinux with Mandatory Access Control (MAC) for isolating virtual machines (VMs). Previously, all VMs were managed by using privileged Security Context Constraints (SCC). Now, you can use less privileged custom SCCs for VMs and limit the use of privileged SCCs to infrastructure containers in the cluster.
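For example, one quick way to confirm which SCC a running VM's virt-launcher Pod was admitted under is to check the openshift.io/scc annotation on the Pod; the Pod and namespace names below are placeholders:
$ oc get pod <virt-launcher-pod-name> -n <namespace> -o yaml | grep openshift.io/scc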
You can now enable access to your Red Hat Enterprise Linux entitlement for RHEL virtual machines. Configure the virt-who daemon to report the running VMs in your OpenShift Container Platform cluster. This gives the Red Hat Subscription Manager in the RHEL VM access to your entitlements.
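As a rough sketch only, virt-who reads its hypervisor configuration from files under /etc/virt-who.d/. The section name, option names (type, kubeconfig, owner, hypervisor_id), and values below are assumptions that you should verify against the virt-who documentation for your version:
[openshift-virtualization]
type=kubevirt
kubeconfig=/path/to/kubeconfig
owner=<organization_id>
hypervisor_id=hostname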
OpenShift Virtualization guests can use the following operating systems:
Red Hat Enterprise Linux 6, 7, and 8.
Microsoft Windows Server 2012 R2, 2016, and 2019.
Microsoft Windows 10.
Other operating system templates shipped with OpenShift Virtualization are not supported.
OpenShift Virtualization is now integrated with the OpenShift Container Platform Single Root I/O Virtualization (SR-IOV) Operator. You can now attach virtual machines to SR-IOV networks in your cluster.
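A minimal sketch of a virtual machine that attaches a secondary SR-IOV interface, assuming an existing SR-IOV network attachment definition named sriov-network (the names, API version, and memory request are placeholders):
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-sriov-vm
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}        # default pod network interface
          - name: sriov-net
            sriov: {}             # SR-IOV interface
        resources:
          requests:
            memory: 2Gi
      networks:
      - name: default
        pod: {}
      - name: sriov-net
        multus:
          networkName: sriov-network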
MAC address pool is now supported in OpenShift Virtualization. It is disabled by default in the cluster and can be enabled per namespace.
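For example, assuming the KubeMacPool namespace label used by this release, a namespace can be opted in with a command similar to the following (the namespace name is a placeholder):
$ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=allocate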
You can now configure the Volume Mode and Access Mode for a virtual disk when you add a disk to a virtual machine in the web console. This is also possible when adding a disk to a new virtual machine using the wizard.
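The same settings can also be expressed declaratively on a DataVolume. The following is a minimal sketch; the API version, storage class name, and size are assumptions for illustration:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: example-blank-dv
spec:
  source:
    blank: {}
  pvc:
    storageClassName: <storage-class-name>
    volumeMode: Block
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 10Gi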
Using OpenShift Container Storage (OCS) with OpenShift Virtualization gives you the benefits of fault-tolerant storage and the ability to live migrate between nodes.
You can now use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. The default compute resource limits are set to 0, but administrators can configure the resource limits applied to CDI worker Pods.
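As a sketch of what that might look like, assuming the CDIConfig resource exposes a podResourceRequirements field in this release (verify the resource and field names against your installed CDI version):
apiVersion: cdi.kubevirt.io/v1alpha1
kind: CDIConfig
metadata:
  name: config
spec:
  podResourceRequirements:
    requests:
      cpu: "1"
      memory: 250Mi
    limits:
      cpu: "4"
      memory: 2Gi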
The virtctl tool can now use a DataVolume when uploading virtual machine disks to the cluster. This helps prevent virtual machines from being inadvertently started before an upload has completed.
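For example, a DataVolume-backed upload might look like the following; the DataVolume name, size, and image path are placeholders:
$ virtctl image-upload dv example-dv --size=10Gi --image-path=/path/to/disk.qcow2 --insecure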
Containerized Data Importer (CDI) DataVolumes have been enhanced with conditions and events that make it easier to understand the state of virtual disk imports, clones, and upload operations. Conditions and events also simplify troubleshooting.
In the web console, the sidebar items Virtual Machines and Virtual Machine Templates have been replaced by a single sidebar menu item labeled Virtualization. When you click Virtualization, you have access to two tabs: Virtual Machines and Virtual Machine Templates.
You can now configure the scheduling properties of virtual machines by accessing the Scheduling and resources requirements section of the Virtual Machine Details page. For example, you can view and manage affinity rules, dedicated resources, and tolerations for tainted nodes. You can also search for nodes with labels that match specific key/value pairs by using the Node Selector.
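These properties map to fields in the virtual machine specification. A minimal sketch, with placeholder label, toleration, and resource values:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  template:
    spec:
      nodeSelector:
        example.com/disk: ssd           # schedule only on nodes with this label
      tolerations:
      - key: example.com/maintenance    # tolerate a hypothetical node taint
        operator: Exists
        effect: NoSchedule
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # request dedicated physical CPUs
        devices: {}
        resources:
          requests:
            memory: 2Gi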
You can now add secrets, ConfigMaps, and service accounts to a virtual machine on the Virtual Machine Overview → Environment page of the OpenShift Container Platform web console. You can also remove these resources on the same page.
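Behind the scenes, these resources are attached to the virtual machine as volumes. A sketch of what the resulting configuration might contain, assuming a ConfigMap named app-config exists in the same namespace:
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: app-config-disk
            serial: CONFIGMAP1       # serial number exposed to the guest
            disk: {}
      volumes:
      - name: app-config-disk
        configMap:
          name: app-config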
OpenShift Virtualization can be installed on disconnected clusters in restricted network environments that do not have Internet connectivity. You can create local mirrors for the OpenShift Virtualization Operator and install the Operator from a local catalog image that is accessible to the disconnected cluster. Learn more about Installing OpenShift Virtualization on a restricted network cluster.
You can import a single Red Hat Virtualization virtual machine by using the virtual machine wizard or the CLI.
Every component in OpenShift Virtualization now uses its own API subgroup, <component_name>.kubevirt.io.
If you enable a MAC address pool for a namespace by applying the KubeMacPool label and using the io attribute for virtual machines in that namespace, the io attribute configuration is not retained for the VMs. As a workaround, do not use the io attribute for VMs. Alternatively, you can disable KubeMacPool for the namespace. (BZ#1869527)
If container-native virtualization 2.3 is installed on your OpenShift Container Platform 4.4 cluster, upgrading the cluster to version 4.5 may cause a migrating virtual machine instance (VMI) to fail. This is because the virt-launcher Pod does not successfully notify the virt-handler Pod that migration has failed. The result is that the source VMI migrationState is not updated. (BZ#1859661)
As a workaround, delete the virt-handler Pod on the source node where the VMI is running. This restarts the virt-handler Pod, which updates the VMI status and restarts VMI migration:
Find the name of the source node where the VMI is running:
$ oc get vmi -o wide
Delete the virt-handler Pod on the source node:
$ oc delete pod -n openshift-cnv --selector=kubevirt.io=virt-handler --field-selector=spec.nodeName=<source-node-name> (1)
(1) Where <source-node-name> is the name of the source node that the VMI is migrating from.
Common templates in previous versions of OpenShift Virtualization had a default spec.terminationGracePeriodSeconds value of 0. Virtual machines created from these older common templates can encounter disk issues from being forcefully terminated.
If you upgrade to OpenShift Virtualization 2.4, both older and newer versions of common templates are available for each combination of operating system, workload, and flavor. When you create a virtual machine by using a common template, you must use the newer version of the template. Disregard the older version to avoid issues. (BZ#1859235)
To verify whether a virtual machine is affected by this bug, run the following command in the namespace of the virtual machine to determine the spec.terminationGracePeriodSeconds value:
$ oc get vm <virtual-machine-name> -o yaml | grep "terminationGracePeriodSeconds"
If the virtual machine has a terminationGracePeriodSeconds value of 0, patch the virtual machine config with a spec.terminationGracePeriodSeconds value of 180 for Linux virtual machines, or a value of 3600 for Windows virtual machines:
$ oc patch vm <virtual-machine-name> --type merge -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":180}}}}'
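For Windows virtual machines, apply the same patch with the 3600 value:
$ oc patch vm <virtual-machine-name> --type merge -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":3600}}}}'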
Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath-provisioner storage or SR-IOV network interfaces.
As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file:
Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.
Set the runStrategy field to Always, as shown in the sketch after this list.
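A minimal sketch of the resulting virtual machine configuration (the name and memory request are placeholders), with no evictionStrategy field in the template spec and the run strategy set:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always      # set per the workaround; evictionStrategy: LiveMigrate is removed from the template spec
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi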
For unknown reasons, memory consumption for the containerDisk volume type might gradually increase until it exceeds the memory limit. To resolve this issue, restart the VM. (BZ#1855067)
Sometimes, when attempting to edit the subscription channel of the OpenShift Virtualization Operator in the web console, clicking the Channel button of the Subscription Overview results in a JavaScript error. (BZ#1796410)
As a workaround, trigger the upgrade process to OpenShift Virtualization 2.4 from the CLI by running the following oc patch command:
$ export TARGET_NAMESPACE=openshift-cnv CNV_CHANNEL=2.4 && oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value":"'${CNV_CHANNEL}'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value":"Automatic"}]'
This command points your subscription to upgrade channel 2.4 and enables automatic updates.
After migration, a virtual machine is assigned a new IP address. However, the commands oc get vmi and oc describe vmi still generate output containing the obsolete IP address. (BZ#1686208)
As a workaround, view the correct IP address by running the following command:
$ oc get pod -o wide
Live migration fails when nodes have different CPU models. Even in cases where nodes have the same physical CPU model, differences introduced by microcode updates have the same effect. This is because the default settings trigger host CPU passthrough behavior, which is incompatible with live migration. (BZ#1760028)
As a workaround, set the default CPU model in the kubevirt-config ConfigMap, as shown in the following example:
You must make this change before starting the virtual machines that support live migration.
Open the kubevirt-config ConfigMap for editing by running the following command:
$ oc edit configmap kubevirt-config -n openshift-cnv
Edit the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" (1)
(1) Replace <cpu-model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
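For example, to inspect the CPU model labels on a single node:
$ oc describe node <node> | grep cpu-model-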
OpenShift Virtualization cannot reliably identify node drains that are triggered by running either oc adm drain or kubectl drain. Do not run these commands on the nodes of any clusters where OpenShift Virtualization is deployed. The nodes might not drain if there are virtual machines running on top of them.
The current solution is to put nodes into maintenance.
You must create a custom ConfigMap in order to import a Red Hat Virtualization (RHV) VM into OpenShift Virtualization.
You cannot import a RHV VM if the target VM name exceeds 63 characters. (BZ#1857165)
If the OpenShift Virtualization storage PV is not suitable for importing a RHV VM, the progress bar remains at 10% and the import does not complete. The VM Import Controller Pod log displays the following error message: Failed to bind volumes: provisioning failed for PVC. (BZ#1857784)
If you enter the wrong credentials for the RHV Manager while importing a RHV VM, the Manager might lock the admin user account because the vm-import-operator tries repeatedly to connect to the RHV API. To unlock the account, log in to the Manager and enter the following command:
$ ovirt-aaa-jdbc-tool user unlock admin
If a RHV VM disk is in a Locked state, you must unlock the disk before you can import it.
cloud-init settings are not imported with a RHV virtual machine. You must recreate cloud-init after the import process.
OpenShift Virtualization does not support UEFI. If you import a VMware VM with UEFI BIOS into OpenShift Virtualization, the VM will not boot. (BZ#1880083)