About container-native virtualization

What you can do with container-native virtualization

Container-native virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.

Container-native virtualization adds new objects to your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:

  • Creating and managing Linux and Windows virtual machines

  • Connecting to virtual machines through a variety of consoles and CLI tools

  • Importing and cloning existing virtual machines, including VMware virtual machines

  • Managing network interface controllers and storage disks attached to virtual machines

  • Live migrating virtual machines between nodes

An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.

Container-native virtualization support

Container-native virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

New and changed features

Supported binding methods

  • Open vSwitch (OVS) is no longer recommended and should not be used in container-native virtualization 2.0.

  • For the default Pod network, masquerade is the only recommended binding method. It is not supported for non-default networks.

  • For secondary networks, use the bridge binding method.
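
As an illustration, the binding method is selected per interface in the virtual machine definition. The following is a hedged sketch of the relevant fragment, assuming the KubeVirt VirtualMachineInstance schema shipped with container-native virtualization 2.0; the names default, vlan100, and vlan100-net are placeholders:

```yaml
spec:
  domain:
    devices:
      interfaces:
      - name: default
        masquerade: {}      # default Pod network: masquerade binding
      - name: vlan100
        bridge: {}          # secondary network: bridge binding
  networks:
  - name: default
    pod: {}
  - name: vlan100
    multus:
      networkName: vlan100-net   # placeholder NetworkAttachmentDefinition name
```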

Web console improvements

  • You can now view all services associated with a virtual machine in the Virtual Machine Details screen.

Resolved issues

  • If a Containerized Data Importer (CDI) import fails, deleting the PVC no longer results in the importer Pod getting stuck in a CrashLoopBackOff state. The PVC is now deleted normally. (BZ#1673683)

Known issues

  • Some KubeVirt resources are improperly retained when you remove container-native virtualization. As a workaround, you must manually remove them by running the command oc delete apiservices v1alpha3.subresources.kubevirt.io -n kubevirt-hyperconverged. These resources will be removed automatically after the bug is resolved. (BZ#1712429)
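
The workaround above can be run as follows; this sketch assumes cluster-admin privileges and an oc client logged in to the affected cluster:

```shell
# Remove the retained KubeVirt API service left behind after
# uninstalling container-native virtualization (cluster-admin required).
oc delete apiservices v1alpha3.subresources.kubevirt.io -n kubevirt-hyperconverged
```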

  • When using an older version of virtctl with container-native virtualization 2.0, virtctl cannot connect to the requested virtual machine. On the client, update the virtctl RPM package to the latest version to resolve this issue. (BZ#1706108)

  • Interfaces connected to the default Pod network lose connectivity after live migration. As a workaround, use an additional multus-backed network. (BZ#1693532)

  • Container-native virtualization cannot reliably identify node drains that are triggered by running either oc adm drain or kubectl drain. Do not run these commands on the nodes of any cluster where container-native virtualization is deployed, because the nodes might not drain if virtual machines are running on them. The current solution is to put nodes into maintenance instead. (BZ#1707427)

  • If you create a virtual machine with the Pod network connected in bridge mode and use a cloud-init disk, the virtual machine will lose its network connectivity after being restarted. As a workaround, remove the HWADDR line in the file /etc/sysconfig/network-scripts/ifcfg-eth0. (BZ#1708680)
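
The HWADDR workaround can be applied inside the guest with a one-line sed edit. The sketch below demonstrates the edit on a sample copy of the file; inside the guest, the real path is /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
# Create a sample ifcfg file (illustrative contents only).
cat > /tmp/ifcfg-eth0 <<'EOF'
DEVICE=eth0
HWADDR=52:54:00:12:34:56
BOOTPROTO=dhcp
ONBOOT=yes
EOF

# Delete the HWADDR line; inside the guest, run this against
# /etc/sysconfig/network-scripts/ifcfg-eth0 instead.
sed -i '/^HWADDR/d' /tmp/ifcfg-eth0
```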

  • Masquerade mode does not currently work with container-native virtualization. Due to an upstream issue, you cannot connect a virtual machine to the default Pod network in masquerade mode. (BZ#1725848)

  • When you create a NIC in masquerade mode by using the wizard, you cannot specify the port option. (BZ#1725848)

  • If a virtual machine uses guaranteed CPUs, it will not be scheduled, because the label cpumanager=true is not automatically set on nodes. As a workaround, remove the CPUManager entry from the kubevirt-config ConfigMap. Then, manually label the nodes with cpumanager=true before running virtual machines with guaranteed CPUs on your cluster. (BZ#1718944)
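
The labeling step in the workaround above can be done with oc label; here, <node-name> is a placeholder for each node that should run virtual machines with guaranteed CPUs:

```shell
# Manually apply the label that scheduling of guaranteed-CPU VMs expects.
oc label node <node-name> cpumanager=true

# List the nodes that now carry the label.
oc get nodes -l cpumanager=true
```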

  • If you use the web console to create a virtual machine template that has the same name as an existing virtual machine, the operation fails and the message Name is already used by another virtual machine is displayed. As a workaround, create the template from the command line. (BZ#1717930)

  • ReadWriteMany (RWX) is the only supported storage access mode for live migration, importing VMware virtual machines, and creating virtual machines by using the wizard. (BZ#1724654)