Container-native virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.
Container-native virtualization adds new objects into your OpenShift Container Platform cluster via Kubernetes custom resources to enable virtualization tasks. These tasks include:
Creating and managing Linux and Windows virtual machines
Connecting to virtual machines through a variety of consoles and CLI tools
Importing and cloning existing virtual machines
Managing network interface controllers and storage disks attached to virtual machines
Live migrating virtual machines between nodes
An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
You can use container-native virtualization with either the OVN-Kubernetes or the OpenShiftSDN network provider.
Container-native virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Container-native virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
SVVP Certification applies to Intel and AMD CPUs.
The SVVP certificate for Red Hat Enterprise Linux CoreOS 8 workers is named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
New templates are available that are compatible with Microsoft Windows 10.
You can now use container-native virtualization with either the OVN-Kubernetes or the OpenShiftSDN network provider.
Container-native virtualization uses nmstate to report on and configure the state of the node network. You can now modify network policy configuration, such as creating or removing Linux bridges, bonds, and VLAN devices on all nodes, by applying a single configuration manifest to the cluster.
For more information, see the node networking chapter.
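As an illustration, a single manifest of the following shape can create a Linux bridge on all nodes. This is a sketch only: the policy name br1-policy, the bridge name br1, and the port eth1 are placeholder assumptions, and the exact nmstate API version can vary by release.

```yaml
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy              # hypothetical policy name
spec:
  desiredState:
    interfaces:
      - name: br1               # hypothetical bridge name
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1        # hypothetical port; use a NIC present on your nodes
```

Applying this manifest with oc apply configures the bridge on every node; removing the bridge works the same way, with the interface state set to absent.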
You can now import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
The virtctl tool can now monitor server-side upload post-processing asynchronously and more accurately reports the status of virtual machine disk uploads.
You can now view the Paused status of a virtual machine in the web console. The web console displays this status on the Virtual Machines dashboard and on the Virtual Machine Details page.
You can also view the Paused status of a virtual machine from the command line by running the following command:
$ oc get vmi testvm -o=jsonpath='{.status.conditions[?(@.type=="Paused")].message}'
If a virtual machine has a status of Paused, you can now unpause it from the web console.
You can view and manage virtual machine instances that are independent of virtual machines within the virtual machine lists.
Container-native virtualization enables you to configure a CD-ROM in the virtual machine wizard. You can select the type of CD-ROM configuration from a drop-down list: Container, URL, or Attach Disk. You can also edit CD-ROM configurations on the Virtual Machine Details page.
The boot order list is now available on the Virtual Machine Details page. You can add, remove, and reorder items in the boot order list.
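In the virtual machine specification, the boot order corresponds to bootOrder values on individual devices, where lower numbers boot first. A minimal sketch, assuming hypothetical device names rootdisk and default:

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk      # hypothetical disk name
              bootOrder: 1        # lower numbers boot first
              disk:
                bus: virtio
          interfaces:
            - name: default       # hypothetical interface name
              bootOrder: 2
              masquerade: {}
```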
The virtual machine and virtual machine template wizards now validate CPU and memory sizes and disk bus requirements for different operating systems. If you attempt to create a virtual machine or virtual machine template that does not meet the requirements for a particular operating system, the wizard raises a resource warning.
You can now view and configure dedicated resources for virtual machines in the Virtual Machine Details page. When you enable dedicated resources, you ensure that your virtual machine’s workload runs on CPUs that are not used by other processes. This can improve your virtual machine’s performance and the accuracy of latency predictions.
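In the virtual machine specification, dedicated resources correspond to the dedicatedCpuPlacement field in the domain CPU section. A minimal sketch of the relevant fragment (the core count is a placeholder):

```yaml
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2                      # placeholder core count
          dedicatedCpuPlacement: true   # pin the workload to CPUs not shared with other processes
```

Enabling this requires that the cluster has nodes with CPU Manager configured for static CPU pinning.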
You must deploy container-native virtualization in a single namespace that is named openshift-cnv. If it does not already exist, the openshift-cnv namespace is now automatically created during deployment.
To prevent errors, you can no longer rename the CNV Operator Deployment custom resource that you create during container-native virtualization deployment. If you try to create a custom resource that is not named kubevirt-hyperconverged, which is the default name, creation fails and an error message displays in the web console.
To prevent unintentional data loss, you can no longer uninstall container-native virtualization if your cluster has a virtual machine or DataVolume defined.
You must manually delete all virtual machines (VMs), virtual machine instances (VMIs), and DataVolumes (DVs) before uninstalling container-native virtualization.
If VM, VMI, or DV objects are present when you attempt to uninstall container-native virtualization, the uninstallation process does not complete until you remove the remaining objects.
To confirm that uninstallation is paused due to a pending object, view the Events tab.
KubeMacPool is disabled in container-native virtualization 2.3. This means that a secondary interface of a Pod or virtual machine obtains a randomly generated MAC address rather than a unique one from a pool. Although rare, randomly assigned MAC addresses can conflict. (BZ#1816971)
Sometimes, when attempting to edit the subscription channel of the Container-native virtualization Operator in the web console, clicking the Channel button of the Subscription Overview results in a JavaScript error. (BZ#1796410)
As a workaround, trigger the upgrade process to container-native virtualization 2.3 from the CLI by running the following oc patch command:
$ export TARGET_NAMESPACE=openshift-cnv CNV_CHANNEL=2.3 && oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value":"'${CNV_CHANNEL}'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value":"Automatic"}]'
This command points your subscription to upgrade channel 2.3 and enables automatic updates.
In the virtual machine and virtual machine template wizards, virtIO is the default interface when you attach a CD-ROM. However, a virtIO CD-ROM does not pass virtual machine validation and cannot be created. (BZ#1817394)
As a workaround, select SATA as the CD-ROM interface when you create virtual machines and virtual machine templates.
The Containerized Data Importer (CDI) does not always use the scratchSpaceStorageClass setting in the CDIConfig object for importing and uploading operations. Instead, the CDI uses the default storage class to allocate scratch space. (BZ#1828198)
As a workaround, ensure that you have defined a default storage class for your cluster. You can apply the necessary annotation by running the following command:
$ oc patch storageclass <STORAGE_CLASS_NAME> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
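For reference, the scratchSpaceStorageClass setting affected by this issue lives in the spec of the CDIConfig object. A sketch of the fragment, assuming the object is named config (the storage class name is a placeholder, and the exact CDI API version can vary by release):

```yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: CDIConfig
metadata:
  name: config
spec:
  scratchSpaceStorageClass: "<storage_class_name>"  # placeholder storage class
```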
If you renamed the Operator deployment custom resource when you deployed an earlier version of container-native virtualization, you cannot upgrade directly to container-native virtualization 2.3. The custom resource must be named kubevirt-hyperconverged, which is the default name. (BZ#1822266)
As a workaround, you can either:
Rename the existing custom resource to kubevirt-hyperconverged.
Create a new custom resource with the default name kubevirt-hyperconverged, and then delete the custom resource that is not named kubevirt-hyperconverged.
The OpenShift Container Platform 4.4 web console includes slirp as an option when you add a NIC to a virtual machine, but slirp is not a valid NIC type. Do not select slirp when adding a NIC to a virtual machine. (BZ#1828744)
After migration, a virtual machine is assigned a new IP address. However, the commands oc get vmi and oc describe vmi still generate output containing the obsolete IP address. (BZ#1686208)
As a workaround, view the correct IP address by running the following command:
$ oc get pod -o wide
Users without administrator privileges cannot add a network interface to a project in an L2 network using the virtual machine wizard. This issue is caused by missing permissions that allow users to load network attachment definitions. (BZ#1743985)
As a workaround, provide the user with permissions to load the network attachment definitions.
Define ClusterRole and ClusterRoleBinding objects in a YAML configuration file, using the following examples:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cni-resources
rules:
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <role-binding-name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cni-resources
subjects:
- kind: User
  name: <user to grant the role to>
  namespace: <namespace of the user>
As a cluster-admin user, run the following command to create the ClusterRole and ClusterRoleBinding objects that you defined:
$ oc create -f <filename>.yaml
Live migration fails when nodes have different CPU models. Even in cases where nodes have the same physical CPU model, differences introduced by microcode updates have the same effect. This is because the default settings trigger host CPU passthrough behavior, which is incompatible with live migration. (BZ#1760028)
As a workaround, set the default CPU model in the kubevirt-config ConfigMap, as shown in the following example:
You must make this change before starting the virtual machines that support live migration.
Open the kubevirt-config ConfigMap for editing by running the following command:
$ oc edit configmap kubevirt-config -n openshift-cnv
Edit the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" (1)
(1) Replace <cpu-model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
When attempting to create and launch a virtual machine using a Haswell CPU, the launch of the virtual machine can fail due to incorrectly labeled nodes. This is a change in behavior from previous versions of container-native virtualization, where virtual machines could be successfully launched on Haswell hosts. (BZ#1781497)
As a workaround, select a different CPU model, if possible.
The container-native virtualization upgrade process occasionally fails due to an interruption from the Operator Lifecycle Manager (OLM). This issue is caused by the limitations associated with using a declarative API to track the state of container-native virtualization Operators. Enabling automatic updates during installation decreases the risk of encountering this issue. (BZ#1759612)
Container-native virtualization cannot reliably identify node drains that are triggered by running either oc adm drain or kubectl drain. Do not run these commands on the nodes of any clusters where container-native virtualization is deployed. The nodes might not drain if there are virtual machines running on them. The current solution is to put nodes into maintenance. (BZ#1707427)
If you navigate to the Subscription tab on the Operators → Installed Operators page and click the current upgrade channel to edit it, there might be no visible results. If this occurs, there are no visible errors. (BZ#1796410)
As a workaround, trigger the upgrade process to container-native virtualization 2.3 from the CLI by running the following oc patch command:
$ export TARGET_NAMESPACE=openshift-cnv CNV_CHANNEL=2.3 && oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value":"'${CNV_CHANNEL}'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value":"Automatic"}]'
This command points your subscription to upgrade channel 2.3 and enables automatic updates.