Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
OpenShift Virtualization 4.8 is supported for use on OpenShift Container Platform 4.8 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
OpenShift Virtualization guests can use the following operating systems:
Red Hat Enterprise Linux 6, 7, and 8.
Microsoft Windows Server 2012 R2, 2016, and 2019.
Microsoft Windows 10.
Other operating system templates shipped with OpenShift Virtualization are not supported.
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS.
Intel and AMD CPUs.
The Containerized Data Importer (CDI) now uses the OpenShift Container Platform cluster-wide proxy configuration.
OpenShift Virtualization now supports third-party Container Network Interface (CNI) plug-ins that are certified by Red Hat for use with OpenShift Container Platform.
OpenShift Virtualization now provides metrics for monitoring how infrastructure resources are consumed in the cluster. You can use the OpenShift Container Platform monitoring dashboard to query metrics for the following resources:
vCPU
Network
Storage
Guest memory swapping
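For example, you can query these resources from the monitoring dashboard's Metrics page. The metric names below follow the upstream KubeVirt project and are shown only as illustrations; verify the exact names available in your cluster:

```
kubevirt_vmi_vcpu_wait_seconds                    # vCPU time spent waiting
kubevirt_vmi_network_receive_bytes_total          # network traffic received by the VMI
kubevirt_vmi_storage_iops_read_total              # storage read I/O operations
kubevirt_vmi_memory_swap_in_traffic_bytes_total   # guest memory swap-in traffic
```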
OpenShift Virtualization now provides a unified API to configure certificate rotation.
If a Windows virtual machine is created from a template or has predefined Hyper-V capabilities, it can now only be scheduled to Hyper-V capable nodes.
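For reference, predefined Hyper-V capabilities are declared under the domain features of the virtual machine spec. This is a minimal sketch; the feature names follow the KubeVirt API, and the VM name and values are examples:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: windows-vm          # example name
spec:
  template:
    spec:
      domain:
        features:
          hyperv:           # predefined Hyper-V enlightenments
            relaxed: {}
            vapic: {}
            spinlocks:
              spinlocks: 8191
```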
The --proxy-only option for the virtctl vnc command allows you to manually connect to a virtual machine instance through a Virtual Network Client (VNC) connection by using any VNC viewer.
Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon (?) in the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
You can use the Kubernetes NMState Operator to configure and manage IP addresses on your cluster nodes.
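As an illustration, a NodeNetworkConfigurationPolicy (the NMState custom resource) can assign a static IP address to a node interface. The policy name, interface name, and address below are placeholders:

```yaml
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-static-ip       # example policy name
spec:
  desiredState:
    interfaces:
      - name: eth1           # example secondary interface
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.0.2.10 # example address
              prefix-length: 24
```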
OpenShift Virtualization now supports live migration of virtual machines that are attached to an SR-IOV network interface if the sriovLiveMigration feature gate is enabled in the HyperConverged custom resource (CR).
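For example, the feature gate can be enabled by editing the HyperConverged CR in the openshift-cnv namespace. A minimal sketch:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    sriovLiveMigration: true  # allow live migration of VMs with SR-IOV interfaces
```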
Cloning a data volume into a different namespace is now faster and more efficient when using storage that supports Container Storage Interface (CSI) snapshots. The Containerized Data Importer (CDI) uses CSI snapshots, when they are available, to improve performance when you create a virtual machine from a template.
When the fstrim or blkdiscard commands are run on a virtual disk, the discard requests are passed to the underlying storage device. If the storage provider supports the Pass Discard feature, the discard requests free up storage capacity.
You can now specify data volumes by using the storage API. The storage API, unlike the PVC API, allows the system to optimize accessModes, volumeMode, and storage capacity when allocating storage.
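A minimal sketch of a data volume that uses the storage API; the name and source URL are placeholders. Because the storage API is used, accessModes and volumeMode can be omitted and the system chooses optimal values:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv                        # example name
spec:
  source:
    http:
      url: "http://example.com/disk.img"  # example source image
  storage:                                # storage API instead of the pvc API
    resources:
      requests:
        storage: 10Gi
```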
You can now clone virtual machine disks between different data volume modes if they have the content type kubevirt. For example, you can clone a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem.
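For example, a cloning data volume might look like the following sketch, where the source PVC uses block mode and the target requests filesystem mode. All names are placeholders:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-dv            # example name
spec:
  source:
    pvc:
      namespace: source-ns   # example source namespace
      name: source-pvc       # example block-mode source PVC
  pvc:
    accessModes:
      - ReadWriteOnce
    volumeMode: Filesystem   # target mode differs from the source
    resources:
      requests:
        storage: 10Gi
```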
You can create a custom disk image as a boot source for any template that has a defined source by running a wizard in the OpenShift Virtualization console.
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
Importing a single virtual machine from Red Hat Virtualization (RHV) or VMware is deprecated in the current release and will be removed in OpenShift Virtualization 4.9. This feature is replaced by the Migration Toolkit for Virtualization.
OpenShift Virtualization now configures IPv6 addresses when running on clusters that have dual-stack networking enabled. You can create a service that uses IPv4, IPv6, or both IP address families, if dual-stack networking is enabled for the underlying OpenShift Container Platform cluster.
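For example, a service that requests both address families can set the standard Kubernetes ipFamilyPolicy field. The service name and selector below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm-service                 # example name
spec:
  ipFamilyPolicy: PreferDualStack  # use both IPv4 and IPv6 when available
  selector:
    app: example-vm                # example selector
  ports:
    - port: 22
      protocol: TCP
```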
KubeMacPool is now enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Re-enable KubeMacPool for the namespace by removing the label.
The HyperConverged custom resource (CR) is now the central point of configuration for OpenShift Virtualization. You can change the OpenShift Virtualization configuration by editing the HyperConverged CR.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the scope of support for these features on the Red Hat Customer Portal.
You can now hot-plug and hot-unplug virtual disks when you want to add or remove them from your virtual machine without stopping the virtual machine instance.
Updating to OpenShift Virtualization 4.8.7 causes some virtual machines (VMs) to get stuck in a live migration loop. This occurs if the spec.volumes.containerDisk.path field in the VM manifest is set to a relative path.

As a workaround, delete and recreate the VM manifest, setting the value of the spec.volumes.containerDisk.path field to an absolute path. You can then update OpenShift Virtualization.
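A sketch of the corrected volume entry; the volume and image names are placeholders:

```yaml
spec:
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/example/my-disk:latest  # example image
        path: /disk/image.qcow2                # absolute path, not ./disk/image.qcow2
```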
If you initially deployed OpenShift Virtualization version 2.4.z or earlier, upgrading to version 4.8 fails with the following message:
risk of data loss updating hyperconvergeds.hco.kubevirt.io: new CRD removes
version v1alpha1 that is listed as a stored version on the existing CRD
This bug does not affect clusters where OpenShift Virtualization was initially deployed at version 2.5.0 or later. (BZ#1986989)
As a workaround, remove the v1alpha1 version from the HyperConverged custom resource definition (CRD) and resume the upgrade process:
Open a proxy connection to the cluster by running the following command:
$ oc proxy &
Remove the v1alpha1 version from .status.storedVersions on the HyperConverged CRD by running the following command:
$ curl --header "Content-Type: application/json-patch+json" --request PATCH http://localhost:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/hyperconvergeds.hco.kubevirt.io/status --data '[{"op": "replace", "path": "/status/storedVersions", "value":["v1beta1"]}]'
Resume the upgrade process by running the following command:
$ curl --header "Content-Type: application/json-patch+json" --request PATCH http://localhost:8001/apis/operators.coreos.com/v1alpha1/namespaces/openshift-cnv/installplans/$(oc get installplan -n openshift-cnv | grep kubevirt-hyperconverged-operator.v4.8.0 | cut -d' ' -f1)/status --data '[{"op": "remove", "path": "/status/conditions"},{"op": "remove", "path": "/status/message"},{"op": "replace", "path": "/status/phase", "value": "Installing"}]'
Kill the oc proxy process by running the following command:
$ kill $(ps -C "oc proxy" -o pid=)
Optional: Monitor the upgrade status by running the following command:
$ oc get csv
If you delete OpenShift Virtualization-provided templates in version 4.8 or later, the templates are automatically recreated by the OpenShift Virtualization Operator. However, if you delete OpenShift Virtualization-provided templates created before version 4.8, those earlier templates are not automatically recreated after deletion. As a result, any edit or update to a virtual machine referencing a deleted earlier template will fail.
If a cloning operation is initiated before the source is available to be cloned, the operation stalls indefinitely. This is because the clone authorization expires before the cloning operation starts. (BZ#1855182)
As a workaround, delete the DataVolume object that is requesting the clone. When the source is available, recreate the DataVolume object that you deleted so that the cloning operation can complete successfully.
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath-provisioner storage or SR-IOV network interfaces. (BZ#1858777)
As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file:

Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.

Set the runStrategy field to Always.
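Combined, the reconfigured virtual machine spec might look like this sketch (the VM name is a placeholder):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm       # example name
spec:
  runStrategy: Always    # VM restarts automatically after a power-off
  template:
    spec:
      # evictionStrategy: LiveMigrate  <- field removed
      domain:
        devices: {}
```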
Live migration fails when nodes have different CPU models. Even in cases where nodes have the same physical CPU model, differences introduced by microcode updates have the same effect. This is because the default settings trigger host CPU passthrough behavior, which is incompatible with live migration. (BZ#1760028)
As a workaround, set the default CPU model by running the following command. You must make this change before starting the virtual machines that support live migration.
$ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[
{
"op": "add",
"path": "/spec/configuration/cpuModel",
"value": "<cpu_model>" (1)
}
]'
(1) Replace <cpu_model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
If you enter the wrong credentials for the RHV Manager while importing a RHV VM, the Manager might lock the admin user account because the vm-import-operator tries repeatedly to connect to the RHV API. (BZ#1887140)
To unlock the account, log in to the Manager and enter the following command:
$ ovirt-aaa-jdbc-tool user unlock admin
If you run OpenShift Virtualization 2.6.5 with OpenShift Container Platform 4.8, various issues occur. You can avoid these issues by upgrading OpenShift Virtualization to version 4.8.
In the web console, if you navigate to the Virtualization page and select Create → With YAML, the following error message is displayed:
The server doesn't have a resource type "kind: VirtualMachine, apiVersion: kubevirt.io/v1"
As a workaround, edit the VirtualMachine manifest so the apiVersion is kubevirt.io/v1alpha3. For example:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
annotations:
...
If you use the Customize wizard to create a VM, the following error message is displayed:
Error creating virtual machine
As a workaround, copy the manifest and create the virtual machine from the CLI.
When connecting to the VNC console by using the OpenShift Virtualization web console, the VNC console always fails to respond.
As a workaround, create the virtual machine from the CLI or upgrade to OpenShift Virtualization 4.8.