You can import a single Red Hat Virtualization (RHV) virtual machine into your OpenShift Container Platform cluster by using the virtual machine wizard or the CLI.

Prerequisites for importing a virtual machine

Importing a virtual machine from Red Hat Virtualization (RHV) into OpenShift Virtualization has the following prerequisites.

Red Hat Virtualization prerequisites

The Red Hat Virtualization (RHV) environment has the following prerequisites for VM import:

  • Network:

    • The VM network must be mapped to a single network in the OpenShift Container Platform environment. The networks must either have the same name or be mapped to each other.

    • The network interface must be e1000, rtl8139, or virtio.

  • Disk:

    • The disk interface must be sata, virtio_scsi, or virtio.

    • The disk must not be configured as a direct LUN.

    • The disk status must not be illegal or locked.

    • The storage type must be image.

    • SCSI reservation must be disabled.

    • ScsiGenericIO must be disabled.

  • Configuration:

    • If the VM uses GPU resources, the nodes providing the GPUs must be configured.

    • The VM must not be configured for vGPU resources.

    • The BIOS type must be Q35 Chipset with Legacy BIOS.

    • The custom emulated machine must be Q35.

      Virtual machines created with RHV 4.4 emulate the Intel Q35 chipset by default. However, you must update older virtual machines in the RHV 4.4 cluster.

    • The VM must not have snapshots with disks in an illegal state.

    • The VM must not have been created with OpenShift Container Platform and subsequently added to RHV.

    • The VM must not be configured for USB devices.

    • The watchdog model must not be diag288.

OpenShift Virtualization prerequisites

The OpenShift Virtualization environment has the following prerequisites for VM import:

  • You must be an admin user.

  • The local and shared persistent storage must support VM import.

OpenShift Virtualization storage feature matrix

The following table describes local and shared persistent storage that support VM import.

Table 1. OpenShift Virtualization storage feature matrix

                                                        RHV VM import
  OpenShift Container Storage: RBD block-mode volumes   No
  OpenShift Virtualization hostpath provisioner         No
  Other multi-node writable storage                     Yes [1]
  Other single-node writable storage                    Yes [2]

  1. PVCs must request a ReadWriteMany access mode.

  2. PVCs must request a ReadWriteOnce access mode.

Checking the default storage class

You must check the default storage class to ensure that it is NFS.

Cinder, the default storage class, does not support VM import. See (BZ#1856439) for details.

Checking the default storage class in the OpenShift Container Platform console

You can check the default storage class in the OpenShift Container Platform console. If the default storage class is not NFS, you can remove the default setting from the current default storage class and make an NFS storage class the default.

If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.

Procedure
  1. Navigate to Storage → Storage Classes.

  2. Check the default storage class in the Storage Classes list.

  3. If the default storage class is not NFS, edit the default storage class so that it is no longer the default:

    1. Click the Options menu of the default storage class and select Edit Storage Class.

    2. In the Details tab, click the Edit button beside Annotations.

    3. Click the Delete button on the right side of the storageclass.kubernetes.io/is-default-class annotation and then click Save.

  4. Change an existing NFS storage class to be the default:

    1. Click the Options menu of an existing NFS storage class and select Edit Storage Class.

    2. In the Details tab, click the Edit button beside Annotations.

    3. Enter storageclass.kubernetes.io/is-default-class in the Key field and true in the Value field and then click Save.

  5. Navigate to Storage → Storage Classes to verify that the NFS storage class is the only default storage class.

Checking the default storage class from the CLI

You can check the default storage class from the CLI.

If the default storage class is not NFS, you must change the default storage class to NFS and change the existing default storage class so that it is not the default. If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.

Procedure
  • Get the storage classes by entering the following command:

    $ oc get sc

The default storage class is displayed in the output:

Example output
NAME                PROVISIONER           RECLAIMPOLICY  VOLUMEBINDINGMODE     ALLOWVOLUMEEXPANSION
...
standard (default)  kubernetes.io/cinder  Delete         WaitForFirstConsumer  true
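The default class can also be pulled out of that listing programmatically. The sketch below applies the same awk filter you would use on live `oc get sc` output to stand-in data (the `example.com/nfs` provisioner is hypothetical), so it is self-contained:

```shell
# Live usage:  oc get sc | awk '/\(default\)/ {print $1}'
# Stand-in for the `oc get sc` listing shown above:
printf '%s\n' \
  'NAME                PROVISIONER           RECLAIMPOLICY' \
  'standard (default)  kubernetes.io/cinder  Delete' \
  'nfs                 example.com/nfs       Delete' \
| awk '/\(default\)/ {print $1}'
# → standard
```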

Changing the default StorageClass

If you are using AWS, use the following process to change the default StorageClass. This process assumes you have two StorageClasses defined, gp2 and standard, and you want to change the default StorageClass from gp2 to standard.

  1. List the StorageClass:

    $ oc get storageclass
    Example output
    NAME                 TYPE
    gp2 (default)        kubernetes.io/aws-ebs (1)
    standard             kubernetes.io/aws-ebs
    1 (default) denotes the default StorageClass.
  2. Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default StorageClass:

    $ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  3. Make another StorageClass the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true.

    $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  4. Verify the changes:

    $ oc get storageclass
    Example output
    NAME                 TYPE
    gp2                  kubernetes.io/aws-ebs
    standard (default)   kubernetes.io/aws-ebs
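To confirm the result without scanning the table, you can count the storage classes that carry the default annotation; exactly one should remain after the patch commands above. This sketch runs the jq filter on a stand-in of `oc get storageclass -o json` (live pipeline in the comment; jq availability is assumed):

```shell
# Live usage:
#   oc get storageclass -o json | jq -r '[.items[]
#     | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true")
#     | .metadata.name] | "\(length) default: \(join(","))"'
# Stand-in API response reflecting the state after the patches:
echo '{"items":[
  {"metadata":{"name":"gp2","annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}},
  {"metadata":{"name":"standard","annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}
]}' | jq -r '[.items[]
  | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true")
  | .metadata.name] | "\(length) default: \(join(","))"'
# → 1 default: standard
```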

Creating a ConfigMap for importing a Red Hat Virtualization virtual machine

You can create a ConfigMap to map the Red Hat Virtualization (RHV) virtual machine operating system to an OpenShift Virtualization template if you want to override the default vm-import-controller mapping or to add additional mappings.

The default vm-import-controller ConfigMap contains the following RHV operating systems and their corresponding common OpenShift Virtualization templates.

Table 2. Operating system and template mapping

  RHV VM operating system    OpenShift Virtualization template
  rhel_6_9_plus_ppc64        rhel6.9
  rhel_6_ppc64               rhel6.9
  rhel_6                     rhel6.9
  rhel_6x64                  rhel6.9
  rhel_7_ppc64               rhel7.7
  rhel_7_s390x               rhel7.7
  rhel_7x64                  rhel7.7
  rhel_8x64                  rhel8.1
  sles_11_ppc64              opensuse15.0
  sles_11                    opensuse15.0
  sles_12_s390x              opensuse15.0
  ubuntu_12_04               ubuntu18.04
  ubuntu_12_10               ubuntu18.04
  ubuntu_13_04               ubuntu18.04
  ubuntu_13_10               ubuntu18.04
  ubuntu_14_04_ppc64         ubuntu18.04
  ubuntu_14_04               ubuntu18.04
  ubuntu_16_04_s390x         ubuntu18.04
  windows_10                 win10
  windows_10x64              win10
  windows_2003               win10
  windows_2003x64            win10
  windows_2008R2x64          win2k8
  windows_2008               win2k8
  windows_2008x64            win2k8
  windows_2012R2x64          win2k12r2
  windows_2012x64            win2k12r2
  windows_2016x64            win2k16
  windows_2019x64            win2k19
  windows_7                  win10
  windows_7x64               win10
  windows_8                  win10
  windows_8x64               win10
  windows_xp                 win10
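The default mapping in Table 2 amounts to a simple lookup from RHV operating system name to template name. The sketch below is illustrative only, covering a subset of the table; the authoritative mapping is the vm-import-controller ConfigMap, not this code:

```shell
# Illustrative lookup over a subset of Table 2 (not the controller's actual code).
lookup_template() {
  case "$1" in
    rhel_6|rhel_6x64)  echo rhel6.9 ;;
    rhel_7x64)         echo rhel7.7 ;;
    rhel_8x64)         echo rhel8.1 ;;
    windows_2019x64)   echo win2k19 ;;
    *)                 echo "no default mapping: create a custom ConfigMap" ;;
  esac
}
lookup_template rhel_8x64   # → rhel8.1
```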

Procedure
  1. In a web browser, identify the REST API name of the RHV VM operating system by navigating to https://<RHV_Manager_FQDN>/ovirt-engine/api/vms/<VM_ID>. The operating system name appears in the <os> section of the XML output, as shown in the following example:

    ...
    <os>
    ...
    <type>rhel_8x64</type>
    </os>
  2. View a list of the available OpenShift Virtualization templates:

    $ oc get templates -n openshift --show-labels | tr ',' '\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\1#g' | sort -u
    Example output
    fedora31
    fedora32
    ...
    rhel8.1
    rhel8.2
    ...
  3. If an OpenShift Virtualization template that matches the RHV VM operating system does not appear in the list of available templates, create a template with the OpenShift Virtualization web console.

  4. Create a ConfigMap to map the RHV VM operating system to the OpenShift Virtualization template:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: os-configmap
      namespace: default (1)
    data:
      guestos2common: |
        "Red Hat Enterprise Linux Server": "rhel"
        "CentOS Linux": "centos"
        "Fedora": "fedora"
        "Ubuntu": "ubuntu"
        "openSUSE": "opensuse"
      osinfo2common: |
        "<rhv-operating-system>": "<vm-template>" (2)
    EOF
    1 Optional: You can change the value of the namespace parameter.
    2 Specify the REST API name of the RHV operating system and its corresponding VM template as shown in the following example.
    ConfigMap example
    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: os-configmap
      namespace: default
    data:
      osinfo2common: |
        "other_linux": "fedora31"
    EOF
  5. Verify that the custom ConfigMap was created:

    $ oc get cm -n default os-configmap -o yaml
  6. Edit the kubevirt-hyperconverged-operator.v2.4.1.yaml file:

    $ oc edit clusterserviceversion -n openshift-cnv kubevirt-hyperconverged-operator.v2.4.1
  7. Update the following parameters of the vm-import-operator deployment manifest:

                ...
                spec:
                  containers:
                  - env:
                    ...
                    - name: OS_CONFIGMAP_NAME
                      value: os-configmap (1)
                    - name: OS_CONFIGMAP_NAMESPACE
                      value: default (2)
    1 Add value: os-configmap to the name: OS_CONFIGMAP_NAME parameter.
    2 Optional: You can add this value if you changed the namespace in the ConfigMap.
  8. Save the kubevirt-hyperconverged-operator.v2.4.1.yaml file.

    Updating the vm-import-operator deployment updates the vm-import-controller ConfigMap.

  9. Verify that the template appears in the OpenShift Virtualization web console:

    1. Click Workloads → Virtualization from the side menu.

    2. Click the Virtual Machine Templates tab and find the template in the list.
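Step 5 above dumps the whole ConfigMap; to check just the custom mapping from the CLI, you can extract the `osinfo2common` key and grep for the operating system. The sketch below runs on stand-in data; against a live cluster you would feed it `oc get cm os-configmap -n default -o jsonpath='{.data.osinfo2common}'`:

```shell
# Stand-in for: oc get cm os-configmap -n default -o jsonpath='{.data.osinfo2common}'
osinfo2common='"other_linux": "fedora31"'
if printf '%s\n' "$osinfo2common" | grep -q '"other_linux"'; then
  echo "mapping present"
fi
# → mapping present
```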

Importing a virtual machine

Importing a Red Hat Virtualization virtual machine with the virtual machine wizard

You can import a Red Hat Virtualization (RHV) virtual machine by using the virtual machine wizard.

The OpenShift Virtualization storage class must be NFS. Cinder, the default storage class, is not supported for VM import.

Procedure
  1. In the web console, click Workloads → Virtual Machines.

  2. Click Create Virtual Machine and select Import with Wizard.

  3. Select Red Hat Virtualization (RHV) from the Provider list.

  4. Select Connect to New Instance or a saved RHV instance.

    • If you select Connect to New Instance, fill in the following fields:

      • API URL: For example, https://<RHV_Manager_FQDN>/ovirt-engine/api

      • CA certificate: Click Browse to upload the RHV Manager CA certificate or paste the CA certificate into the field.

        View the CA certificate by running the following command:

        $ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null

        The CA certificate is the second certificate in the output.

      • Username: RHV Manager user name, for example, admin@internal

      • Password: RHV Manager password

    • If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.

  5. Click Check and Save and wait for the connection to complete.

  6. Select a cluster and a virtual machine to import.

  7. Click Next.

  8. In the Review screen, review your settings.

  9. Optional: You can select Start virtual machine on creation.

  10. Click Edit to update the following settings:

    • General → Name: The VM name is limited to 63 characters. (BZ#1857165)

    • General → Description: Optional description of the VM.

    • Storage → Storage Class: Select NFS.

    • Networking → Network: You can select a network from a list of available NetworkAttachmentDefinition objects.

  11. Click Import, or click Review and Import if you have edited the import settings.

    A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads → Virtual Machines.
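The CA certificate field in step 4 expects the second certificate from the `openssl s_client -showcerts` output. An awk filter can isolate it; the sketch below runs the filter on stand-in PEM blocks so it is self-contained (live pipeline in the comment, with the hypothetical FQDN placeholder from the procedure):

```shell
# Live usage:
#   openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null 2>/dev/null \
#     | awk '/BEGIN CERTIFICATE/{n++} n==2 {print} /END CERTIFICATE/ && n==2 {exit}'
# Stand-in chain: first block is the server certificate, second is the CA certificate.
chain='-----BEGIN CERTIFICATE-----
server-cert-data
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ca-cert-data
-----END CERTIFICATE-----'
printf '%s\n' "$chain" \
  | awk '/BEGIN CERTIFICATE/{n++} n==2 {print} /END CERTIFICATE/ && n==2 {exit}'
```

The filter counts BEGIN markers and prints only while inside the second certificate block.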

Virtual machine wizard fields

  • Template: Template from which to create the virtual machine. Selecting a template automatically completes other fields.

  • Source:

    • PXE: Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster.

    • URL: Provision virtual machine from an image available from an HTTP or S3 endpoint.

    • Container: Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

    • Disk: Provision virtual machine from a disk.

  • Operating System: The primary operating system that is selected for the virtual machine.

  • Flavor (small, medium, large, tiny, Custom): Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system.

  • Memory: Size in GiB of the memory allocated to the virtual machine.

  • CPUs: The amount of CPU allocated to the virtual machine.

  • Workload Profile:

    • High Performance: A virtual machine configuration that is optimized for high-performance workloads.

    • Server: A profile optimized to run server workloads.

    • Desktop: A virtual machine configuration for use on a desktop.

  • Name: The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

  • Description: Optional description field.

  • Start virtual machine on creation: Select to automatically start the virtual machine upon creation.

Networking fields

  • Name: Name for the Network Interface Card.

  • Model: Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

  • Network: List of available NetworkAttachmentDefinition objects.

  • Type: List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

  • MAC Address: MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Storage fields

  • Source: Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs).

  • Name: Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

  • Size (GiB): Size, in GiB, of the disk.

  • Interface: Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

  • Storage Class: The StorageClass that is used to create the disk.

  • Advanced → Volume Mode: Defines whether the persistent volume uses a formatted filesystem or raw block state. Default is Filesystem.

  • Advanced → Access Mode: Access mode of the persistent volume. Supported access modes are ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Advanced storage settings

The following advanced storage settings are available for Blank, URL, and Attach Cloned Disk disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults ConfigMap.

  • Volume Mode:

    • Filesystem: Stores the virtual disk on a filesystem-based volume.

    • Block: Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

  • Access Mode:

    • Single User (RWO): The disk can be mounted as read/write by a single node.

    • Shared Access (RWX): The disk can be mounted as read/write by many nodes. This is required for some features, such as live migration of virtual machines between nodes.

    • Read Only (ROX): The disk can be mounted as read-only by many nodes.
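These two settings correspond directly to fields on the PersistentVolumeClaim that backs the disk. A minimal sketch, assuming a hypothetical claim name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-example        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany            # Shared Access (RWX); needed for live migration
  volumeMode: Block            # only if the underlying storage supports block mode
  resources:
    requests:
      storage: 10Gi            # hypothetical size
```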

Importing a Red Hat Virtualization virtual machine with the CLI

You can import a Red Hat Virtualization (RHV) virtual machine with the CLI by creating the Secret and VirtualMachineImport Custom Resources (CRs). The Secret CR stores the RHV Manager credentials and CA certificate. The VirtualMachineImport CR defines the parameters of the VM import process.

Optional: You can create a ResourceMapping CR that is separate from the VirtualMachineImport CR. A ResourceMapping CR provides greater flexibility, for example, if you import additional RHV VMs.

The default target storage class must be NFS. Cinder does not support RHV VM import. See (BZ#1856439).

Procedure
  1. Create the Secret CR by running the following command:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: rhv-credentials
      namespace: default (1)
    type: Opaque
    stringData:
      ovirt: |
        apiUrl: "https://<RHVM-FQDN>:8443/ovirt-engine/api" (2)
        username: admin@internal
        password: (3)
        caCert: |
          -----BEGIN CERTIFICATE-----
          (4)
          -----END CERTIFICATE-----
    EOF
    1 Optional. You can specify a different namespace in all the CRs.
    2 Specify the FQDN of the RHV Manager.
    3 Specify the password for admin@internal.
    4 Specify the RHV Manager CA certificate. You can obtain the CA certificate by running the following command:
    $ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
  2. Optional: Create the ResourceMapping CR if you want to separate the resource mapping from the VirtualMachineImport CR by running the following command:

    $ cat <<EOF | kubectl create -f -
    apiVersion: v2v.kubevirt.io/v1alpha1
    kind: ResourceMapping
    metadata:
      name: resourcemapping-example
      namespace: default
    spec:
      ovirt:
        networkMappings:
          - source:
              name: <rhv-logical-network>/<vnic-profile> (1)
            target:
              name: <target-network> (2)
            type: pod
        storageMappings: (3)
          - source:
              name: <rhv-storage-domain> (4)
            target:
              name: <target-storage-class> (5)
    EOF
    1 Specify the RHV logical network and vNIC profile.
    2 Specify the OpenShift Virtualization network.
    3 If storageMappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
    4 Specify the RHV storage domain.
    5 The default storage class must be NFS.
  3. Create the VirtualMachineImport CR by running the following command:

    $ cat <<EOF | oc create -f -
    apiVersion: v2v.kubevirt.io/v1alpha1
    kind: VirtualMachineImport
    metadata:
      name: vm-import
      namespace: default
    spec:
      providerCredentialsSecret:
        name: rhv-credentials
        namespace: default
    # resourceMapping: (1)
    #   name: resourcemapping-example
    #   namespace: default
      targetVmName: vm-example (2)
      startVm: true
      source:
        ovirt:
          vm:
            id: <source-vm-id> (3)
            name: <source-vm-name> (4)
          cluster:
            name: <source-cluster-name> (5)
          mappings: (6)
            networkMappings:
              - source:
                  name: <source-logical-network>/<vnic-profile> (7)
                target:
                  name: <target-network> (8)
                type: pod
            storageMappings: (9)
              - source:
                  name: <source-storage-domain> (10)
                target:
                  name: <target-storage-class> (11)
            diskMappings:
              - source:
                  id: <source-vm-disk-id> (12)
                target:
                  name: <target-storage-class> (13)
    EOF
    1 If you create a ResourceMapping CR, uncomment the resourceMapping section.
    2 Specify the target VM name.
    3 The UUID of the source VM. If you specify the source VM ID, for example, 80554327-0569-496b-bdeb-fcbbf52b827b, the source VM name and cluster are ignored. Alternatively, you can specify the source VM name and cluster.
    4 If you specify the source VM name, you must also specify the source cluster. Do not specify the source VM ID.
    5 If you specify the source cluster, you must also specify the source VM name. Do not specify the source VM ID.
    6 If you create a ResourceMapping CR, comment out the mappings section.
    7 Specify the logical network and vNIC profile of the source VM.
    8 Specify the OpenShift Virtualization network.
    9 If storageMappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
    10 Specify the source storage domain.
    11 Specify the target storage class, which must be NFS.
    12 Specify the source VM disk UUID, for example, 8181ecc1-5db8-4193-9c92-3ddab3be7b05.
    13 Specify the disk target storage class, which must be NFS.
  4. Follow the progress of the virtual machine import to verify that the import was successful:

    $ oc get vmimports vm-import -n default

    The output indicating a successful import resembles the following example:

    Example output
    ...
    status:
      conditions:
      - lastHeartbeatTime: "2020-07-22T08:58:52Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: Validation completed successfully
        reason: ValidationCompleted
        status: "True"
        type: Valid
      - lastHeartbeatTime: "2020-07-22T08:58:52Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: 'VM specifies IO Threads: 1, VM has NUMA tune mode specified: interleave'
        reason: MappingRulesVerificationReportedWarnings
        status: "True"
        type: MappingRulesVerified
      - lastHeartbeatTime: "2020-07-22T08:58:56Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: Copying virtual machine disks
        reason: CopyingDisks
        status: "True"
        type: Processing
      dataVolumes:
      - name: fedora32-b870c429-11e0-4630-b3df-21da551a48c0
      targetVmName: fedora32
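To script the progress check instead of reading the conditions by eye, you can filter the status with jq. The sketch below applies the filter to a stand-in of `oc get vmimports vm-import -n default -o json` that mirrors the example output above (live pipeline in the comment; jq availability is assumed):

```shell
# Live usage:
#   oc get vmimports vm-import -n default -o json \
#     | jq -r '.status.conditions[] | "\(.type): \(.reason)"'
# Stand-in status mirroring the example output above:
echo '{"status":{"conditions":[
  {"type":"Valid","reason":"ValidationCompleted","status":"True"},
  {"type":"Processing","reason":"CopyingDisks","status":"True"}
]}}' | jq -r '.status.conditions[] | "\(.type): \(.reason)"'
# → Valid: ValidationCompleted
# → Processing: CopyingDisks
```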

Canceling a virtual machine import

You can cancel a virtual machine import in progress by using the web console.

Procedure
  1. Click Workloads → Virtual Machines.

  2. Click the Options menu of the virtual machine you are importing and select Delete Virtual Machine.

  3. In the Delete Virtual Machine window, click Delete.

    The virtual machine is removed from the list of virtual machines.

Troubleshooting a virtual machine import

Logs

You can check the VM Import Controller Pod log for errors.

Procedure
  1. View the VM Import Controller Pod name by running the following command:

    $ oc get pods -n <namespace> | grep import (1)
    1 Specify the namespace of your imported virtual machine.
    Example output
    vm-import-controller-f66f7d-zqkz7            1/1     Running     0          4h49m
  2. View the VM Import Controller Pod log by running the following command:

    $ oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> (1)
    1 Specify the VM Import Controller Pod name and the namespace.

Error messages

The following error messages might appear:

  • The following error message is displayed in the VM Import Controller Pod log if the system settings of the VM do not emulate the Intel Q35 chipset:

    The virtual machine could not be imported.
    MappingRulesVerificationFailed: VM uses unsupported bios type: i440fx_sea_bios
  • The following error message is displayed in the VM Import Controller Pod log if the target VM name exceeds 63 characters (BZ#1857165):

    Message:               Error while importing disk image
    Reason:                ProcessingFailed
  • The following error message is displayed in the VM Import Controller Pod log and the progress bar stops at 10% if the OpenShift Virtualization storage PV is not suitable:

    Failed to bind volumes: provisioning failed for PVC

    You must use the NFS storage class. Cinder storage is not supported. (BZ#1857784)

  • The following error message is displayed in the Virtual Machines tab of the Virtualization screen in the OpenShift Virtualization console if the vm-import-controller cannot find a matching template for the RHV VM operating system:

    The virtual machine could not be imported.
    VMTemplateMatchingFailed: Couldn't find matching template

    You can perform the following actions to fix this problem:

    • Change the RHV VM operating system to an operating system that exists in the default vm-import-controller ConfigMap.

    • If you created a custom ConfigMap, check the ConfigMap to verify that the RHV VM operating system is mapped to a matching OpenShift Virtualization common template.

    • If there is no matching OpenShift Virtualization common template, create an appropriate VM template in the OpenShift Virtualization console and then create a custom ConfigMap to map the RHV VM operating system to the new template.

  • The import hangs at the Starting Red Hat Virtualization (RHV) controller message in the OpenShift Virtualization console if a non-admin user tries to import a VM. Only an admin user has permission to import a VM.