Product overview

Introduction to Container-native Virtualization

Container-native Virtualization is an add-on to OpenShift Container Platform that allows virtual machine workloads to run and be managed alongside container workloads. You can create virtual machines from disk images imported using the containerized data importer (CDI) controller, or from scratch within OpenShift Container Platform.

Container-native Virtualization introduces two new objects to OpenShift Container Platform:

  • Virtual Machine: The virtual machine in OpenShift Container Platform

  • Virtual Machine Instance: A running instance of the virtual machine

With the Container-native Virtualization add-on, virtual machines run in pods and have the same network and storage capabilities as standard pods.

Existing virtual machine disks are imported into persistent volumes (PVs), which are made accessible to Container-native Virtualization virtual machines using persistent volume claims (PVCs). In OpenShift Container Platform, the virtual machine object can be modified or replaced as needed, without affecting the persistent data stored on the PV.

Container-native Virtualization is currently a Technology Preview feature. For details about Red Hat support for Container-native Virtualization, see the Container-native Virtualization - Technology Preview Support Policy.

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Before you begin

OpenShift Container Platform client commands

The oc client is a command-line utility for managing OpenShift Container Platform resources. The following table contains the oc commands that you use with Container-native Virtualization.

Table 1. oc commands
Command Description

oc get <object_type>

Display a list of objects for the specified object type in the project.

oc describe <object_type> <resource_name>

Display details of the specific resource.

oc create -f <config>

Create a resource from a filename or from stdin.

oc process -f <config>

Process a template into a configuration file. Templates have "parameters", which are either generated on creation or set by the user, as well as metadata describing the template.

oc apply -f <file>

Apply a configuration to a resource by filename or stdin.
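
For example, to list the virtual machines in the current project and inspect one of them (a minimal illustration, assuming a virtual machine named fedora-vm already exists):

$ oc get vm
$ oc describe vm fedora-vm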

See the OpenShift Container Platform CLI Reference Guide, or run the oc --help command, for definitive information on the OpenShift Container Platform client.

Virtctl commands

The virtctl client is a command-line utility for managing Container-native Virtualization resources. The following table contains the virtctl commands used throughout this document.

Table 2. Virtctl client
Command Description

virtctl start <vm>

Start a virtual machine, creating a virtual machine instance.

virtctl stop <vm>

Stop a virtual machine, stopping its running virtual machine instance.

virtctl expose <vm>

Create a service that forwards a designated port of a virtual machine or virtual machine instance and exposes the service on the specified port of the node.

virtctl console <vmi>

Connect to a serial console of a virtual machine instance.

virtctl vnc <vmi>

Open a VNC connection to a virtual machine instance.

virtctl image-upload <…>

Upload a virtual machine disk from a client machine to the cluster.
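
For example, to start a virtual machine and then connect to its serial console (assuming a virtual machine named fedora-vm; starting the virtual machine creates a virtual machine instance with the same name):

$ virtctl start fedora-vm
$ virtctl console fedora-vm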

Ensure correct OpenShift Container Platform project

Before you modify objects using the shell or web console, ensure you use the correct project. In the shell, use the following commands:

Command Description

oc projects

List all available projects. The current project is marked with an asterisk.

oc project <project_name>

Switch to another project.

oc new-project <project_name>

Create a new project.

In the web console, click the Project list and select the appropriate project or create a new one.

Importing and uploading virtual machines and disk images

Uploading a local disk image to a new PVC

You can use virtctl image-upload to upload a virtual machine disk image from a client machine to your OpenShift Container Platform cluster. This creates a PVC that can be associated with a virtual machine after the upload has completed.

Prerequisites
  • A virtual machine disk image, in RAW or QCOW2 format. It can be compressed using xz or gzip.

  • kubevirt-virtctl must be installed on the client machine.

Procedure
  1. Identify the following items:

    • File location of the VM disk image you want to upload

    • Name and size desired for the resulting PVC

  2. Expose the cdi-uploadproxy service so that you can upload data to your cluster:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Route
    metadata:
      name: cdi-uploadproxy
      namespace: kube-system
    spec:
      to:
        kind: Service
        name: cdi-uploadproxy
      tls:
        termination: passthrough
    EOF
  3. Use the virtctl image-upload command to upload your VM image, making sure to include your chosen parameters. For example:

    $ virtctl image-upload --uploadproxy-url=https://$(oc get route cdi-uploadproxy -o=jsonpath='{.status.ingress[0].host}') --pvc-name=upload-fedora-pvc --pvc-size=10Gi --image-path=/images/fedora28.qcow2

    To allow insecure server connections when using HTTPS, use the --insecure parameter.

  4. To verify that the PVC was created, view all PVC objects:

    $ oc get pvc

Next, you can create a virtual machine object to bind to the PVC.
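
The following is a minimal sketch of such a virtual machine, based on the vm.yaml template in the Reference section and assuming the upload-fedora-pvc name from the example above; adjust the names and resource requests for your environment:

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/domain: fedora-vm
  name: fedora-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: fedora-vm
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
            volumeName: root
        resources:
          requests:
            memory: 1Gi
      terminationGracePeriodSeconds: 0
      volumes:
      - name: root
        persistentVolumeClaim:
          claimName: upload-fedora-pvc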

Importing an existing virtual machine image with DataVolumes

DataVolume objects provide orchestration of import, clone, and upload operations associated with an underlying PVC. DataVolumes are integrated with KubeVirt and they can prevent a virtual machine from being started before the PVC has been prepared.

Prerequisites
  • The virtual machine disk can be RAW or QCOW2 format and can be compressed using xz or gzip.

  • The disk image must be available at either an HTTP or S3 endpoint.

Procedure
  1. Identify an HTTP or S3 file server that hosts the virtual disk image that you want to import. You need the complete URL of the image, in either http:// or s3:// format.

  2. If your data source requires authentication credentials, edit the endpoint-secret.yaml file and apply it to the cluster:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <endpoint-secret>
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: ""  # <optional: your key or user name, base64 encoded>
      secretKey:    "" # <optional: your secret or password, base64 encoded>
    $ oc apply -f endpoint-secret.yaml
  3. Edit the VM configuration file, optionally including the secretRef parameter. This example uses a Fedora image:

    apiVersion: kubevirt.io/v1alpha2
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
            storageClassName: local
          source:
            http:
              url: https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
              secretRef: "" # Optional
        status: {}
      running: false
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
                volumeName: datavolumevolume1
            machine:
              type: ""
            resources:
              requests:
                memory: 64M
          terminationGracePeriodSeconds: 0
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumevolume1
    status: {}
  4. Create the virtual machine:

    $ oc create -f vm-<name>-datavolume.yaml

    The virtual machine and a DataVolume are now created. The CDI controller creates an underlying PVC with the correct annotation and begins the import process. When the import completes, the DataVolume status changes to Succeeded and the virtual machine is allowed to start.

    DataVolume provisioning happens in the background, so there is no need to monitor it. You can start the virtual machine immediately; it will not run until the import is complete.

Optional verification steps
  1. Run $ oc get pods and look for the importer pod. This pod downloads the image from the specified URL and stores it on the provisioned PV.

  2. Monitor the DataVolume status until it shows Succeeded.

    $ oc describe dv <data-label> (1)
    1 The data label for the DataVolume specified in the VirtualMachine configuration file.
  3. To verify that provisioning is complete and that the VMI has started, try accessing its serial console:

    $ virtctl console <vm-fedora-datavolume>
Importing a virtual machine disk to a PVC

The process of importing a virtual machine disk is handled by the CDI controller. When a PVC is created with special cdi.kubevirt.io/storage.import annotations, the controller creates a short-lived import pod that attaches to the PV and downloads the virtual disk image into the PV.

Prerequisites
  • The virtual machine disk can be RAW or QCOW2 format and can be compressed using xz or gzip.

  • The disk image must be available at either an HTTP or S3 endpoint.

For locally provisioned storage, the PV needs to be created before the PVC. This is not required for OpenShift Container Storage, for which the PVs are created dynamically.

Procedure
  1. Identify an HTTP or S3 file server hosting the virtual disk image that you want to import. You need the complete URL of the image, in either http:// or s3:// format.

  2. If the file server requires authentication credentials, edit the endpoint-secret.yaml file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: endpoint-secret
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: ""  # <optional: your key or user name, base64 encoded>
      secretKey:    "" # <optional: your secret or password, base64 encoded>
    1. Save the value of metadata.name to use with the cdi.kubevirt.io/storage.import.secret annotation in your PVC configuration file.

      For example: cdi.kubevirt.io/storage.import.secret: endpoint-secret

  3. Apply endpoint-secret.yaml to the cluster:

    $ oc apply -f endpoint-secret.yaml
  4. Edit the PVC configuration file, making sure to include the required annotations.

    For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: "example-vmdisk-volume"
      labels:
       app: containerized-data-importer
      annotations:
        cdi.kubevirt.io/storage.import.endpoint: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2" (1)
        cdi.kubevirt.io/storage.import.secret: "endpoint-secret" (2)
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    1 Endpoint annotation for the import image URL
    2 Endpoint annotation for the authorization secret
  5. Create the PVC using the oc CLI:

    $ oc create -f <pvc.yaml> (1)
    1 The PersistentVolumeClaim file name.

    After the disk image is successfully imported into the PV, the import pod expires, and you can bind the PVC to a virtual machine object within OpenShift Container Platform.

Next, create a virtual machine object to bind to the PVC.
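
A minimal sketch of the volume entry that binds this PVC to a virtual machine, assuming the example-vmdisk-volume name from the example above and following the vm.yaml template in the Reference section:

volumes:
- name: root
  persistentVolumeClaim:
    claimName: example-vmdisk-volume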

Importing a virtual machine into a template from the web console
This procedure has been deprecated from 1.3 onwards.

The Import Virtual Machine Ansible playbook has the option to import a virtual machine as a template object, which you can use to create virtual machines.

Templates are useful when you want to create multiple virtual machines from the same base image with minor changes to resource parameters.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11

  • Container-native Virtualization version 1.3

Procedure
  1. Ensure you are in the correct project. If not, click the Project list and select the appropriate project or create a new one.

  2. Click Catalog on the side menu.

  3. Click the Virtualization tab to filter the catalog.

  4. Click Import Virtual Machine and click Next.

  5. Select Import as a template from URL and click Next.

  6. Enter the required parameters. For example:

    Add to Project: template-test
    OpenShift Admin Username: cnv-admin
    OpenShift Admin Password: password
    ReType OpenShift Admin Password: password
    Disk Image URL: https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
    Operating system type: linux
    Template Name: fedora
    Number of Cores: 1
    Memory (MiB): 1024
    Disk Size (GiB) (leave at 0 to auto detect size): 0
    Storage Class (leave empty to use the default storage class):
  7. Click Create to begin importing the virtual machine.

A temporary pod with the generated name importer-template-<name>-dv-01-<random> is created to handle the process of importing the data and creating the template. Upon completion, this temporary pod is discarded and the <name>-template (fedora-template in the previous step) becomes visible in the catalog and can be used to create virtual machines.

You may need to refresh your browser to see the template upon completion. This is due to a limitation of the Template Service Broker.

Cloning an existing PVC and creating a virtual machine using a dataVolumeTemplate

You can create a virtual machine that clones the PVC of an existing virtual machine into a DataVolume. By referencing a dataVolumeTemplate in the virtual machine spec, the source PVC is cloned to a DataVolume, which is then automatically used for the creation of the virtual machine.

When a DataVolume is created as part of the DataVolumeTemplate of a virtual machine, the lifecycle of the DataVolume is then dependent on the virtual machine: If the virtual machine is deleted, the DataVolume and associated PVC will also be deleted.
Prerequisites
  • A PVC of an existing virtual machine disk. The associated virtual machine must be powered down, or the clone process will be queued until the PVC is available.

Procedure
  1. Examine the DataVolume you want to clone to identify the name and namespace of the associated PVC.

  2. Create a YAML file for a VirtualMachine object. The following virtual machine example, <vm-dv-clone>, clones <my-favorite-vm-disk> (located in the <source-namespace> namespace) and creates the 2Gi <favorite-clone> DataVolume, which the virtual machine references through the root volume.

    For example:

    apiVersion: kubevirt.io/v1alpha2
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
      name: vm-dv-clone (1)
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-dv-clone
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: registry-disk
                volumeName: root
            resources:
              requests:
                memory: 64M
          volumes:
          - dataVolume:
              name: favorite-clone
            name: root
      dataVolumeTemplates:
      - metadata:
          name: favorite-clone
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
          source:
            pvc:
              namespace: "source-namespace"
              name: "my-favorite-vm-disk"
    1 The virtual machine to create.
  3. Create the virtual machine with the PVC-cloned DataVolume:

    $ oc create -f <vm-clone-dvt>.yaml
Cloning the PVC of an existing virtual machine disk

You can clone a PVC of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.

When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine: If the virtual machine is deleted, neither the DataVolume nor its associated PVC will be deleted.
Prerequisites
  • A PVC of an existing virtual machine disk. The associated virtual machine must be powered down, or the clone process will be queued until the PVC is available.

Procedure
  1. Examine the DataVolume you want to clone to identify the name and namespace of the associated PVC.

  2. Create a YAML file for a DataVolume object that specifies the following parameters:

    metadata: name

    The name of the new DataVolume.

    source: pvc: namespace

    The namespace in which the source PVC exists.

    source: pvc: name

    The name of the source PVC.

    storage

    The size of the new DataVolume. Be sure to allocate enough space, or the cloning operation fails. The size must be the same as or larger than that of the source PVC.

    For example:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: cloner-datavolume
    spec:
      source:
        pvc:
          namespace: "<source-namespace>"
          name: "<my-favorite-vm-disk>"
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
  3. Start the PVC clone by creating the DataVolume:

    $ oc create -f <datavolume>.yaml

DataVolumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new DataVolume while the clone operation is still in progress.
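
To check on the clone operation directly, inspect the DataVolume (assuming the cloner-datavolume name from the example above); its status changes to Succeeded when the clone completes:

$ oc get dv cloner-datavolume
$ oc describe dv cloner-datavolume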

Creating virtual machines in OpenShift Container Platform

Creating a new virtual machine using the web console

The Container-native Virtualization user interface is a specific build of the OpenShift Container Platform web console that contains the core features needed for virtualization use cases, including a virtualization navigation item.

kubevirt-web-ui is installed by default during the kubevirt-apb deployment.

You can create a new virtual machine from the Container-native Virtualization web console. The VM can be configured with an interactive wizard, or you can bring your own YAML file.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

  • Container-native Virtualization, version 1.3 or newer

  • (Optional) An existing PVC to attach to the virtual machine

Procedure
  1. Access the web UI at <kubevirt-web-ui.your.app.subdomain.host.com>. Log in by using your OpenShift Container Platform credentials.

  2. Open the Workloads menu and select the Virtual Machines menu item to list all available virtual machines.

  3. Click Create Virtual Machine, which has two options:

    Create with YAML

    Allows you to paste and submit a YAML file describing the VM.

    Create with Wizard

    Takes you through the process of VM creation.

  4. Select Create with Wizard. Input the desired VM name, description, and namespace where you want the VM to be created.

  5. Next, select the provisioning source from the following options:

    PXE

    The virtual machine will be booted over the network. The network interface and logical network will be configured later in the Networking tab.

    URL

    Provide the address of a disk image accessible from OpenShift Container Platform. RAW and QCOW2 formats are supported. Either format can be compressed with gzip or xz.

    Registry

    Provide a container image that contains a bootable operating system, located in a registry accessible from OpenShift Container Platform. A registryDisk volume is ephemeral, and the volume will be discarded when the virtual machine is stopped, restarted, or deleted.

    • Example: <kubevirt/fedora-cloud-registry-disk-demo>

    Template

    Create a new virtual machine from a VM template that you imported by using the Import VM APB. No other templates are supported.

  6. Next, select the operating system, flavor, and workload profile for the VM. If you want to use cloud-init or start the virtual machine on creation, select those options. Then, proceed to the next screen.

  7. (Optional) On the networking screen, you can create, delete, or rearrange network interfaces. The interfaces can be connected to the logical network using NetworkAttachmentDefinition.

  8. (Optional) On the storage screen, you can create, delete, or rearrange the virtual machine disks. If you want to attach a PVC from this screen, it must already exist.

  9. Once you have created the VM, it will be visible under Workloads > Virtual Machines. You can start the new VM from the "cog" icon to the left of the VM entry. To see additional details about the VM, click its name.

  10. To interact with the operating system, click the Consoles tab. This connects to the VNC console of the virtual machine.

Creating a new virtual machine from the CLI

The spec object of the VirtualMachine configuration file references the virtual machine settings, such as the number of cores and the amount of memory, the disk type, and the volumes to use.

Attach the virtual machine disk to the virtual machine by referencing the relevant PVC claimName as a volume.

ReplicaSet is not currently supported in Container-native Virtualization.

See the Reference section for information about volume types and sample configuration files.

Table 3. Domain settings
Setting Description

cores

The number of cores inside the virtual machine. Must be a value greater than or equal to 1.

memory

The amount of RAM allocated to the virtual machine by the node. Specify the unit, for example M for megabytes or Gi for gibibytes.

disks: volumeName

The name of the volume that the disk references. Must match the name of a volume.

Table 4. Volume settings
Setting Description

name

The name of the volume. Must be a DNS_LABEL and unique within the virtual machine.

persistentVolumeClaim

The PVC to attach to the virtual machine. The PVC referenced by claimName must exist in the same project as the virtual machine.

See the kubevirt API Reference for a definitive list of virtual machine settings.

To create a virtual machine with the OpenShift Container Platform client:

$ oc create -f <vm.yaml>

Virtual machines are created in a Stopped state. Run a virtual machine instance by starting the virtual machine.
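
For example, assuming a virtual machine named fedora-vm was created from the configuration file, you can start it and confirm that a virtual machine instance is running:

$ virtctl start fedora-vm
$ oc get vmi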

Creating a virtual machine from a template using the web console

You can use templates to create virtual machines, removing the need to download a disk image for each virtual machine. The PVC created for the template is cloned, allowing you to change the resource parameters for each new virtual machine.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

  • Container-native Virtualization version 1.3 or newer

Procedure
  1. Ensure you are in the correct project. If not, click the Project list and select the appropriate project or create a new one.

  2. Click Catalog on the side menu.

  3. Click the Virtualization tab to filter the catalog.

  4. Select the template and click Next.

  5. Enter the required parameters. For example:

    Add to Project: <template-test>
    NAME: <fedora-1>
    MEMORY: <2048Mi>
    CPU_CORES: <2>
  6. Click Next.

  7. Choose whether or not to create a binding for the virtual machine. Bindings create a secret containing the necessary information for another application to use the virtual machine service. Bindings can also be added after the virtual machine has been created.

  8. Click Create to begin creating the virtual machine.

Temporary pods with the generated names clone-source-pod-<random> and clone-target-pod-<random> are created in the template project and the virtual machine project, respectively, to handle the creation of the virtual machine and the corresponding PVC. The PVC is given a generated name of vm-<vm-name>-disk-01 or vm-<vm-name>-dv-01. Upon completion, the temporary pods are discarded, and the virtual machine (<fedora-1> in the above example) is ready in a Stopped state.
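
To confirm that the new virtual machine and its PVC exist, you can list them from the CLI (names follow the example above):

$ oc get vm fedora-1
$ oc get pvc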

Using virtual machines in OpenShift Container Platform

Accessing the serial console of a VMI

The virtctl console command opens a serial console to the specified virtual machine instance.

Prerequisites
  • The virtual machine instance you want to access must be running

Procedure
  1. Connect to the serial console with virtctl:

    $ virtctl console <VMI>
Accessing the graphical console of a VMI with VNC

The virtctl client utility can use remote-viewer to open a graphical console to a running virtual machine instance. The remote-viewer utility is installed with the virt-viewer package.

Prerequisites
  • virt-viewer must be installed.

  • The virtual machine instance you want to access must be running.

If you use virtctl via SSH on a remote machine, you must forward the X session to your machine for this procedure to work.

Procedure
  1. Connect to the graphical interface with the virtctl utility:

    $ virtctl vnc <VMI>
  2. If the command fails, try using the -v flag to collect troubleshooting information:

    $ virtctl vnc <VMI> -v 4
Accessing a virtual machine instance via SSH

You can use SSH to access a virtual machine, but first you must expose port 22 on the VM.

The virtctl expose command forwards a virtual machine instance port to a node port and creates a service that enables access. The following example creates the fedora-vm-ssh service, which forwards port 22 of the <fedora-vm> virtual machine to a port on the node:

Prerequisites
  • The virtual machine instance you want to access must be running.

Procedure
  1. Run the following command to create the fedora-vm-ssh service:

    $ virtctl expose vm <fedora-vm> --port=20022 --target-port=22 --name=fedora-vm-ssh --type=NodePort
  2. Check the service to find out which port the service acquired:

    $ oc get svc
    NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
    fedora-vm-ssh   NodePort   127.0.0.1      <none>        20022:32551/TCP   6s
  3. Log in to the virtual machine instance via SSH, using the IP address of the node and the port that you found in step 2:

    $ ssh username@<node IP> -p 32551
Controlling virtual machines

Virtual machines can be started and stopped, depending on the current state of the virtual machine. The option to restart VMs is available in the Web Console only.

Use the virtctl client utility to change the state of the virtual machine, open virtual console sessions with the virtual machines, and expose virtual machine ports as services.

The virtctl syntax is: virtctl <action> <VM-name>

You can only control objects in the project you are currently working in, unless you specify the -n <project_name> option.

Examples:

$ virtctl start example-vm
$ virtctl stop example-vm

oc get vm lists the virtual machines in the project. oc get vmi lists running virtual machine instances.

Deleting virtual machines in OpenShift Container Platform

Deleting virtual machines and virtual machine PVCs

When you delete a virtual machine, the PVC it uses is unbound. If you do not plan to bind this PVC to a different VM, delete it, too.

You can only delete objects in the project you are currently working in, unless you specify the -n <project_name> option.

$ oc delete vm fedora-vm
$ oc delete pvc fedora-vm-pvc

Advanced virtual machine configuration

Using an Open vSwitch bridge as the network source for a VM

With Container-native Virtualization, you can connect a virtual machine instance to an Open vSwitch bridge that is configured on the node.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

Procedure
  1. Prepare the cluster host networks (optional).

    If the host network needs additional configuration changes, such as bonding, refer to the Red Hat Enterprise Linux networking guide.

  2. Configure interfaces and bridges on all cluster hosts.

    On each node, choose an interface connected to the desired network. Then, create an Open vSwitch bridge and specify the interface you chose as the bridge’s port.

    In this example, we create bridge br1 and connect it to interface eth1. This bridge must be configured on all nodes. If it is only available on a subset of nodes, make sure that VMIs have nodeSelector constraints in place.

    Any connections to eth1 are lost once the interface is assigned to the bridge, so another interface must be present on the host.

    $ ovs-vsctl add-br br1
    $ ovs-vsctl add-port br1 eth1
    $ ovs-vsctl show
    8d004495-ea9a-44e1-b00c-3b65648dae5f
        Bridge br1
            Port br1
                Interface br1
                    type: internal
            Port "eth1"
                Interface "eth1"
        ovs_version: "2.8.1"
  3. Configure the network on the cluster.

    L2 networks are treated as cluster-wide resources. Define the network in a YAML file by using the NetworkAttachmentDefinition CRD.

    The NetworkAttachmentDefinition CRD object contains information about pod-to-network attachment. In the following example, there is an attachment to Open vSwitch bridge br1 and traffic is tagged to VLAN 100.

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vlan-100-net-conf
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "type": "ovs",
          "bridge": "br1",
          "vlan": 100
        }'

    "vlan" is optional. If omitted, the VMI will be attached through a trunk.

  4. Edit the virtual machine instance configuration file to include the details of the interface and network.

    Specify that the network is connected to the previously created NetworkAttachmentDefinition. In this scenario, vlan-100-net is connected to the NetworkAttachmentDefinition called vlan-100-net-conf:

    networks:
    - name: default
      pod: {}
    - name: vlan-100-net
      multus:
        networkName: vlan-100-net-conf
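
    The matching interfaces list under the domain devices section follows the same pattern as the PXE boot example later in this guide; this is a sketch using the interface names from this scenario:

    interfaces:
    - bridge: {}
      name: default
    - bridge: {}
      name: vlan-100-net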

    After you start the VMI, the eth0 interface connects to the default cluster network and eth1 connects to VLAN 100 using bridge br1 on the node running the VMI.

PXE booting with a specified MAC address

PXE booting, or network booting, is supported in Container-native Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

The Reference section has a configuration file template for PXE booting.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

  • A configured interface that allows PXE booting

Procedure
  1. Configure a PXE network on the cluster:

    1. Create a NetworkAttachmentDefinition for the PXE network, pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: '{
            "cniVersion": "0.3.1",
            "type": "ovs",
            "bridge": "br1"
          }'

      In this example, the VMI will be attached through a trunk port to the Open vSwitch bridge <br1>.

    2. Create Open vSwitch bridge <br1> and connect it to interface <eth1>, which is connected to a network that allows for PXE booting:

      $ ovs-vsctl add-br br1
      $ ovs-vsctl add-port br1 eth1
      $ ovs-vsctl show
      8d004495-ea9a-44e1-b00c-3b65648dae5f
          Bridge br1
              Port br1
                  Interface br1
                      type: internal
              Port "eth1"
                  Interface "eth1"
          ovs_version: "2.8.1"

      This bridge must be configured on all nodes. If it is only available on a subset of nodes, make sure that VMIs have nodeSelector constraints in place.

  2. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. However, note that at this time, MAC addresses assigned automatically are not persistent.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - bridge: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after OS provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: registrydisk
          volumeName: registryvolume
          bootOrder: 2
    3. Specify that the network is connected to the previously created NetworkAttachmentDefinition. In this scenario, <pxe-net> is connected to the NetworkAttachmentDefinition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  3. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml
    virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
  4. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  5. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  6. Watch the boot screen to verify that the PXE boot is successful.

  7. Log in to the VMI:

    $ virtctl console vmi-pxe-boot
  8. Verify the interfaces and MAC address on the VM, and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.

    $ ip addr
    ...
    3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
       link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
Configuring guest memory overcommitment

If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances. Enabling memory overcommitment means you can maximize resources that are normally reserved for the host.

For example, if the host has 32 GB RAM, you can leverage memory overcommitment to fit 8 VMs with 4 GB RAM each. This works under the assumption that the VMs will not use all of their memory at the same time.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

Procedure

To explicitly tell the VMI that it has more memory available than what has been requested from the cluster, set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory. This process is called memory overcommitment.

In this example, <1024M> is requested from the cluster, but the VMI is told that it has <2048M> available. As long as there is enough free memory available on the node, the VMI will consume up to 2048M.

kind: VirtualMachine
spec:
  template:
    domain:
      resources:
        requests:
          memory: <1024M>
      memory:
        guest: <2048M>

The same eviction rules as those for pods apply to the VMI if the node gets under memory pressure.

Disabling guest memory overhead accounting
This procedure is only useful in certain use-cases and should only be attempted by advanced users.

A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure wrapping each VirtualMachineInstance process.

Though it is not usually advisable, it is possible to increase the VMI density on the node by disabling guest memory overhead accounting.

Prerequisites
  • A cluster running OpenShift Container Platform 3.11 or newer

Procedure

To disable guest memory overhead accounting, edit the YAML configuration file and set the overcommitGuestOverhead value to true. This parameter is false by default.

kind: VirtualMachine
spec:
  template:
    domain:
      resources:
        overcommitGuestOverhead: true
        requests:
          memory: 1024M

If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits (if present).

Events, logs, errors, and metrics

Events

OpenShift Container Platform events are records of important life-cycle information in a project and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.

To retrieve the events for the project, run:

$ oc get events

Events are also included in the resource description, which you can retrieve by using the OpenShift Container Platform client.

$ oc describe <resource_type> <resource_name>
$ oc describe vm <fedora-vm>
$ oc describe vmi <fedora-vm>
$ oc describe pod virt-launcher-fedora-vm-<random>

Resource descriptions also include configuration, scheduling, and status details.

Logs

Logs are collected for OpenShift Container Platform builds, deployments, and pods. Virtual machine logs can be retrieved from the virtual machine launcher pod.

$ oc logs virt-launcher-fedora-vm-zzftf

The -f option follows the log output in real time, which is useful for monitoring progress and error checking.

If the launcher pod is failing to start, use the --previous option to see the logs of the last attempt.
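
For example, using the pod name from the earlier example:

$ oc logs -f virt-launcher-fedora-vm-zzftf
$ oc logs --previous virt-launcher-fedora-vm-zzftf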

ErrImagePull and ImagePullBackOff errors can be caused by an incorrect deployment configuration or problems with the images being referenced.

Metrics

OpenShift Container Platform Metrics collects memory, CPU, and network performance information for nodes, components, and containers in the cluster. The specific information collected depends on how the Metrics subsystem is configured. For more information on configuring Metrics, see the OpenShift Container Platform Configuring Clusters Guide.

The oc adm top command uses the Heapster API to fetch data about the current state of pods and nodes in the cluster.

To retrieve metrics for a pod:

$ oc adm top pod <pod_name>

To retrieve metrics for the nodes in the cluster:

$ oc adm top node

The OpenShift Container Platform web console can represent metric information graphically over a time range.

Reference

Types of storage volumes for virtual machines
ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts, and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. This allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

DataVolumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs using this volume type are guaranteed not to start until the volume is ready.

cloudInitNoCloud

Attaches a disk containing the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

registryDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and embedded in a volume when the virtual machine is created. A registryDisk volume is ephemeral, and the volume is discarded when the virtual machine is stopped, restarted, or deleted.

Registry disks are not limited to a single virtual machine and are useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

You must also provide the disk capacity.
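
A minimal sketch of an emptyDisk disk and volume entry, assuming a 2Gi capacity and illustrative names (the names and the 2Gi value are placeholders for this sketch):

devices:
  disks:
  - disk:
      bus: virtio
    name: emptydisk
    volumeName: emptydiskvolume
volumes:
- name: emptydiskvolume
  emptyDisk:
    capacity: 2Gi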

Template: PVC configuration file

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "example-vmdisk-volume"
  labels:
   app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "" # Required. Format: (http||s3)://www.myUrl.com/path/to/data
    cdi.kubevirt.io/storage.import.secretName: "" # Optional. The name of the secret containing credentials for the data source
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Template: VM configuration file

vm.yaml

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt-vm: fedora-vm
  name: fedora-vm
spec:
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: fedora-vm
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: registrydisk
            volumeName: root
          - disk:
              bus: virtio
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: ""
        resources:
          requests:
            memory: 1Gi
      terminationGracePeriodSeconds: 0
      volumes:
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
        name: cloudinitvolume
      - name: root
        persistentVolumeClaim:
          claimName: example-vmdisk-volume
status: {}
Template: VM configuration file (DataVolume)

example-vm-dv.yaml

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: example-vm
  name: example-vm
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1G
      source:
          http:
             url: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: example-vm
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
            volumeName: example-datavolume
        machine:
          type: q35
        resources:
          requests:
            memory: 1G
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: example-dv
        name: example-datavolume
Template: DataVolume import configuration file

example-import-dv.yaml

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: "example-import-dv"
spec:
  source:
      http:
         url: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2" # Or S3
         secretRef: "" # Optional
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "1G"
Template: DataVolume clone configuration file

example-clone-dv.yaml

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: "example-clone-dv"
spec:
  source:
      pvc:
        name: source-pvc
        namespace: example-ns
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "1G"
Template: VMI configuration file for PXE booting

vmi-pxe-boot.yaml

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  creationTimestamp: null
  labels:
    special: vmi-pxe-boot
  name: vmi-pxe-boot
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: registrydisk
        volumeName: registryvolume
        bootOrder: 2
      - disk:
          bus: virtio
        name: cloudinitdisk
        volumeName: cloudinitvolume
      interfaces:
      - bridge: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
    machine:
      type: ""
    resources:
      requests:
        memory: 1024M
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: pxe-net-conf
    name: pxe-net
  terminationGracePeriodSeconds: 0
  volumes:
  - name: registryvolume
    registryDisk:
      image: kubevirt/fedora-cloud-registry-disk-demo
  - cloudInitNoCloud:
      userData: |
        #!/bin/bash
        echo "fedora" | passwd fedora --stdin
    name: cloudinitvolume
status: {}
