Product overview

Introduction to Container-native Virtualization

Container-native Virtualization is an add-on to OpenShift Container Platform that allows virtual machine workloads to run and be managed alongside container workloads. You can create virtual machines from disk images imported using the containerized data importer (CDI) controller, or from scratch within OpenShift Container Platform.

Container-native Virtualization introduces two new objects to OpenShift Container Platform:

  • Virtual Machine: The definition of a virtual machine in OpenShift Container Platform

  • Virtual Machine Instance: A running instance of the virtual machine

With the Container-native Virtualization add-on, virtual machines run in pods and have the same network and storage capabilities as standard pods.
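For illustration, a minimal VirtualMachine definition that boots from an imported disk might look like the following. The apiVersion, field names, and the example-vm-disk claim are assumptions; check the schema against your installed KubeVirt version:

```yaml
apiVersion: kubevirt.io/v1alpha2   # assumed API version for this release
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false                   # set to true to create a VirtualMachineInstance
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            volumeName: rootvolume
            disk:
              bus: virtio
        resources:
          requests:
            memory: 512M
      volumes:
      - name: rootvolume
        persistentVolumeClaim:
          claimName: example-vm-disk   # hypothetical PVC holding the VM disk
```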

Existing virtual machine disks are imported into persistent volumes (PVs), which are made accessible to Container-native Virtualization virtual machines using persistent volume claims (PVCs). In OpenShift Container Platform, the virtual machine object can be modified or replaced as needed, without affecting the persistent data stored on the PV.
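As an illustration, a PVC that requests an import by the CDI controller is typically annotated with the source endpoint. The annotation key and endpoint URL below are assumptions based on the CDI import mechanism; verify them against your installed CDI version:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vm-disk
  annotations:
    # The CDI controller watches for this annotation and imports the
    # image at the endpoint URL into the bound PV (assumed annotation key)
    cdi.kubevirt.io/storage.import.endpoint: "https://example.com/images/example-disk.qcow2"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```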

Container-native Virtualization is currently a Technology Preview feature. For details about Red Hat support for Container-native Virtualization, see the Container-native Virtualization - Technology Preview Support Policy.

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Prerequisites

Container-native Virtualization requires an existing OpenShift Container Platform cluster with the following configuration considerations:

Node configuration

See the OpenShift Container Platform Installing Clusters Guide for planning considerations for different cluster configurations.

Binary builds and Minishift are not supported with Container-native Virtualization.

Admission control webhooks

Container-native Virtualization implements an admission controller as a webhook so that Container-native Virtualization-specific creation requests are forwarded to the webhook for validation. Webhook registration must be enabled during installation of the OpenShift Container Platform cluster.

To register the admission controller webhook, add the following under the [OSEv3:vars] section in your Ansible inventory file during OpenShift Container Platform deployment:

openshift_master_admission_plugin_config={"ValidatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}},"MutatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}}}

Enable APB discovery

Container-native Virtualization uses Ansible Playbook Bundles (APBs) for installation and additional functionality. The APBs must be added to the OpenShift Ansible Broker configuration whitelists to enable discovery of the following:

kubevirt-apb

Installs Container-native Virtualization.

import-vm-apb

Imports a virtual machine or a template from a URL.

To add APBs to the whitelists, include the following under the [OSEv3:vars] section in your Ansible inventory file during OpenShift Container Platform deployment:

ansible_service_broker_registry_whitelist=['.*-apb$']
ansible_service_broker_local_registry_whitelist=['.*-apb$']

CRI-O runtime

CRI-O is the required container runtime for use with Container-native Virtualization.

See the OpenShift Container Platform 3.11 CRI-O Runtime Documentation for more information on using CRI-O.
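In an Ansible-driven deployment, CRI-O is enabled in the inventory file with the openshift_use_crio variable and CRI-O-specific node group names, for example:

```ini
[OSEv3:vars]
openshift_use_crio=true

[nodes]
node1.example.com openshift_node_group_name='node-config-compute-crio'
```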

Storage

Container-native Virtualization supports local volumes, block volumes, and Red Hat OpenShift Container Storage as storage backends.

Local volumes

Local volumes are PVs that represent locally-mounted file systems. See the OpenShift Container Platform Configuring Clusters Guide for more information.

Block volumes

Container-native Virtualization 1.3 supports the use of block volume PVCs. To use block volumes, the OpenShift Container Platform cluster must have the BlockVolume feature gate enabled. See the OpenShift Container Platform Architecture Guide for more information.
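How the feature gate is enabled depends on your cluster configuration. As a sketch, it can be set in the master configuration file through apiServerArguments; the exact keys below are assumptions, so confirm them against the Architecture Guide for your release:

```yaml
# /etc/origin/master/master-config.yaml (sketch; confirm key names for your release)
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - BlockVolume=true
```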

Local volumes and block volumes both have limited support in OpenShift Container Platform 3.11 because they are currently Technology Preview features. This might change in a future release.

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage uses Red Hat Gluster Storage to provide persistent storage and dynamic provisioning. It can run containerized within OpenShift Container Platform (converged mode) or non-containerized on its own nodes (independent mode).

Container-native Virtualization requires Red Hat OpenShift Container Storage version 3.11.1 or later. Earlier versions of Red Hat OpenShift Container Storage do not support CRI-O, the required container runtime for Container-native Virtualization.

Metrics

Metrics are not required, but are a recommended addition to your OpenShift Container Platform cluster because they provide additional information about Container-native Virtualization resources.

See the OpenShift Container Platform Installing Clusters Guide for comprehensive information on deploying metrics in your cluster.

Installing Container-native Virtualization

Creating a cluster admin user

Container-native Virtualization is installed across the existing OpenShift Container Platform cluster. This requires a user with the cluster-admin cluster role. Create a user with this cluster role using the default system:admin user on the master.

Procedure
  1. Log in to the master as the system:admin user:

    $ oc login -u system:admin
  2. Create a new user according to the configured identity provider. The following example uses the htpasswd command for the HTPasswd identity provider to create a cnv-admin user.

    $ htpasswd -c </path/to/users.htpasswd> cnv-admin
  3. Add the cluster-admin cluster role to the new user:

    $ oc adm policy add-cluster-role-to-user cluster-admin cnv-admin
  4. You can now log in as the cnv-admin user to install the Container-native Virtualization components across the cluster:

    $ oc login -u cnv-admin
Enabling Container-native Virtualization repositories

You must enable the rhel-7-server-cnv-1.3-tech-preview-rpms repository on the master to install the Container-native Virtualization packages.

Procedure
  • Enable the repository:

$ subscription-manager repos --enable=rhel-7-server-cnv-1.3-tech-preview-rpms
Installing virtctl client utility

The virtctl client utility is used to manage the state of the virtual machine, forward ports from the virtual machine pod to the node, and open console access to the virtual machine.

Procedure
  1. Install the kubevirt-virtctl package:

    $ yum install kubevirt-virtctl

The virtctl utility is also available for download from the Red Hat Network.
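For example, typical virtctl invocations for the tasks described above look like the following. The example-vm name is hypothetical, and the available subcommands vary between virtctl versions:

```shell
# Start the virtual machine, creating a virtual machine instance
$ virtctl start example-vm

# Open a serial console to the running instance
$ virtctl console example-vm

# Expose a port on the virtual machine as a service (names are hypothetical)
$ virtctl expose virtualmachine example-vm --name example-vm-ssh --port 22

# Stop the virtual machine
$ virtctl stop example-vm
```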

Installing Container-native Virtualization to OpenShift Container Platform

This procedure is for Container-native Virtualization v1.3. Future versions of Container-native Virtualization will be installed using the Operator Framework.

The kubevirt-apb APB installs all Container-native Virtualization components to your OpenShift Container Platform cluster. You must import the APBs into your local registry before they can be used.

This procedure installs the following components:

  • Container-native Virtualization core components (KubeVirt)

  • Containerized data importer (CDI) controller

  • Multus and Open vSwitch (OVS) container network interface plug-ins

  • Updated OpenShift Container Platform web console

Prerequisites
  • OpenShift Container Platform 3.11 cluster

  • User with cluster-admin privileges

  • APB tool

  • virtctl client utility, installed

Procedure
  1. Import the Container-native Virtualization APBs into the local registry.

    1. Import kubevirt-apb:

      $ oc import-image --from registry.access.redhat.com/cnv-tech-preview/kubevirt-apb --confirm kubevirt-apb -n openshift
    2. Import import-vm-apb:

      $ oc import-image --from registry.access.redhat.com/cnv-tech-preview/import-vm-apb --confirm import-vm-apb -n openshift
  2. Refresh the list of bootstrapped APBs in the Automation Broker Catalog.

    $ apb broker bootstrap
  3. Force a relist of the OpenShift Service Catalog.

    $ apb catalog relist
  4. Verify that the relevant service classes are now present.

    1. Search the Catalog for the localregistry-virtualization service class:

      $ apb broker catalog | grep 'localregistry-virtualization'
    2. Search the Catalog for the localregistry-import-vm-apb service class:

      $ apb broker catalog | grep 'localregistry-import-vm-apb'

      If you do not see localregistry-*, the service class might be listed under a different name, depending on your OpenShift Ansible Broker configuration. See the OpenShift Ansible Broker Configuration documentation for more information.

  5. Ensure that you are in the kube-system project. The current project is marked with an asterisk.

    $ oc projects
    1. If necessary, switch projects.

      $ oc project kube-system
  6. Edit your kubevirt-apb.yaml template to match the following example. Do not provide the admin_user and admin_password parameter values.

    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceInstance
    metadata:
      name: kubevirt
      namespace: kube-system
    spec:
      clusterServiceClassExternalName: localregistry-virtualization
      clusterServicePlanExternalName: default
      parameters:
        admin_user:
        admin_password:
        registry_url: registry.access.redhat.com
        registry_namespace: cnv-tech-preview
        docker_tag: v1.3.0
  7. Apply the kubevirt-apb template to install Container-native Virtualization to your cluster. Use the --edit option to edit the file before it is applied, which allows you to add the admin user credentials without storing them in plaintext in the file.

    Unlike other passwords in OpenShift Container Platform, the entered credentials are stored in cleartext. To improve security, create a temporary admin account for this process. The account can be deleted once installation is complete.

    If you do not create a temporary account, note that any user with permission to view the kubevirt-apb.yaml file or the related ServiceInstance object can also view the cleartext credentials, for example by running oc describe serviceinstance.

    $ oc create --edit -f kubevirt-apb.yaml
    ...
    spec:
      clusterServiceClassExternalName: localregistry-virtualization
      clusterServicePlanExternalName: default
      parameters:
        admin_user: <cluster_admin_username>
        admin_password: <admin_user_password>
    ...
  8. Verify the installation.

    1. Watch the APB bundle container until it completes:

      $ oc get pods -w --all-namespaces | grep 'virtualization-prov'
    2. Watch the kubevirt service instance until it is Ready:

      $ oc get serviceinstance -n kube-system -w
    3. Watch the Container-native Virtualization pods (virt, cdi, multus, and ovs-cni) until they are Running:

      $ oc get pods -n kube-system -w
    4. Finally, verify that the Container-native Virtualization API accepts requests:

      $ virtctl version
Installing KubeVirt templates

The kubevirt-templates package includes the virtual machine templates compatible with OpenShift Container Platform.

Prerequisites
  • OpenShift Container Platform 3.11

Procedure
  1. Install the kubevirt-templates package:

    # yum install -y kubevirt-templates
  2. Upload the common templates to your current project’s template library:

    # oc create -f /usr/share/kubevirt-templates/manifests/common-templates.yaml -n openshift
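After the upload, the templates can be listed and instantiated with standard OpenShift template commands. The template and parameter names below are hypothetical; list the templates first to see what is available:

```shell
# List the uploaded virtual machine templates
$ oc get templates -n openshift

# Instantiate a template into the current project (names are hypothetical)
$ oc process -n openshift example-vm-template -p NAME=example-vm | oc create -f -
```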

Reference

OpenShift Container Platform example inventory file

You can use this example to see how to modify your own Ansible inventory file to match your cluster configuration.

In this example, the cluster has a single master that is also an infra node, and there are two separate compute nodes.

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
ansible_ssh_user=root
ansible_service_broker_registry_whitelist=['.*-apb$']
ansible_service_broker_local_registry_whitelist=['.*-apb$']

# Enable admission controller webhooks
openshift_master_admission_plugin_config={"ValidatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}},"MutatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}}}

# CRI-O
openshift_use_crio=true

# Provide your credentials to consume the redhat.io registry
oreg_auth_user=$rhnuser
oreg_auth_password='$rhnpassword'

# Host groups
[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master-infra-crio'
node1.example.com openshift_node_group_name='node-config-compute-crio'
node2.example.com openshift_node_group_name='node-config-compute-crio'