Container-native virtualization provides Layer-2 networking capabilities that allow you to connect virtual machines to multiple networks. You can import virtual machines with existing workloads that depend on access to multiple interfaces. You can also configure a PXE network so that you can boot machines over the network.

To get started, a network administrator configures a NetworkAttachmentDefinition of type cnv-bridge. Then, users can attach Pods, virtual machine instances, and virtual machines to the bridge network. From the container-native virtualization web console, you can create a vNIC that refers to the bridge network.

Container-native virtualization networking glossary

Container-native virtualization provides advanced networking functionality by using custom resources and plug-ins.

The following terms are used throughout container-native virtualization documentation:

Container Network Interface (CNI)

A Cloud Native Computing Foundation project focused on container network connectivity. Container-native virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.

Multus

A "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.

Custom Resource Definition (CRD)

A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.

NetworkAttachmentDefinition

A CRD introduced by the Multus project that allows you to attach Pods, virtual machines, and virtual machine instances to one or more networks.

Preboot eXecution Environment (PXE)

An interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

Connecting a resource to a bridge-based network

As a network administrator, you can configure a NetworkAttachmentDefinition of type cnv-bridge to provide Layer-2 networking to Pods and virtual machines.

Prerequisites
  • Container-native virtualization 2.0 or newer

  • A Linux bridge must be configured and attached to the correct network interface card (NIC) on every node. See the example sketch after this list.

  • If you use VLANs, vlan_filtering must be enabled on the bridge.

  • The NIC must be tagged to all relevant VLANs.

    • For example: bridge vlan add dev bond0 vid 1-4095 master
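
The following sketch shows one way to satisfy these prerequisites with iproute2 commands. The bridge name br0 and the NIC name bond0 are placeholders; substitute your own device names. Note that ip and bridge settings do not persist across reboots, so production nodes typically apply the equivalent configuration through nmcli or machine configuration:

    # Create a bridge with VLAN filtering enabled
    ip link add name br0 type bridge vlan_filtering 1
    # Attach the NIC to the bridge and bring the bridge up
    ip link set dev bond0 master br0
    ip link set dev br0 up
    # Tag the NIC to all relevant VLANs
    bridge vlan add dev bond0 vid 1-4095 master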

Procedure
  1. Create a new file for the NetworkAttachmentDefinition in any local directory. The file must have the following contents, modified to match your configuration:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: a-bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0 (1)
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "plugins": [
          {
            "type": "cnv-bridge", (2)
            "bridge": "br0", (3)
            "ipam": {}
          },
          {
            "type": "tuning" (4)
          }
        ]
      }'
    1 If you add this annotation to your NetworkAttachmentDefinition, your virtual machine instances run only on nodes that have the br0 bridge attached.
    2 The actual name of the Container Network Interface (CNI) plug-in that provides the network for this NetworkAttachmentDefinition. Do not change this field unless you want to use a different CNI.
    3 You must substitute the actual name of the bridge, if it is not br0.
    4 Required. This allows the MAC pool manager to assign a unique MAC address to the connection.

    Apply the file to create the NetworkAttachmentDefinition:

    $ oc create -f <resource_spec.yaml>
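
    Optionally, verify that the object exists. The short name net-attach-def is defined by the upstream Multus CRD, so this assumes your cluster exposes it:

    $ oc get net-attach-def a-bridge-network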
  2. Edit the configuration file of a virtual machine or virtual machine instance that you want to connect to the bridge network:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: example-vm
      annotations:
        k8s.v1.cni.cncf.io/networks: a-bridge-network (1)
    spec:
    ...
    1 Substitute the name of the NetworkAttachmentDefinition that you created.

    In this example, the NetworkAttachmentDefinition and the virtual machine are in the same namespace.

    To specify a different namespace, use the following syntax:

    ...
      annotations:
        k8s.v1.cni.cncf.io/networks: <namespace>/a-bridge-network
    ...
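
    Depending on your KubeVirt version, you can also declare the secondary network explicitly in the virtual machine spec by pairing a bridge interface with a Multus network instead of using the annotation. The following is a minimal sketch with placeholder interface names, not the method that this procedure uses:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
              - name: default
                masquerade: {} # default Pod network
              - name: bridge-net
                bridge: {} # bridge binding for the secondary network
          networks:
          - name: default
            pod: {}
          - name: bridge-net
            multus:
              networkName: a-bridge-network # name of the NetworkAttachmentDefinition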
  3. Apply the configuration file to the resource:

    $ oc create -f <local/path/to/virtual-machine.yaml>
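
    After the virtual machine starts, you can inspect the attached interfaces, for example (vmi is the KubeVirt short name for VirtualMachineInstance):

    $ oc get vmi example-vm -o yaml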

When defining the vNIC in the next section, ensure that the NETWORK value is the bridge network name from the NetworkAttachmentDefinition you created in the previous section.

Creating a NIC for a virtual machine

Create and attach additional NICs to a virtual machine from the web console.

Procedure
  1. In the correct project in the container-native virtualization console, click Workloads → Virtual Machines.

  2. Select a virtual machine.

  3. Click Network Interfaces to display the NICs already attached to the virtual machine.

  4. Click Create NIC to create a new slot in the list.

  5. Fill in the NAME, NETWORK, MAC ADDRESS, and BINDING METHOD for the new NIC.

  6. Click the ✓ button to save and attach the NIC to the virtual machine.

Networking fields

Create NIC

Create a new NIC for the virtual machine.

NIC NAME

Name for the NIC.

MAC ADDRESS

MAC address for the network interface. If a MAC address is not specified, an ephemeral address is generated for the session.

NETWORK CONFIGURATION

List of available NetworkAttachmentDefinition objects.

BINDING METHOD

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

PXE NIC

List of PXE-capable networks. Only visible if PXE has been selected as the Provision Source.

Install the optional QEMU guest agent on the virtual machine so that the host can display relevant information about the additional networks.
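
For example, on a Red Hat Enterprise Linux or Fedora guest, you might install and enable the agent as follows; the package and service names are assumptions that can vary by distribution:

    $ sudo yum install -y qemu-guest-agent
    $ sudo systemctl enable --now qemu-guest-agent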