Container-native virtualization provides layer-2 networking capabilities that allow you to connect virtual machines to multiple networks. You can import virtual machines with existing workloads that depend on access to multiple interfaces. You can also configure a PXE network so that you can boot machines over the network.

To get started, a network administrator configures a bridge NetworkAttachmentDefinition for a namespace in the web console or CLI. Users can then create a NIC to attach Pods and virtual machines in that namespace to the bridge network.

Container-native virtualization networking glossary

Container-native virtualization provides advanced networking functionality by using custom resources and plug-ins.

The following terms are used throughout container-native virtualization documentation:

Container Network Interface (CNI)

a Cloud Native Computing Foundation project focused on container network connectivity. Container-native virtualization uses CNI plug-ins to build on the basic Kubernetes networking functionality.

Multus

a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.

Custom Resource Definition (CRD)

a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.

NetworkAttachmentDefinition

a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.

Preboot eXecution Environment (PXE)

an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
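
For example, once a NetworkAttachmentDefinition exists in a namespace, Multus attaches a Pod to that network when the Pod references it in an annotation. The following sketch assumes a NetworkAttachmentDefinition named a-bridge-network, matching the CLI example later in this section; the Pod name and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      annotations:
        # Multus annotation listing the NetworkAttachmentDefinitions to attach
        k8s.v1.cni.cncf.io/networks: a-bridge-network
    spec:
      containers:
        - name: example
          image: registry.access.redhat.com/ubi8/ubi-minimal
          command: ["sleep", "infinity"]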

Creating a NetworkAttachmentDefinition

Prerequisites

  • A Linux bridge must be configured and attached on every node. See the node networking section for more information.
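
If the nmstate-based node networking operator is available in your cluster, one possible way to create such a bridge is a NodeNetworkConfigurationPolicy similar to the following sketch. The eth1 uplink interface is an assumption; follow the node networking section for the supported procedure:

    apiVersion: nmstate.io/v1alpha1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br0-policy
    spec:
      desiredState:
        interfaces:
          - name: br0              # bridge name referenced by the NetworkAttachmentDefinition
            type: linux-bridge
            state: up
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1       # assumption: the node NIC attached to the bridge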

Creating a Linux bridge NetworkAttachmentDefinition in the web console

The NetworkAttachmentDefinition is a custom resource that exposes layer-2 devices to a specific namespace in your container-native virtualization cluster.

Network administrators can create NetworkAttachmentDefinitions to provide existing layer-2 networking to pods and virtual machines.

Procedure
  1. In the web console, click Networking → Network Attachment Definitions.

  2. Click Create Network Attachment Definition.

  3. Enter a unique Name and optional Description.

  4. Click the Network Type list and select CNV Linux bridge.

  5. Enter the name of the bridge in the Bridge Name field.

  6. (Optional) If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.

  7. Click Create.
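
You can optionally confirm from the CLI that the object was created, replacing <namespace> with the project you selected:

    $ oc get network-attachment-definitions -n <namespace>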

Creating a Linux bridge NetworkAttachmentDefinition in the CLI

As a network administrator, you can configure a NetworkAttachmentDefinition of type cnv-bridge to provide layer-2 networking to pods and virtual machines.

The NetworkAttachmentDefinition must be in the same namespace as the Pod or virtual machine.

Procedure
  1. Create a new file for the NetworkAttachmentDefinition in any local directory. The file must have the following contents, modified to match your configuration:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: a-bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0 (1)
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "a-bridge-network", (2)
        "plugins": [
          {
            "type": "cnv-bridge", (3)
            "bridge": "br0" (4)
          },
          {
            "type": "cnv-tuning" (5)
          }
        ]
      }'
    1 If you add this annotation to your NetworkAttachmentDefinition, virtual machine instances that use this network run only on nodes that have the br0 bridge connected.
    2 Required. A name for the configuration. It is recommended to match the configuration name to the name value of the NetworkAttachmentDefinition.
    3 The actual name of the Container Network Interface (CNI) plug-in that provides the network for this NetworkAttachmentDefinition. Do not change this field unless you want to use a different CNI.
    4 Substitute the actual name of the bridge, if it is not br0.
    5 Required. This allows the MAC pool manager to assign a unique MAC address to the connection.

    Create the NetworkAttachmentDefinition:

    $ oc create -f <resource_spec.yaml>
  2. Edit the configuration file of a virtual machine or virtual machine instance that you want to connect to the bridge network, for example:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      name: example-vm
    spec:
      domain:
        devices:
          interfaces:
            - masquerade: {}
              name: default
            - bridge: {}
              name: bridge-net (1)
      ...
    
      networks:
        - name: default
          pod: {}
        - name: bridge-net (1)
          multus:
            networkName: a-bridge-network (2)
      ...
    1 The name value for the bridge interface and network must be the same.
    2 You must substitute the actual name value from the NetworkAttachmentDefinition.

    The virtual machine instance will be connected to both the default Pod network and bridge-net, which is defined by a NetworkAttachmentDefinition named a-bridge-network.

  3. Apply the configuration file to create the virtual machine instance:

    $ oc create -f <local/path/to/example-vm.yaml>
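
After the instance starts, you can optionally confirm that both interfaces are attached by inspecting the instance status. The interfaces field is populated only while the instance is running, and guest IP reporting may require the QEMU guest agent:

    $ oc get vmi example-vm -o jsonpath='{.status.interfaces}'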

When defining the NIC in the next section, ensure that the Network value is the bridge network name from the NetworkAttachmentDefinition that you created in the previous section.

Creating a NIC for a virtual machine

Create and attach additional NICs to a virtual machine from the web console.

Procedure
  1. In the correct project in the container-native virtualization console, click Workloads → Virtual Machines.

  2. Select a virtual machine.

  3. Click Network Interfaces to display the NICs already attached to the virtual machine.

  4. Click Create Network Interface to create a new slot in the list.

  5. Fill in the Name, Model, Network, Type, and MAC Address for the new NIC.

  6. Click the ✓ button to save and attach the NIC to the virtual machine.
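
You can also list the interfaces defined on a virtual machine from the CLI; example-vm is an illustrative name:

    $ oc get vm example-vm -o jsonpath='{.spec.template.spec.domain.devices.interfaces}'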

Networking fields

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtio.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method; masquerade is not supported for non-default networks. For secondary networks, use the bridge binding method.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Install the optional QEMU guest agent on the virtual machine so that the host can display relevant information about the additional networks.
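
For example, on a Red Hat Enterprise Linux or CentOS guest, the agent can typically be installed and started with the following commands; package and service names may differ on other guest operating systems:

    $ yum install -y qemu-guest-agent
    $ systemctl enable --now qemu-guest-agent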