
Container-native virtualization provides layer-2 networking capabilities that allow you to connect virtual machines to multiple networks. You can import virtual machines with existing workloads that depend on access to multiple interfaces. You can also configure a PXE network so that you can boot machines over the network.

To get started, a network administrator configures a bridge NetworkAttachmentDefinition for a namespace in the web console or CLI. Users can then create a NIC to attach Pods and virtual machines in that namespace to the bridge network.

Container-native virtualization networking glossary

Container-native virtualization provides advanced networking functionality by using custom resources and plug-ins.

The following terms are used throughout container-native virtualization documentation:

Container Network Interface (CNI)

A Cloud Native Computing Foundation project focused on container network connectivity. Container-native virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.

Multus

A "meta" CNI plug-in that allows multiple CNI plug-ins to coexist so that a Pod or virtual machine can use the interfaces it needs.

Custom Resource Definition (CRD)

A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.

NetworkAttachmentDefinition

A CRD introduced by the Multus project that allows you to attach Pods, virtual machines, and virtual machine instances to one or more networks.

Preboot eXecution Environment (PXE)

An interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

Creating a NetworkAttachmentDefinition

Creating a Linux bridge NetworkAttachmentDefinition in the web console

The NetworkAttachmentDefinition is a custom resource that exposes layer-2 devices to a specific namespace in your container-native virtualization cluster.

Network administrators can create NetworkAttachmentDefinitions to provide existing layer-2 networking to Pods and virtual machines.

Prerequisites
  • Container-native virtualization 2.2 or later installed on your cluster.

  • A Linux bridge must be configured and attached to the correct Network Interface Card (NIC) on every node.

  • If you use VLANs, vlan_filtering must be enabled on the bridge.

  • The NIC must be tagged to all relevant VLANs.

    • For example: bridge vlan add dev bond0 vid 1-4095 master (see the node setup sketch after this list)
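
For reference, the full node setup might look like the following sequence of commands. This is a sketch only: it assumes the bridge is named br0 and is attached to the bond0 NIC, so adjust the device names and VLAN range to match your environment.

    # Create the bridge with VLAN filtering enabled and attach the NIC
    $ ip link add name br0 type bridge vlan_filtering 1
    $ ip link set dev bond0 master br0
    $ ip link set dev br0 up
    # Tag the NIC to all relevant VLANs
    $ bridge vlan add dev bond0 vid 1-4095 master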

Procedure
  1. In the web console, click Networking → Network Attachment Definitions.

  2. Click Create Network Attachment Definition.

  3. Enter a unique Name and optional Description.

  4. Click the Network Type list and select CNV Linux bridge.

  5. Enter the name of the bridge in the Bridge Name field.

  6. (Optional) If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.

  7. Click Create.
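
To verify that the NetworkAttachmentDefinition was created, you can list the resources in the namespace. A quick check with the oc CLI, substituting your own namespace:

    $ oc get network-attachment-definitions -n <namespace>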

Creating a Linux bridge NetworkAttachmentDefinition in the CLI

As a network administrator, you can configure a NetworkAttachmentDefinition of type cnv-bridge to provide layer-2 networking to Pods and virtual machines.

The NetworkAttachmentDefinition must be in the same namespace as the Pod or virtual machine.

Prerequisites
  • Container-native virtualization 2.0 or later installed on your cluster.

  • A Linux bridge must be configured and attached to the correct Network Interface Card (NIC) on every node.

  • If you use VLANs, vlan_filtering must be enabled on the bridge.

  • The NIC must be tagged to all relevant VLANs.

    • For example: bridge vlan add dev bond0 vid 1-4095 master (see the verification sketch after this list)
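
If the bridge already exists and you are unsure whether VLAN filtering is enabled, you can check and enable it with commands like the following. This is a sketch that assumes the bridge is named br0:

    # A value of 1 means VLAN filtering is enabled
    $ cat /sys/class/net/br0/bridge/vlan_filtering
    # Enable VLAN filtering on an existing bridge
    $ ip link set dev br0 type bridge vlan_filtering 1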

Procedure
  1. Create a new file for the NetworkAttachmentDefinition in any local directory. The file must have the following contents, modified to match your configuration:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: a-bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0 (1)
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "cnv-bridge-conf", (2)
        "plugins": [
          {
            "type": "cnv-bridge", (3)
            "bridge": "br0" (4)
          },
          {
            "type": "cnv-tuning" (5)
          }
        ]
      }'
    1 If you add this annotation to your NetworkAttachmentDefinition, your virtual machine instances run only on nodes that have the br0 bridge connected.
    2 Required. A name for the configuration.
    3 The actual name of the Container Network Interface (CNI) plug-in that provides the network for this NetworkAttachmentDefinition. Do not change this field unless you want to use a different CNI.
    4 Substitute the actual name of your bridge if it is not br0.
    5 Required. This allows the MAC pool manager to assign a unique MAC address to the connection.
  2. Create the NetworkAttachmentDefinition:

    $ oc create -f <resource_spec.yaml>
  3. Edit the configuration file of a virtual machine or virtual machine instance that you want to connect to the bridge network (an alternative, spec-based form is sketched after this procedure):

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: example-vm
      annotations:
        k8s.v1.cni.cncf.io/networks: a-bridge-network (1)
    spec:
    ...
    1 Substitute the name value from the NetworkAttachmentDefinition that you created.
  4. Apply the configuration file to the virtual machine:

    $ oc create -f <local/path/to/virtual-machine.yaml>
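
The annotation shown above is one way to reference the network. Depending on your KubeVirt version, you can instead reference the NetworkAttachmentDefinition explicitly in the virtual machine specification by pairing a bridge interface with a Multus network. The following is a minimal sketch, assuming the NetworkAttachmentDefinition is named a-bridge-network and using bridge-net as an illustrative interface name:

    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
              - name: bridge-net          # must match a network name below
                bridge: {}                # bridge binding method
          networks:
          - name: bridge-net
            multus:
              networkName: a-bridge-network # name of the NetworkAttachmentDefinition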

When you define the NIC in the next section, ensure that the Network value is the bridge network name of the NetworkAttachmentDefinition that you created in the previous section.

Creating a NIC for a virtual machine

Create and attach additional NICs to a virtual machine from the web console.

Procedure
  1. In the correct project in the container-native virtualization console, click Workloads → Virtual Machines.

  2. Select a virtual machine.

  3. Click Network Interfaces to display the NICs already attached to the virtual machine.

  4. Click Create Network Interface to create a new slot in the list.

  5. Fill in the Name, Model, Network, Type, and MAC Address for the new NIC.

  6. Click the ✓ button to save and attach the NIC to the virtual machine.
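
After the virtual machine is running, you can verify that the new interface is attached by inspecting the virtual machine instance. A minimal check, assuming a virtual machine instance named example-vm:

    $ oc get vmi example-vm -o yaml

The status.interfaces list shows the attached interfaces. IP address information for secondary interfaces is typically reported only when the QEMU guest agent is installed in the guest.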

Networking fields

Name

Name for the Network Interface Card.

Model

Driver or model of the Network Interface Card.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Install the optional QEMU guest agent on the virtual machine so that the host can display relevant information about the additional networks.
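
For example, on a Red Hat Enterprise Linux guest you can install and start the agent with commands like the following. Package names and service management vary by guest operating system, so treat this as a sketch:

    $ sudo yum install -y qemu-guest-agent
    $ sudo systemctl enable --now qemu-guest-agent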