Installing networking Operators

Configuring a Linux bridge network

After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).

Creating a Linux bridge NNCP

You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.

Prerequisites
  • You have installed the Kubernetes NMState Operator.

Procedure
  • Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy (1)
    spec:
      desiredState:
        interfaces:
          - name: br1 (2)
            description: Linux bridge with eth1 as a port (3)
            type: linux-bridge (4)
            state: up (5)
            ipv4:
              enabled: false (6)
            bridge:
              options:
                stp:
                  enabled: false (7)
              port:
                - name: eth1 (8)
    1 Name of the policy.
    2 Name of the interface.
    3 Optional: Human-readable description of the interface.
    4 The type of interface. This example creates a bridge.
    5 The requested state for the interface after creation.
    6 Disables IPv4 in this example.
    7 Disables STP in this example.
    8 The node NIC to which the bridge is attached.
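
    After you create the manifest, you can apply it and confirm that the policy is applied to the nodes. The following commands are a minimal sketch; the file name br1-eth1-policy.yaml is an assumption:

    $ oc apply -f br1-eth1-policy.yaml
    $ oc get nncp br1-eth1-policy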

Creating a Linux bridge NAD by using the web console

You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the Red Hat OpenShift Service on AWS web console.

A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.

Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.

Procedure
  1. In the web console, click Networking > NetworkAttachmentDefinitions.

  2. Click Create Network Attachment Definition.

    The network attachment definition must be in the same namespace as the pod or virtual machine.

  3. Enter a unique Name and optional Description.

  4. Select CNV Linux bridge from the Network Type list.

  5. Enter the name of the bridge in the Bridge Name field.

  6. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.

  7. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.

  8. Click Create.
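
If you prefer to work from the CLI, a roughly equivalent NetworkAttachmentDefinition manifest is sketched below. This is an illustration, not console output: the object name, namespace, and the bridge name br1 are placeholders, and the fields assume the cnv-bridge CNI plugin that backs the CNV Linux bridge network type.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
      namespace: <namespace>
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "bridge-network",
        "type": "cnv-bridge",
        "bridge": "br1",
        "macspoofchk": true
      }'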

Configuring a network for live migration

After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.

Configuring a dedicated secondary network for live migration

To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).

Prerequisites
  • You installed the OpenShift CLI (oc).

  • You logged in to the cluster as a user with the cluster-admin role.

  • Each node has at least two Network Interface Cards (NICs).

  • The NICs for live migration are connected to the same VLAN.

Procedure
  1. Create a NetworkAttachmentDefinition manifest according to the following example:

    Example configuration file
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network (1)
      namespace: openshift-cnv
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1", (2)
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts", (3)
          "range": "10.200.5.0/24" (4)
        }
      }'
    1 Specify the name of the NetworkAttachmentDefinition object.
    2 Specify the name of the NIC to be used for live migration.
    3 Specify the name of the CNI plugin that provides the network for the NAD.
    4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
  2. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

    Example HyperConverged manifest
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network> (1)
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...
    1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
  4. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
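
    You can confirm that the virt-handler pods restarted by listing them, for example with the following command; this assumes the standard kubevirt.io=virt-handler label:

    $ oc get pods -n openshift-cnv -l kubevirt.io=virt-handler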

Verification
  • When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.

    $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
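
    In this example, the reported address should fall within the 10.200.5.0/24 range defined in the NetworkAttachmentDefinition, rather than an address on the default pod network.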

Selecting a dedicated network by using the web console

You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.

Prerequisites
  • You configured a Multus network for live migration.

  • You created a network attachment definition for the network.

Procedure
  1. Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console.

  2. Click the Settings tab and then click Live migration.

  3. Select the network from the Live migration network list.
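
You can optionally confirm that the selection was applied by checking the spec.liveMigrationConfig.network field of the HyperConverged CR, for example:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.liveMigrationConfig.network}'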

Enabling load balancer service creation by using the web console

You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.

Prerequisites
  • You have configured a load balancer for the cluster.

  • You are logged in as a user with the cluster-admin role.

  • You created a network attachment definition for the network.

Procedure
  1. Navigate to Virtualization > Overview.

  2. On the Settings tab, click Cluster.

  3. Expand General settings and SSH configuration.

  4. Set SSH over LoadBalancer service to on.
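
With this setting enabled, the web console can create LoadBalancer services that expose SSH access to VMs. As an illustration only, you can also create a comparable service manually with virtctl; this sketch assumes a VM named <vm_name> in the current namespace:

    $ virtctl expose vm <vm_name> --name=<vm_name>-ssh --type=LoadBalancer --port=22 --target-port=22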