
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.

Configuring masquerade mode from the command line

You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.

Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.

Prerequisites
  • The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
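
    If the guest image relies on cloud-init for network configuration, a minimal networkData snippet such as the following sketch enables DHCP for IPv4. The interface name eth0 is an assumption and depends on the guest image:

    # The interface name eth0 is an assumption; adjust it to match the guest image.
    volumes:
    - cloudInitNoCloud:
        networkData: |
          version: 2
          ethernets:
            eth0:
              dhcp4: true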

Procedure
  1. Edit the interfaces spec of your virtual machine configuration file:

    kind: VirtualMachine
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} (1)
              ports: (2)
                - port: 80
      networks:
      - name: default
        pod: {}
    1 Connect using masquerade mode.
    2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.

    Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.

  2. Create the virtual machine:

    $ oc create -f <vm-name>.yaml
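
    Optionally, after the virtual machine starts, you can confirm that the interface reports the pod-assigned address (a sketch, assuming the virtual machine is named vm-example):

    $ oc get vmi vm-example -o jsonpath="{.status.interfaces[*].ipAddresses}"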

Configuring masquerade mode with dual-stack (IPv4 and IPv6)

You can configure a new virtual machine to use both IPv6 and IPv4 on the default pod network by using cloud-init.

The IPv6 network address must be statically set to fd10:0:2::2/120 with a default gateway of fd10:0:2::1 in the virtual machine configuration. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally.

When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.

Prerequisites
  • The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack.
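
    One way to check this (a sketch, not required by the procedure) is to confirm that the cluster network configuration lists both an IPv4 and an IPv6 CIDR:

    $ oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork}'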

Procedure
  1. In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm-ipv6
    ...
              interfaces:
                - name: default
                  masquerade: {} (1)
                  ports:
                    - port: 80 (2)
          networks:
          - name: default
            pod: {}
          volumes:
          - cloudInitNoCloud:
              networkData: |
                version: 2
                ethernets:
                  eth0:
                    dhcp4: true
                    addresses: [ fd10:0:2::2/120 ] (3)
                    gateway6: fd10:0:2::1 (4)
    1 Connect using masquerade mode.
    2 Allows incoming traffic on port 80 to the virtual machine.
    3 You must use the IPv6 address fd10:0:2::2/120.
    4 You must use the gateway fd10:0:2::1.
  2. Create the virtual machine in the namespace:

    $ oc create -f example-vm-ipv6.yaml
Verification
  • To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:

$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
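
You can also check from inside the guest (a sketch, assuming a Linux guest with the iproute2 tools and that the primary interface is eth0). Connect to the serial console and list the IPv6 addresses:

$ virtctl console example-vm-ipv6
# then, inside the guest:
$ ip -6 addr show dev eth0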

Creating a service from a virtual machine

Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.

If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.

The spec.ipFamilyPolicy field can be set to one of the following values:

  • SingleStack: The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.

  • PreferDualStack: The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.

  • RequireDualStack: This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.

You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:

  • [IPv4]

  • [IPv6]

  • [IPv4, IPv6]

  • [IPv6, IPv4]
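
For example, a minimal dual-stack Service manifest (a sketch; the name vmservice-dualstack is illustrative, and the selector reuses the special: key label that is applied later in this procedure) might combine these fields as follows:

  apiVersion: v1
  kind: Service
  metadata:
    name: vmservice-dualstack
    namespace: example-namespace
  spec:
    ipFamilyPolicy: PreferDualStack
    ipFamilies:
      - IPv4
      - IPv6
    selector:
      special: key
    ports:
      - port: 27017
        protocol: TCP
        targetPort: 22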

The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.

This procedure presents an example of how to create, connect to, and expose a Service object of type: ClusterIP as a virtual machine-backed service.

ClusterIP is the default service type if the service type is not specified.

Procedure
  1. Edit the virtual machine YAML as follows:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key (1)
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
              - masquerade: {}
                name: default
            resources:
              requests:
                memory: 1024M
          networks:
            - name: default
              pod: {}
          volumes:
            - name: containerdisk
              containerDisk:
                image: kubevirt/fedora-cloud-container-disk-demo
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #!/bin/bash
                  echo "fedora" | passwd fedora --stdin
    1 Add the label special: key in the spec.template.metadata.labels section.

    Labels on a virtual machine are passed through to the pod. The labels on the VirtualMachine configuration, for example special: key, must match the labels in the Service YAML selector attribute, which you create later in this procedure.

  2. Save the virtual machine YAML to apply your changes.
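
    If you saved the configuration to a manifest file, one way to apply the changes is with oc apply (a sketch, using a placeholder file name):

    $ oc apply -f <vm_name>.yaml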

  3. Edit the Service YAML to configure the settings necessary to create and expose the Service object:

    apiVersion: v1
    kind: Service
    metadata:
      name: vmservice (1)
      namespace: example-namespace (2)
    spec:
      ports:
      - port: 27017
        protocol: TCP
        targetPort: 22 (3)
      selector:
        special: key (4)
      type: ClusterIP (5)
    1 Specify the name of the service you are creating and exposing.
    2 In the metadata section of the Service YAML, specify the same namespace that you specify in the virtual machine YAML.
    3 Add targetPort: 22, exposing the service on SSH port 22.
    4 In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
    5 In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
  4. Save the Service YAML to store the service configuration.

  5. Create the ClusterIP service:

    $ oc create -f <service_name>.yaml
  6. Start the virtual machine. If the virtual machine is already running, restart it.
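
    For example, you can use the virtctl client (a sketch, assuming the vm-ephemeral virtual machine from this procedure):

    $ virtctl -n example-namespace start vm-ephemeral

    To restart a virtual machine that is already running:

    $ virtctl -n example-namespace restart vm-ephemeral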

  7. Query the Service object to verify it is available and is configured with type ClusterIP.

    Verification
    • Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.

      $ oc get service -n example-namespace
      Example output
      NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
      vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m
      • As shown in the output, vmservice is available.

      • The TYPE displays as ClusterIP, as you specified in the Service YAML.
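
      You can also confirm that the service selects the pod that backs the virtual machine (a sketch; the listed endpoint address is the IP of the virt-launcher pod):

      $ oc get endpoints vmservice -n example-namespace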

  8. Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.

    1. Edit the virtual machine YAML as follows:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: vm-connect
        namespace: example-namespace
      spec:
        running: false
        template:
          spec:
            domain:
              devices:
                disks:
                  - name: containerdisk
                    disk:
                      bus: virtio
                  - name: cloudinitdisk
                    disk:
                      bus: virtio
                interfaces:
                - masquerade: {}
                  name: default
              resources:
                requests:
                  memory: 1024M
            networks:
              - name: default
                pod: {}
            volumes:
              - name: containerdisk
                containerDisk:
                  image: kubevirt/fedora-cloud-container-disk-demo
              - name: cloudinitdisk
                cloudInitNoCloud:
                  userData: |
                    #!/bin/bash
                    echo "fedora" | passwd fedora --stdin
    2. Run the oc create command to create a second virtual machine, where <file.yaml> is the name of the virtual machine YAML file:

      $ oc create -f <file.yaml>
    3. Start the virtual machine.

    4. Connect to the virtual machine by running the following virtctl command:

      $ virtctl -n example-namespace console <new-vm-name>

      For service type LoadBalancer, use the vinagre client to connect to your virtual machine by using the public IP and port. External ports are dynamically allocated when you use service type LoadBalancer.

    5. Run the ssh command to authenticate the connection, where 172.30.3.149 is the cluster IP of the service, 27017 is the exposed service port, and fedora is the user name of the virtual machine:

      $ ssh fedora@172.30.3.149 -p 27017
      Verification
      • You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.