As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported:
Bridge
Host device
VLAN
IPVLAN
MACVLAN
TAP
OVN-Kubernetes
You can manage the life cycle of an additional network by using one of two approaches. The approaches are mutually exclusive, and you can use only one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure.
For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment.
Modify the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address.
Applying a YAML manifest: You can manage the additional network directly by creating a NetworkAttachmentDefinition object. This approach allows for the chaining of CNI plugins.
When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface:

$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group.
Do not store any sensitive information or a secret in the `spec.config` field because it can be accessed by anyone with permission to view the `NetworkAttachmentDefinition` object.
The configuration for the API is described in the following table:
Field | Type | Description
---|---|---
`metadata.name` | `string` | The name for the additional network.
`metadata.namespace` | `string` | The namespace that the object is associated with.
`spec.config` | `string` | The CNI plugin configuration in JSON format.
The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration.
The following YAML describes the configuration parameters for managing an additional network with the CNO:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
# ...
additionalNetworks: (1)
- name: <name> (2)
namespace: <namespace> (3)
rawCNIConfig: |- (4)
{
...
}
type: Raw
1 | An array of one or more additional network configurations. |
2 | The name for the additional network attachment that you are creating. The name must be unique within the specified `namespace`. |
3 | The namespace to create the network attachment in. If you do not specify a value, then the `default` namespace is used. |
4 | A CNI plugin configuration in JSON format. |
The configuration for an additional network is specified in a YAML configuration file, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: <name> (1)
spec:
config: |- (2)
{
...
}
1 | The name for the additional network attachment that you are creating. |
2 | A CNI plugin configuration in JSON format. |
The specific configuration fields for additional networks are described in the following sections.
The following object describes the configuration parameters for the bridge CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `bridge`.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
`bridge` | `string` | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is `cni0`.
`ipMasq` | `boolean` | Optional: Set to `true` to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is `false`.
`isGateway` | `boolean` | Optional: Set to `true` to assign an IP address to the bridge. The default value is `false`.
`isDefaultGateway` | `boolean` | Optional: Set to `true` to configure the bridge as the default gateway for the virtual network. The default value is `false`. If `isDefaultGateway` is set to `true`, then `isGateway` is also set to `true` automatically.
`forceAddress` | `boolean` | Optional: Set to `true` to allow assignment of a previously assigned IP address to the virtual bridge. When set to `false`, if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is `false`.
`hairpinMode` | `boolean` | Optional: Set to `true` to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is `false`.
`promiscMode` | `boolean` | Optional: Set to `true` to enable promiscuous mode on the bridge. The default value is `false`.
`vlan` | `string` | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.
`preserveDefaultVlan` | `string` | Optional: Indicates whether the default VLAN must be preserved on the `veth` end connected to the bridge. Defaults to `true`.
`vlanTrunk` | `list` | Optional: Assign a VLAN trunk tag. The default value is `none`.
`mtu` | `string` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`enabledad` | `boolean` | Optional: Enables duplicate address detection for the container side `veth`. The default value is `false`.
`macspoofchk` | `boolean` | Optional: Enables MAC spoof check, limiting the traffic originating from the container to the MAC address of the interface. The default value is `false`.
The VLAN parameter configures the VLAN tag on the host end of the `veth` and also enables the `vlan_filtering` feature on the bridge.

To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

$ bridge vlan add vid VLAN_ID dev DEV
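For reference, a minimal bridge attachment configuration might look like the following sketch. The `bridge-net` name, the VLAN tag, and the choice of DHCP IPAM are illustrative values for this example, not required settings:

{
  "cniVersion": "0.3.1",
  "name": "bridge-net",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
  }
}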
Specify your network device by setting only one of the following parameters: `device`, `hwaddr`, `kernelpath`, or `pciBusID`.
The following object describes the configuration parameters for the host-device CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `host-device`.
`device` | `string` | Optional: The name of the device, such as `eth0`.
`hwaddr` | `string` | Optional: The device hardware MAC address.
`kernelpath` | `string` | Optional: The Linux kernel device path, such as `/sys/devices/pci0000:00/0000:00:1f.6`.
`pciBusID` | `string` | Optional: The PCI address of the network device, such as `0000:00:1f.6`.
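As an illustration, the following sketch attaches the host device `eth1` and assigns it an address over DHCP; the `hostdev-net` name and the device are assumptions for this example, and the `ipam` object is optional:

{
  "cniVersion": "0.3.1",
  "name": "hostdev-net",
  "type": "host-device",
  "device": "eth1",
  "ipam": {
    "type": "dhcp"
  }
}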
The following object describes the configuration parameters for the VLAN CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `vlan`.
`master` | `string` | The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default route is used.
`vlanId` | `integer` | Set the ID of the VLAN.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
`mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`dns` | `object` | Optional: DNS information to return, for example, a priority-ordered list of DNS nameservers.
`linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface.
The following example configures an additional network named `vlan-net`:
{
"name": "vlan-net",
"cniVersion": "0.3.1",
"type": "vlan",
"master": "eth0",
"mtu": 1500,
"vlanId": 5,
"linkInContainer": false,
"ipam": {
"type": "host-local",
"subnet": "10.1.1.0/24"
},
"dns": {
"nameservers": [ "10.1.1.1", "8.8.8.8" ]
}
}
The following object describes the configuration parameters for the IPVLAN CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `ipvlan`.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained.
`mode` | `string` | Optional: The operating mode for the virtual network. The value must be `l2`, `l3`, or `l3s`. The default value is `l3`.
`master` | `string` | Optional: The Ethernet interface to associate with the network attachment. If a `master` is not specified, the interface for the default route is used.
`mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface.
The following example configures an additional network named `ipvlan-net`:
{
"cniVersion": "0.3.1",
"name": "ipvlan-net",
"type": "ipvlan",
"master": "eth1",
"linkInContainer": false,
"mode": "l3",
"ipam": {
"type": "static",
"addresses": [
{
"address": "192.168.10.10/24"
}
]
}
}
The following object describes the configuration parameters for the macvlan CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `macvlan`.
`ipam` | `object` | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
`mode` | `string` | Optional: Configures traffic visibility on the virtual network. Must be either `bridge`, `passthru`, `private`, or `vepa`. If a value is not provided, the default value is `bridge`.
`master` | `string` | Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used.
`mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`linkInContainer` | `boolean` | Optional: Specifies whether the `master` interface is in the container network namespace or the main network namespace. Set the value to `true` to request the use of a container namespace `master` interface.
If you specify the `master` key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts.
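For example, the following sketch creates a macvlan attachment in the default `bridge` mode on top of `eth1`; the `macvlan-net` name, the master interface, and the DHCP IPAM are illustrative choices:

{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth1",
  "mode": "bridge",
  "ipam": {
    "type": "dhcp"
  }
}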
The following object describes the configuration parameters for the TAP CNI plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The `0.3.1` value is required.
`name` | `string` | The value for the `name` parameter you provided previously for the CNO configuration.
`type` | `string` | The name of the CNI plugin to configure: `tap`.
`mac` | `string` | Optional: Request the specified MAC address for the interface.
`mtu` | `integer` | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
`selinuxcontext` | `string` | Optional: The SELinux context to associate with the tap device. The value `system_u:system_r:container_t:s0` is required for OpenShift Container Platform.
`multiQueue` | `boolean` | Optional: Set to `true` to enable multi-queue.
`owner` | `integer` | Optional: The user owning the tap device.
`group` | `integer` | Optional: The group owning the tap device.
`bridge` | `string` | Optional: Set the tap device as a port of an already existing bridge.
The following example configures an additional network named `mynet`:
{
"name": "mynet",
"cniVersion": "0.3.1",
"type": "tap",
"mac": "00:11:22:33:44:55",
"mtu": 1500,
"selinuxcontext": "system_u:system_r:container_t:s0",
"multiQueue": true,
"owner": 0,
"group": 0
"bridge": "br1"
}
To create the tap device with the `container_t` SELinux context, enable the `container_use_devices` boolean on the host by using the Machine Config Operator (MCO).
You have installed the OpenShift CLI (`oc`).
Create a new YAML file, such as `setsebool-container-use-devices.yaml`, with the following details:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 99-worker-setsebool
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- enabled: true
name: setsebool.service
contents: |
[Unit]
Description=Set SELinux boolean for the TAP CNI plugin
Before=kubelet.service
[Service]
Type=oneshot
ExecStart=/usr/sbin/setsebool container_use_devices=on
RemainAfterExit=true
[Install]
WantedBy=multi-user.target graphical.target
Create the new MachineConfig
object by running the following command:
$ oc apply -f setsebool-container-use-devices.yaml
Applying any changes to the `MachineConfig` object causes affected nodes to reboot before the change takes effect.
Verify the change is applied by running the following command:
$ oc get machineconfigpools
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h
worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d
All nodes should be in the updated and ready state.
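Optionally, you can confirm the boolean directly on a node. The following sketch uses a debug pod; `<node_name>` is a placeholder for one of your worker nodes:

$ oc debug node/<node_name> -- chroot /host getsebool container_use_devices

The command should report container_use_devices --> on.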
For more information about enabling an SELinux boolean on a node, see Setting SELinux booleans.
The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition
custom resource (CR).
Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated `NetworkAttachmentDefinition` CR.
You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies.
A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network.
A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.
Network names must be unique. For example, creating multiple `NetworkAttachmentDefinition` CRs with different configurations that reference the same network is unsupported.
You can use an OVN-Kubernetes additional network with the following supported platforms:
Bare metal
IBM Power®
IBM Z®
IBM® LinuxONE
VMware vSphere
Red Hat OpenStack Platform (RHOSP)
The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin:
Field | Type | Description
---|---|---
`cniVersion` | `string` | The CNI specification version. The required value is `0.3.1`.
`name` | `string` | The name of the network. These networks are not namespaced. For example, you can have a network named `l2-network` referenced from two different `NetworkAttachmentDefinition` objects that exist in different namespaces. This ensures that pods making use of the `NetworkAttachmentDefinition` in their own namespaces can communicate over the same secondary network. However, those two `NetworkAttachmentDefinition` objects must also share the same network-specific parameters, such as `topology`, `subnets`, `mtu`, and `excludeSubnets`.
`type` | `string` | The name of the CNI plugin to configure. This value must be set to `ovn-k8s-cni-overlay`.
`topology` | `string` | The topological configuration for the network. Must be one of `layer2` or `localnet`.
`subnets` | `string` | The subnet to use for the network across the cluster. For `"topology":"layer2"` deployments, IPv6 and dual-stack subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing.
`mtu` | `string` | The maximum transmission unit (MTU). The default value, `1300`, is automatically set by the kernel.
`netAttachDefName` | `string` | The metadata `namespace` and `name` of the network attachment definition object where this configuration must be included. For example, if this configuration is defined in a `NetworkAttachmentDefinition` in namespace `ns1` named `l2-network`, this value must be set to `ns1/l2-network`.
`excludeSubnets` | `string` | A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods.
`vlanID` | `integer` | If `topology` is set to `localnet`, the specified VLAN tag is assigned to traffic from this additional network. The default is to not assign a VLAN tag.
The multi-network policy API, which is provided by the MultiNetworkPolicy
custom resource definition (CRD) in the k8s.cni.cncf.io
API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets
field. Refer to the following table for details:
`subnets` field specified | Allowed multi-network policy selectors
---|---
Yes | `podSelector`, `namespaceSelector`, and `ipBlock`
No | `ipBlock` only
For example, the following multi-network policy is valid only if the `subnets` field is defined in the additional network CNI configuration for the additional network named `blue2`:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
name: allow-same-namespace
annotations:
k8s.v1.cni.cncf.io/policy-for: blue2
spec:
podSelector:
ingress:
- from:
- podSelector: {}
The following example uses the ipBlock
network policy selector, which is always valid for an OVN-Kubernetes additional network:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
name: ingress-ipblock
annotations:
k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
podSelector:
matchLabels:
name: access-control
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 10.200.0.0/30
The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments.
Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster.
The following JSON example configures a switched secondary network:
{
"cniVersion": "0.3.1",
"name": "l2-network",
"type": "ovn-k8s-cni-overlay",
"topology":"layer2",
"subnets": "10.100.200.0/24",
"mtu": 1300,
"netAttachDefName": "ns1/l2-network",
"excludeSubnets": "10.100.200.0/29"
}
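Because the `netAttachDefName` value must match the namespace and name of the object that carries the configuration, a minimal `NetworkAttachmentDefinition` wrapper for this example might look like the following sketch, assuming the object is created in the `ns1` namespace:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "ns1/l2-network",
      "excludeSubnets": "10.100.200.0/29"
    }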
The switched localnet
topology interconnects the workloads created as Network Attachment Definitions (NAD) through a cluster-wide logical switch to a physical network.
The NMState Operator is installed. For more information, see About the Kubernetes NMState Operator.
You must map an additional network to the OVN bridge to use it as an OVN-Kubernetes additional network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS).
You can create a NodeNetworkConfigurationPolicy object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: ''.
When attaching an additional network, you can either use the existing br-ex
bridge or create a new bridge. Which approach to use depends on your specific network infrastructure.
If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex
bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly.
If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your additional network. This approach provides for traffic isolation from your primary cluster network.
The localnet1
network is mapped to the br-ex
bridge in the following example:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: mapping (1)
spec:
nodeSelector:
node-role.kubernetes.io/worker: '' (2)
desiredState:
ovn:
bridge-mappings:
- localnet: localnet1 (3)
bridge: br-ex (4)
state: present (5)
1 | The name for the configuration object. |
2 | A node selector that specifies the nodes to apply the node network configuration policy to. |
3 | The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network. |
4 | The name of the OVS bridge on the node. This value is required only if you specify state: present . |
5 | The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . |
In the following example, the localnet2
network interface is attached to the ovs-br1
bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as an additional network.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: ovs-br1-multiple-networks (1)
spec:
nodeSelector:
node-role.kubernetes.io/worker: '' (2)
desiredState:
interfaces:
- name: ovs-br1 (3)
description: |-
A dedicated OVS bridge with eth1 as a port
allowing all VLANs and untagged traffic
type: ovs-bridge
state: up
bridge:
allow-extra-patch-ports: true
options:
stp: false
port:
- name: eth1 (4)
ovn:
bridge-mappings:
- localnet: localnet2 (5)
bridge: ovs-br1 (6)
state: present (7)
1 | The name for the configuration object. |
2 | A node selector that specifies the nodes to apply the node network configuration policy to. |
3 | A new OVS bridge, separate from the default bridge used by OVN-Kubernetes for all cluster traffic. |
4 | A network device on the host system to associate with this new OVS bridge. |
5 | The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network. |
6 | The name of the OVS bridge on the node. This value is required only if you specify state: present . |
7 | The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . |
This declarative approach is recommended because the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently.
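After you define a policy, a typical workflow, sketched here with an assumed file name of `mapping.yaml`, is to apply it and then confirm that the policy reports a successful status:

$ oc apply -f mapping.yaml
$ oc get nncp

With the NMState Operator installed, `nncp` is the short name for the NodeNetworkConfigurationPolicy resource.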
The following JSON example configures a localnet secondary network:
{
"cniVersion": "0.3.1",
"name": "ns1-localnet-network",
"type": "ovn-k8s-cni-overlay",
"topology":"localnet",
"subnets": "202.10.130.112/28",
"vlanID": 33,
"mtu": 1500,
"netAttachDefName": "ns1/localnet-network"
"excludeSubnets": "10.100.200.0/29"
}
You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks
annotation.
The following example provisions a pod with a secondary attachment that uses the `l2-network` attachment configuration presented in this guide.
apiVersion: v1
kind: Pod
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: l2-network
name: tinypod
namespace: ns1
spec:
containers:
- args:
- pause
image: k8s.gcr.io/e2e-test-images/agnhost:2.36
imagePullPolicy: IfNotPresent
name: agnhost-container
The following example provisions a pod with a static IP address.
apiVersion: v1
kind: Pod
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: '[
{
"name": "l2-network", (1)
"mac": "02:03:04:05:06:07", (2)
"interface": "myiface1", (3)
"ips": [
"192.0.2.20/24"
] (4)
}
]'
name: tinypod
namespace: ns1
spec:
containers:
- args:
- pause
image: k8s.gcr.io/e2e-test-images/agnhost:2.36
imagePullPolicy: IfNotPresent
name: agnhost-container
1 | The name of the network. This value must be unique across all NetworkAttachmentDefinitions . |
2 | The MAC address to be assigned for the interface. |
3 | The name of the network interface to be created for the pod. |
4 | The IP addresses to be assigned to the network interface. |
The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins.
You can use the following IP address assignment types:
Static assignment.
Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network.
Dynamic assignment through the Whereabouts IPAM CNI plugin.
The following table describes the configuration for static IP address assignment:
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `static` is required.
`addresses` | `array` | An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
`routes` | `array` | An array of objects specifying routes to configure inside the pod.
`dns` | `object` | Optional: An array of objects specifying the DNS configuration.
The `addresses` array requires objects with the following fields:
Field | Type | Description
---|---|---
`address` | `string` | An IP address and network prefix that you specify. For example, if you specify `10.10.21.10/24`, then the additional network is assigned an IP address of `10.10.21.10` and the netmask is `255.255.255.0`.
`gateway` | `string` | The default gateway to route egress network traffic to.
The `routes` array requires objects with the following fields:

Field | Type | Description
---|---|---
`dst` | `string` | The IP address range in CIDR format, such as `192.168.17.0/24` or `0.0.0.0/0` for the default route.
`gw` | `string` | The gateway where network traffic is routed.
The `dns` configuration object has the following fields:

Field | Type | Description
---|---|---
`nameservers` | `array` | An array of one or more IP addresses to send DNS queries to.
`domain` | `string` | The default domain to append to a hostname. For example, if the domain is set to `example.com`, a DNS lookup query for `example-host` is rewritten as `example-host.example.com`.
`search` | `array` | An array of domain names to append to an unqualified hostname, such as `example-host`, during a DNS lookup query.
{
"ipam": {
"type": "static",
"addresses": [
{
"address": "191.168.1.7/24"
}
]
}
}
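The `addresses`, `routes`, and `dns` objects can be combined in one configuration. The following sketch shows one possible combination; every address in it is an illustrative value:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.1.7/24",
        "gateway": "192.168.1.1"
      }
    ],
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "192.168.1.1"
      }
    ],
    "dns": {
      "nameservers": [ "192.168.1.1" ],
      "domain": "example.com",
      "search": [ "example.com" ]
    }
  }
}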
The following JSON describes the configuration for dynamic IP address assignment with DHCP.
Renewal of DHCP leases
A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example shim network attachment definition.
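The following sketch shows what such a shim definition might look like; it is patterned on the whereabouts-shim example later in this section, and the `dhcp-shim` name is an illustrative choice:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }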
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `dhcp` is required.
{
"ipam": {
"type": "dhcp"
}
}
The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.
The following table describes the configuration for dynamic IP address assignment with Whereabouts:
Field | Type | Description
---|---|---
`type` | `string` | The IPAM address type. The value `whereabouts` is required.
`range` | `string` | An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses.
`exclude` | `array` | Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned.
{
"ipam": {
"type": "whereabouts",
"range": "192.0.2.192/27",
"exclude": [
"192.0.2.192/30",
"192.0.2.196/32"
]
}
}
The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down.
The whereabouts-reconciler
daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest.
To trigger the deployment of the whereabouts-reconciler
daemon set, you must manually create a whereabouts-shim
network attachment by editing the Cluster Network Operator custom resource (CR) file.
Use the following procedure to deploy the whereabouts-reconciler
daemon set.
Edit the Network.operator.openshift.io
custom resource (CR) by running the following command:
$ oc edit network.operator.openshift.io cluster
Include the additionalNetworks
section shown in this example YAML extract within the spec
definition of the custom resource (CR):
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
# ...
spec:
additionalNetworks:
- name: whereabouts-shim
namespace: default
rawCNIConfig: |-
{
"name": "whereabouts-shim",
"cniVersion": "0.3.1",
"type": "bridge",
"ipam": {
"type": "whereabouts"
}
}
type: Raw
# ...
Save the file and exit the text editor.
Verify that the whereabouts-reconciler
daemon set deployed successfully by running the following command:
$ oc get all -n openshift-multus | grep whereabouts-reconciler
pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s
pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s
pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s
pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s
pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s
pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s
daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s
The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might otherwise exhaust the available IP addresses and prevent new pods from getting an IP allocated to them.
Use this procedure to change the frequency at which the IP reconciler runs.
You installed the OpenShift CLI (oc
).
You have access to the cluster as a user with the cluster-admin
role.
You have deployed the whereabouts-reconciler
daemon set, and the whereabouts-reconciler
pods are up and running.
Run the following command to create a ConfigMap
object named whereabouts-config
in the openshift-multus
namespace with a specific cron expression for the IP reconciler:
$ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *"
This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements.
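For example, to run the reconciler once per day at 4:30 AM instead, you could use the following expression; the schedule itself is an illustrative choice:

$ oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="30 4 * * *"

If the config map already exists, update it with `oc edit configmap whereabouts-config -n openshift-multus` rather than re-creating it.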
Retrieve information about resources related to the whereabouts-reconciler
daemon set and pods within the openshift-multus
namespace by running the following command:
$ oc get all -n openshift-multus | grep whereabouts-reconciler
pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s
pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s
pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s
pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s
pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s
pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s
daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s
Run the following command to verify that the whereabouts-reconciler
pod runs the IP reconciler with the configured interval:
$ oc -n openshift-multus logs whereabouts-reconciler-2p7hw
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME
2024-02-02T16:33:54Z [verbose] using expression: */15 * * * *
2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". New cron expression: */15 * * * *
2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * *
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE
2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE
2024-02-02T16:45:00Z [verbose] starting reconciler run
2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data
2024-02-02T16:45:00Z [debug] listing IP pools
2024-02-02T16:45:00Z [debug] no IP addresses to cleanup
2024-02-02T16:45:00Z [verbose] reconciler success
Dual-stack IP address assignment can be configured with the `ipRanges` parameter for:
IPv4 addresses
IPv6 addresses
multiple IP address assignment

Set `type` to `whereabouts`.
Use `ipRanges` to allocate IP addresses as shown in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
additionalNetworks:
- name: whereabouts-shim
namespace: default
type: Raw
rawCNIConfig: |-
{
"name": "whereabouts-dual-stack",
"cniVersion": "0.3.1,
"type": "bridge",
"ipam": {
"type": "whereabouts",
"ipRanges": [
{"range": "192.168.10.0/24"},
{"range": "2001:db8::/64"}
]
}
}
Attach the network to a pod. For more information, see "Adding a pod to an additional network".
Verify that all IP addresses are assigned.
Run the following command to verify that the IP addresses are assigned to the pod's network interfaces:
$ oc exec -it mypod -- ip a
The Cluster Network Operator (CNO) manages additional network definitions. When
you specify an additional network to create, the CNO creates the
NetworkAttachmentDefinition
object automatically.
Do not edit the `NetworkAttachmentDefinition` objects that the CNO manages. Doing so might disrupt network traffic on your additional network.
Install the OpenShift CLI (oc
).
Log in as a user with cluster-admin
privileges.
Optional: Create the namespace for the additional networks:
$ oc create namespace <namespace_name>
To edit the CNO configuration, enter the following command:
$ oc edit networks.operator.openshift.io cluster
Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
# ...
additionalNetworks:
- name: tertiary-net
namespace: namespace2
type: Raw
rawCNIConfig: |-
{
"cniVersion": "0.3.1",
"name": "tertiary-net",
"type": "ipvlan",
"master": "eth1",
"mode": "l2",
"ipam": {
"type": "static",
"addresses": [
{
"address": "192.168.1.23/24"
}
]
}
}
Save your changes and quit the text editor to commit your changes.
Confirm that the CNO created the NetworkAttachmentDefinition
object by running the following command. There might be a delay before the CNO creates the object.
$ oc get network-attachment-definitions -n <namespace>
where:
<namespace>
Specifies the namespace for the network attachment that you added to the CNO configuration.
NAME AGE
tertiary-net 14m
Install the OpenShift CLI (oc
).
Log in as a user with cluster-admin
privileges.
Create a YAML file with your additional network configuration, such as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: next-net
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "work-network",
"type": "host-device",
"device": "eth1",
"ipam": {
"type": "dhcp"
}
}
To create the additional network, enter the following command:
$ oc apply -f <file>.yaml
where:
<file>
Specifies the name of the file containing the YAML manifest.
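As a check, you can confirm that the object exists. This assumes the `next-net` definition from the preceding example was applied to the current project:

$ oc get network-attachment-definitions next-net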
In OpenShift Container Platform 4.14 and later, the ability to create a MAC-VLAN, IP-VLAN, or VLAN subinterface based on a master interface in a container namespace is generally available.
This feature allows you to create the master interface as part of the pod network configuration in a separate network attachment definition. You can then base the VLAN, MACVLAN, or IPVLAN on this interface without requiring knowledge of the network configuration of the node.
To ensure the use of a container namespace master interface, specify the `linkInContainer` parameter and set its value to `true` in the VLAN, MACVLAN, or IPVLAN plugin configuration, depending on the particular type of additional network.
An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces.
The following example shows how to configure this setup.
You installed the OpenShift CLI (oc
).
You have access to the cluster as a user with the cluster-admin
role.
You have installed the SR-IOV Network Operator.
Create a dedicated container namespace where you want to deploy your pod by using the following command:
$ oc new-project test-namespace
Create an SR-IOV node policy:
Create an SriovNetworkNodePolicy
object, and then save the YAML in the sriov-node-network-policy.yaml
file:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
name: sriovnic
namespace: openshift-sriov-network-operator
spec:
deviceType: netdevice
isRdma: false
needVhostNet: true
nicSelector:
vendor: "15b3" (1)
deviceID: "101b" (2)
rootDevices: ["00:05.0"]
numVfs: 10
priority: 99
resourceName: sriovnic
nodeSelector:
feature.node.kubernetes.io/network-sriov.capable: "true"
The SR-IOV network node policy configuration example, with the setting `deviceType: netdevice`, is tailored specifically for Mellanox NICs.
1 | The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. |
2 | The device hexadecimal code of the SR-IOV network device. |
Apply the YAML by running the following command:
$ oc apply -f sriov-node-network-policy.yaml
Applying this might take some time due to the node requiring a reboot.
Create an SR-IOV network:
Create the SriovNetwork
custom resource (CR) for the additional SR-IOV network attachment as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml
:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
name: sriov-network
namespace: openshift-sriov-network-operator
spec:
networkNamespace: test-namespace
resourceName: sriovnic
spoofChk: "off"
trust: "on"
Apply the YAML by running the following command:
$ oc apply -f sriov-network-attachment.yaml
Create the VLAN additional network:
Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml
:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: vlan-100
namespace: test-namespace
spec:
config: |
{
"cniVersion": "0.4.0",
"name": "vlan-100",
"plugins": [
{
"type": "vlan",
"master": "ext0", (1)
"mtu": 1500,
"vlanId": 100,
"linkInContainer": true, (2)
"ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]}
}
]
}
1 | The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation. |
2 | The linkInContainer parameter must be specified. |
Apply the YAML file by running the following command:
$ oc apply -f vlan100-additional-network-configuration.yaml
Create a pod definition by using the earlier specified networks:
Using the following YAML example, create a file named pod-a.yaml
file:
The manifest below includes two resources: a namespace with privileged pod security labels, and the pod itself.
apiVersion: v1
kind: Namespace
metadata:
name: test-namespace
labels:
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged
security.openshift.io/scc.podSecurityLabelSync: "false"
---
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: test-namespace
annotations:
k8s.v1.cni.cncf.io/networks: '[
{
"name": "sriov-network",
"namespace": "test-namespace",
"interface": "ext0" (1)
},
{
"name": "vlan-100",
"namespace": "test-namespace",
"interface": "ext0.100"
}
]'
spec:
securityContext:
runAsNonRoot: true
containers:
- name: nginx-container
image: nginxinc/nginx-unprivileged:latest
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
ports:
- containerPort: 80
seccompProfile:
type: "RuntimeDefault"
1 | The name to be used as the master for the VLAN interface. |
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml
Get detailed information about the nginx-pod
within the test-namespace
by running the following command:
$ oc describe pods nginx-pod -n test-namespace
Name: nginx-pod
Namespace: test-namespace
Priority: 0
Node: worker-1/10.46.186.105
Start Time: Mon, 14 Aug 2023 16:23:13 -0400
Labels: <none>
Annotations: k8s.ovn.org/pod-networks:
{"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0...
k8s.v1.cni.cncf.io/network-status:
[{
"name": "ovn-kubernetes",
"interface": "eth0",
"ips": [
"10.131.0.26"
],
"mac": "0a:58:0a:83:00:1a",
"default": true,
"dns": {}
},{
"name": "test-namespace/sriov-network",
"interface": "ext0",
"mac": "6e:a7:5e:3f:49:1b",
"dns": {},
"device-info": {
"type": "pci",
"version": "1.0.0",
"pci": {
"pci-address": "0000:d8:00.2"
}
}
},{
"name": "test-namespace/vlan-100",
"interface": "ext0.100",
"ips": [
"1.1.1.1"
],
"mac": "6e:a7:5e:3f:49:1b",
"dns": {}
}]
k8s.v1.cni.cncf.io/networks:
[ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i...
openshift.io/scc: privileged
Status: Running
IP: 10.131.0.26
IPs:
IP: 10.131.0.26
The approach of creating a subinterface can be applied to other types of master interfaces. Follow this procedure to create a subinterface based on a bridge master interface in a container namespace.
You have installed the OpenShift CLI (oc
).
You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin
privileges.
Create a dedicated container namespace where you want to deploy your pod by running the following command:
$ oc new-project test-namespace
Using the following YAML example, create a bridge NetworkAttachmentDefinition
custom resource (CR) file named bridge-nad.yaml
:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: bridge-network
spec:
config: '{
"cniVersion": "0.4.0",
"name": "bridge-network",
"type": "bridge",
"bridge": "br-001",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"subnet": "10.0.0.0/24",
"routes": [{"dst": "0.0.0.0/0"}]
}
}'
Run the following command to apply the NetworkAttachmentDefinition
CR to your OpenShift Container Platform cluster:
$ oc apply -f bridge-nad.yaml
Verify that the NetworkAttachmentDefinition
CR has been created successfully by running the following command:
$ oc get network-attachment-definitions
NAME AGE
bridge-network 15s
Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml
for the IPVLAN additional network configuration:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: ipvlan-net
namespace: test-namespace
spec:
config: '{
"cniVersion": "0.3.1",
"name": "ipvlan-net",
"type": "ipvlan",
"master": "ext0", (1)
"mode": "l3",
"linkInContainer": true, (2)
"ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]}
}'
1 | Specifies the Ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation. |
2 | Specifies that the master interface is in the container network namespace. |
Apply the YAML file by running the following command:
$ oc apply -f ipvlan-additional-network-configuration.yaml
Verify that the NetworkAttachmentDefinition
CR has been created successfully by running the following command:
$ oc get network-attachment-definitions
NAME AGE
bridge-network 87s
ipvlan-net 9s
Using the following YAML example, create a file named pod-a.yaml
for the pod definition:
apiVersion: v1
kind: Pod
metadata:
name: pod-a
namespace: test-namespace
annotations:
k8s.v1.cni.cncf.io/networks: '[
{
"name": "bridge-network",
"interface": "ext0" (1)
},
{
"name": "ipvlan-net",
"interface": "ext1"
}
]'
spec:
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: test-pod
image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ALL]
1 | Specifies the name to be used as the master for the IPVLAN interface. |
Apply the YAML file by running the following command:
$ oc apply -f pod-a.yaml
Verify that the pod is running by using the following command:
$ oc get pod -n test-namespace
NAME READY STATUS RESTARTS AGE
pod-a 1/1 Running 0 2m36s
Show network interface information about the pod-a
resource within the test-namespace
by running the following command:
$ oc exec -n test-namespace pod-a -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::488b:91ff:fe84:a94b/64 scope link
valid_lft forever preferred_lft forever
4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0
valid_lft forever preferred_lft forever
inet6 fe80::bcda:bdff:fe7e:f437/64 scope link
valid_lft forever preferred_lft forever
5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1
valid_lft forever preferred_lft forever
inet6 fe80::beda:bd00:17e:f437/64 scope link
valid_lft forever preferred_lft forever
This output shows that the network interface ext1
is associated with the physical interface ext0
.