Multus CNI provides the capability to attach multiple network interfaces to Pods in OpenShift Container Platform. This gives you flexibility when you must configure Pods that deliver network functionality, such as switching or routing.
Multus CNI is useful in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:
You can send traffic along two different planes in order to manage how much traffic is along each plane.
You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.
All of the Pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every Pod has an eth0 interface that is attached to the cluster-wide Pod network. You can view the interfaces for a Pod by using the oc exec -it <pod_name> -- ip a command. If you add network interfaces by using Multus CNI, they are named net1, net2, …, netN.
To attach additional network interfaces to a Pod, you must create configurations that define how the interfaces are attached. Each interface is specified by using a Custom Resource (CR) of type NetworkAttachmentDefinition. A CNI configuration inside each of these CRs defines how that interface is created. Multus CNI is a CNI plug-in that can call other CNI plug-ins, which allows the use of other CNI plug-ins to create additional network interfaces.
For high performance networking, use the SR-IOV Device Plugin with Multus CNI.
Execute the following steps to attach additional network interfaces to Pods:
Create a CNI configuration as a custom resource.
Annotate the Pod with the configuration name.
Verify that the attachment was successful by viewing the status annotation.
CNI configurations are JSON data with only a single required field, type. The configuration in the additional field is free-form JSON data, which allows CNI plug-ins to make the configurations in the form that they require. Different CNI plug-ins use different configurations. See the documentation specific to the CNI plug-in that you want to use.
An example CNI configuration:
{
"cniVersion": "0.3.0", (1)
"type": "loopback", (2)
"additional": "<plugin-specific-json-data>" (3)
}
1 | cniVersion : Specifies the CNI version that is used. The CNI plug-in uses this information to check whether it is using a valid version. |
2 | type : Specifies which CNI plug-in binary to call on disk. In this example, the loopback binary is specified. Therefore, it creates a loopback-type network interface. |
3 | additional : The <plugin-specific-json-data> value provided in the code above is an example. Each CNI plug-in specifies the configuration parameters it needs in JSON. These are specific to the CNI plug-in binary that is named in the type field. |
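A CNI configuration, then, is valid JSON whose only mandatory key is type. As a rough sketch, not part of the product, that rule can be expressed in a few lines of Python:

```python
import json

def validate_cni_config(raw: str) -> dict:
    """Parse a CNI configuration and check its only required field, "type"."""
    config = json.loads(raw)
    if "type" not in config:
        raise ValueError('CNI configuration is missing the required "type" field')
    return config

# The loopback example from above passes; everything besides "type" is optional.
example = '{"cniVersion": "0.3.0", "type": "loopback"}'
config = validate_cni_config(example)
print(config["type"])  # loopback
```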
Additional interfaces for Pods are defined in CNI configurations that are stored as Custom Resources (CRs). These CRs can be created, listed, edited, and deleted by using the oc tool.
The following procedure configures a macvlan interface on a Pod. This configuration might not apply to all production environments, but you can use the same procedure for other CNI plug-ins.
If you want to attach an additional interface to a Pod, the CR that defines the interface must be in the same project (namespace) as the Pod. |
Create a project to store CNI configurations as CRs and the Pods that will use the CRs.
$ oc new-project multinetwork-example
Create the CR that will define an additional network interface. Create a YAML file called macvlan-conf.yaml with the following contents:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition (1)
metadata:
name: macvlan-conf (2)
spec:
config: '{ (3)
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
1 | kind: NetworkAttachmentDefinition : This is the name for the CR where this configuration will be stored. It is a custom extension of Kubernetes that defines how networks are attached to Pods. |
2 | name maps to the annotation, which is used in the next step. |
3 | config : The CNI configuration is packaged in the config field. |
The configuration is specific to a plug-in, which enables macvlan. Note the type line in the CNI configuration portion. Aside from the IPAM (IP address management) parameters for networking, in this example the master field must reference a network interface that resides on the node(s) hosting the Pod(s).
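The host-local IPAM block above hands out addresses from rangeStart through rangeEnd inside subnet. A quick arithmetic check of that pool, using Python's standard ipaddress module with the values copied from the example above:

```python
import ipaddress

# Values from the macvlan-conf example above.
subnet = ipaddress.ip_network("192.168.1.0/24")
range_start = ipaddress.ip_address("192.168.1.200")
range_end = ipaddress.ip_address("192.168.1.216")

# Both range endpoints must fall inside the subnet.
assert range_start in subnet and range_end in subnet

# Number of addresses host-local can assign from this range (inclusive).
pool_size = int(range_end) - int(range_start) + 1
print(pool_size)  # 17
```

If you expect more than 17 Pods on this network at once, widen the range accordingly.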
Run the following command to create the CR:
$ oc create -f macvlan-conf.yaml
You can manage the CRs for additional interfaces by using the oc CLI.
Use the following command to list the CRs for additional interfaces:
$ oc get network-attachment-definitions.k8s.cni.cncf.io
Use the following command to delete CRs for additional interfaces:
$ oc delete network-attachment-definitions.k8s.cni.cncf.io macvlan-conf
To create a Pod that uses the additional interface, use an annotation that refers to the CR. Create a YAML file called samplepod.yaml for a Pod with the following contents:
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf (1)
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: centos/tools
1 | The annotations field contains k8s.v1.cni.cncf.io/networks: macvlan-conf , which correlates to the name field in the CR defined earlier. |
Run the following command to create the samplepod Pod:
$ oc create -f samplepod.yaml
To verify that an additional network interface has been created and attached to the Pod, use the following command to list the IPv4 address information:
$ oc exec -it samplepod -- ip -4 addr
Three interfaces are listed in the output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 (1)
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP link-netnsid 0 (2)
    inet 10.244.1.4/24 scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link-netnsid 0 (3)
    inet 192.168.1.203/24 scope global net1
       valid_lft forever preferred_lft forever
1 | lo : A loopback interface. |
2 | eth0 : The interface that connects to the cluster-wide default network. |
3 | net1 : The new interface that you just created. |
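If you want to check the verification output programmatically, the interface names can be pulled out of the ip output with a small script. This is only an illustrative sketch; the abbreviated output below mirrors the example above:

```python
import re

# Output shape as printed by `ip -4 addr` inside the Pod (abbreviated).
ip_output = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 10.244.1.4/24 scope global eth0
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.203/24 scope global net1
"""

# Interface lines start with "<index>: <name>[@peer]:".
names = re.findall(r"^\d+: ([^:@]+)", ip_output, flags=re.MULTILINE)
print(names)  # ['lo', 'eth0', 'net1']
```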
To attach more than one additional interface to a Pod, specify multiple names, in comma-delimited format, in the annotations field in the Pod definition.
The following annotations field in a Pod definition specifies different CRs for the additional interfaces:
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf, tertiary-conf, quaternary-conf
The following annotations field in a Pod definition specifies the same CR for the additional interfaces:
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf, macvlan-conf
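The comma-delimited value maps one entry per additional interface: each entry names a CR, and the resulting interfaces are named net1, net2, …, netN in the order listed. A hypothetical sketch of that mapping (an illustration, not Multus code):

```python
# The comma-delimited annotation value from the Pod definition above.
annotation = "macvlan-conf, tertiary-conf, quaternary-conf"

# Each entry names a NetworkAttachmentDefinition CR in the Pod's namespace.
requested = [name.strip() for name in annotation.split(",")]

# The additional interfaces are named net1, net2, ..., netN in order.
interfaces = {f"net{i}": name for i, name in enumerate(requested, start=1)}
print(interfaces)
# {'net1': 'macvlan-conf', 'net2': 'tertiary-conf', 'net3': 'quaternary-conf'}
```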
After the Pod is running, you can review the configurations of the additional interfaces created. To view the sample Pod from the earlier example, execute the following command.
$ oc describe pod samplepod
The metadata section of the output contains a list of annotations, which are displayed in JSON format:
Annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"ips": [
"10.131.0.10"
],
"default": true,
"dns": {}
},{
"name": "macvlan-conf", (1)
"interface": "net1", (2)
"ips": [ (3)
"192.168.1.200"
],
"mac": "72:00:53:b4:48:c4", (4)
"dns": {} (5)
}]
1 | name refers to the custom resource name, macvlan-conf . |
2 | interface refers to the name of the interface in the Pod. |
3 | ips is a list of IP addresses as assigned to the Pod. |
4 | mac is the MAC address of the interface. |
5 | dns refers to DNS for the interface. |
The first annotation, k8s.v1.cni.cncf.io/networks: macvlan-conf, refers to the CR created in the example. This annotation was specified in the Pod definition.
The second annotation is k8s.v1.cni.cncf.io/networks-status. There are two interfaces listed under k8s.v1.cni.cncf.io/networks-status.
The first interface describes the interface for the default network, openshift-sdn. This interface is created as eth0. It is used for communications within the cluster.
The second interface is the additional interface that you created, net1. The output above lists some key values that were configured when the interface was created, for example, the IP addresses that were assigned to the Pod.
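Because networks-status is plain JSON, you can parse it in a script to separate the default network from the additional interfaces. A short sketch using the example values above:

```python
import json

# The networks-status annotation value from the `oc describe pod` output above.
status_json = '''[
  {"name": "openshift-sdn", "ips": ["10.131.0.10"], "default": true, "dns": {}},
  {"name": "macvlan-conf", "interface": "net1",
   "ips": ["192.168.1.200"], "mac": "72:00:53:b4:48:c4", "dns": {}}
]'''

statuses = json.loads(status_json)

# The entry with "default": true is the cluster-wide default network.
default = next(s for s in statuses if s.get("default"))
extras = [s for s in statuses if not s.get("default")]

print(default["name"])         # openshift-sdn
print(extras[0]["interface"])  # net1
print(extras[0]["ips"])        # ['192.168.1.200']
```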
The host-device plug-in connects an existing network device on a node directly to a Pod.
The following commands create a dummy device, using the dummy kernel module to back a virtual device, and assign the name exampledevice0 to the dummy device.
$ modprobe dummy
$ lsmod | grep dummy
$ ip link add exampledevice0 type dummy
To connect the dummy network device to a Pod, label the host, so that you can assign a Pod to the node where the device exists.
$ oc label nodes <your-worker-node-name> exampledevice=true
$ oc get nodes --show-labels
Create a YAML file called hostdevice-example.yaml for a custom resource to refer to this configuration:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: hostdevice-example
spec:
config: '{
"cniVersion": "0.3.0",
"type": "host-device",
"device": "exampledevice0"
}'
Run the following command to create the hostdevice-example CR:
$ oc create -f hostdevice-example.yaml
Create a YAML file for a Pod that refers to this name in the annotation. Include nodeSelector to assign the Pod to the node that you labeled.
apiVersion: v1
kind: Pod
metadata:
name: hostdevicesamplepod
annotations:
k8s.v1.cni.cncf.io/networks: hostdevice-example
spec:
containers:
- name: hostdevicesamplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: centos/tools
nodeSelector:
exampledevice: "true"
Run the following command to create the hostdevicesamplepod Pod:
$ oc create -f hostdevicesamplepod.yaml
View the additional interface that you created:
$ oc exec hostdevicesamplepod -- ip a
SR-IOV multinetwork support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/. |
OpenShift Container Platform includes the capability to use SR-IOV hardware on OpenShift Container Platform nodes, which enables you to attach SR-IOV virtual function (VF) interfaces to Pods in addition to other network interfaces.
Two components are required to provide this capability: the SR-IOV network device plug-in and the SR-IOV CNI plug-in.
The SR-IOV network device plug-in is a Kubernetes device plug-in for discovering, advertising, and allocating SR-IOV network virtual function (VF) resources. Device plug-ins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plug-ins give the Kubernetes scheduler awareness of which resources are exhausted, allowing Pods to be scheduled to worker nodes that have sufficient resources available.
The SR-IOV CNI plug-in plumbs VF interfaces allocated from the SR-IOV device plug-in directly into a Pod.
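As a rough illustration of why this scheduler awareness matters, a node is a candidate for a Pod only when its advertised VF count covers the Pod's request. The toy model below is not the plug-in's code; the node names are hypothetical:

```python
# Toy model of how a device plug-in's advertised resource gates scheduling.
# Per-node allocatable VF counts; hypothetical values for illustration.
allocatable = {"node-a": 4, "node-b": 0}

def schedulable_nodes(request: int) -> list:
    """Return the nodes with enough free VF resources for the request."""
    return [node for node, free in allocatable.items() if free >= request]

print(schedulable_nodes(1))  # ['node-a']
print(schedulable_nodes(5))  # [] -- no node can satisfy the request
```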
The following Network Interface Card (NIC) models are supported in OpenShift Container Platform:
Intel XXV710-DA2 25G card with vendor ID 0x8086 and device ID 0x158b
Mellanox MT27710 Family [ConnectX-4 Lx] 25G card with vendor ID 0x15b3 and device ID 0x1015
Mellanox MT27800 Family [ConnectX-5] 100G card with vendor ID 0x15b3 and device ID 0x1017
For Mellanox cards, ensure that SR-IOV is enabled in the firmware before provisioning VFs on the host. |
The SR-IOV device plug-in and the SR-IOV CNI do not handle the creation of SR-IOV VFs. To provision SR-IOV VFs on hosts, you must configure them manually. |
To use the SR-IOV network device plug-in and SR-IOV CNI plug-in, run both plug-ins in daemon mode on each node in your cluster.
Create a YAML file for the openshift-sriov namespace with the following contents:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-sriov
labels:
name: openshift-sriov
openshift.io/run-level: "0"
annotations:
openshift.io/node-selector: ""
openshift.io/description: "Openshift SR-IOV network components"
Run the following command to create the openshift-sriov namespace:
$ oc create -f openshift-sriov.yaml
Create a YAML file for the sriov-device-plugin service account with the following contents:
apiVersion: v1
kind: ServiceAccount
metadata:
name: sriov-device-plugin
namespace: openshift-sriov
Run the following command to create the sriov-device-plugin service account:
$ oc create -f sriov-device-plugin.yaml
Create a YAML file for the sriov-cni service account with the following contents:
apiVersion: v1
kind: ServiceAccount
metadata:
name: sriov-cni
namespace: openshift-sriov
Run the following command to create the sriov-cni service account:
$ oc create -f sriov-cni.yaml
Create a YAML file for the sriov-device-plugin DaemonSet with the following contents:
The SR-IOV network device plug-in daemon, when launched, discovers all of the configured SR-IOV VFs (of supported NIC models) on each node and advertises the discovered resources. The number of available SR-IOV VF resources that can be allocated can be reviewed by describing a node. |
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: sriov-device-plugin
namespace: openshift-sriov
annotations:
kubernetes.io/description: |
This daemon set launches the SR-IOV network device plugin on each node.
spec:
selector:
matchLabels:
app: sriov-device-plugin
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: sriov-device-plugin
component: network
type: infra
openshift.io/component: network
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
- operator: Exists
serviceAccountName: sriov-device-plugin
containers:
- name: sriov-device-plugin
image: quay.io/openshift/ose-sriov-network-device-plugin:v4.0.0
args:
- --log-level=10
securityContext:
privileged: true
volumeMounts:
- name: devicesock
mountPath: /var/lib/kubelet/
readOnly: false
- name: net
mountPath: /sys/class/net
readOnly: true
volumes:
- name: devicesock
hostPath:
path: /var/lib/kubelet/
- name: net
hostPath:
path: /sys/class/net
Run the following command to create the sriov-device-plugin DaemonSet:
$ oc create -f sriov-device-plugin.yaml
Create a YAML file for the sriov-cni DaemonSet with the following contents:
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: sriov-cni
namespace: openshift-sriov
annotations:
kubernetes.io/description: |
This daemon set launches the SR-IOV CNI plugin on SR-IOV capable worker nodes.
spec:
selector:
matchLabels:
app: sriov-cni
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: sriov-cni
component: network
type: infra
openshift.io/component: network
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
- operator: Exists
serviceAccountName: sriov-cni
containers:
- name: sriov-cni
image: quay.io/openshift/ose-sriov-cni:v4.0.0
securityContext:
privileged: true
volumeMounts:
- name: cnibin
mountPath: /host/opt/cni/bin
volumes:
- name: cnibin
hostPath:
path: /var/lib/cni/bin
Run the following command to create the sriov-cni DaemonSet:
$ oc create -f sriov-cni.yaml
Create a YAML file for the Custom Resource (CR) with the SR-IOV configuration. The name field in the following CR has the value sriov-conf.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-conf
annotations:
k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov (1)
spec:
config: '{
"type": "sriov", (2)
"name": "sriov-conf",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"routes": [{
"dst": "0.0.0.0/0"
}],
"gateway": "10.56.217.1"
}
}'
1 | The k8s.v1.cni.cncf.io/resourceName annotation is set to openshift.io/sriov . |
2 | type is set to sriov . |
Run the following command to create the sriov-conf CR:
$ oc create -f sriov-conf.yaml
Create a YAML file for a Pod that references the name of the NetworkAttachmentDefinition and requests one openshift.io/sriov resource:
apiVersion: v1
kind: Pod
metadata:
name: sriovsamplepod
annotations:
k8s.v1.cni.cncf.io/networks: sriov-conf
spec:
containers:
- name: sriovsamplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: centos/tools
resources:
requests:
openshift.io/sriov: '1'
limits:
openshift.io/sriov: '1'
Run the following command to create the sriovsamplepod Pod:
$ oc create -f sriovsamplepod.yaml
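As a sanity check, the extended resource that the Pod requests must match the resource name advertised in the CR's resourceName annotation, and Kubernetes requires requests to equal limits for extended resources. A minimal sketch with the values from the examples above:

```python
# Value of the k8s.v1.cni.cncf.io/resourceName annotation in the sriov-conf CR.
resource_name = "openshift.io/sriov"

# The resources section of the sriovsamplepod container spec.
pod_resources = {
    "requests": {"openshift.io/sriov": "1"},
    "limits": {"openshift.io/sriov": "1"},
}

# The Pod must request the same extended resource the CR advertises,
# and requests must equal limits for extended resources.
assert resource_name in pod_resources["requests"]
assert pod_resources["requests"][resource_name] == pod_resources["limits"][resource_name]
print("resource request matches", resource_name)
```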
View the additional interface by executing the ip command:
$ oc exec sriovsamplepod -- ip a