The CNI VRF plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plug-in. The virtual network created by this plug-in is associated with a physical interface that you specify.
The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically.
Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.
Install the OpenShift Container Platform CLI (oc).
Log in to the OpenShift cluster as a user with cluster-admin privileges.
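Before you begin, you can optionally confirm your identity and that your account can edit the CNO configuration. These checks are a suggestion and are not part of the documented procedure:
$ oc whoami
$ oc auth can-i edit networks.operator.openshift.io
The second command prints yes if you have sufficient privileges.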
Open the CNO CR for editing by running the following command:
$ oc edit networks.operator.openshift.io cluster
Extend the CR that you are editing by adding the rawCNIConfig configuration for the additional network. The following example YAML configures the CNI VRF plug-in:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: test-network-1
    namespace: test-1
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [ (1)
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "191.168.1.23/24"
              }
            ]
          }
        },
        {
          "type": "vrf", (2)
          "vrfname": "example-vrf-name", (3)
          "table": 1001 (4)
        }
      ]
    }'
1 | plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plug-in configuration.
2 | type must be set to vrf.
3 | vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created.
4 | Optional. table is the routing table ID. By default, the tableid parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF.
VRF functions correctly only when the resource is of type netdevice.
Save your changes and quit the text editor to commit them.
Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment. There might be a delay before the CNO creates the CR.
$ oc get network-attachment-definitions -n <namespace>
NAME AGE
additional-network-1 14m
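To inspect the configuration that the CNO rendered, you can print the CR as YAML. The following command is a suggestion rather than part of the documented procedure, and it assumes the test-network-1 name and test-1 namespace from the example CR:
$ oc get network-attachment-definitions test-network-1 -n test-1 -o yaml
The spec.config field of the returned object contains the raw CNI JSON from rawCNIConfig, including the vrf plug-in entry.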
To verify that the VRF CNI is correctly configured and the additional network attachment is attached, do the following:
Create a network that uses the VRF CNI.
Assign the network to a pod. A sketch of a pod manifest that requests the attachment is shown after this procedure.
Verify that the pod network attachment is connected to the VRF additional network. Open a remote shell to the pod and run the following command:
$ ip vrf show
Name              Table
-----------------------
red                 10
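Optionally, you can also list the routes that were placed in the VRF routing table. This extra check is not part of the documented procedure; the VRF name red matches the example output above:
$ ip route show vrf red
Any routes that belong to interfaces enslaved to the VRF appear in its dedicated routing table instead of the main table.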
Confirm that the VRF interface is the master of the secondary interface:
$ ip link
5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
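The following manifest is a minimal sketch of how a pod can request the network attachment from the verification steps above. It assumes the test-network-1 attachment in the test-1 namespace from the example CR; the pod name, image, and command are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: vrf-test-pod
  namespace: test-1
  annotations:
    # Request the additional network; use <namespace>/<name> if the
    # NetworkAttachmentDefinition is in a different namespace.
    k8s.v1.cni.cncf.io/networks: test-network-1
spec:
  containers:
  - name: test
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
When the pod starts, the secondary interface is added to the pod (shown as net1 in the ip link output above) and the vrf plug-in moves it into the example-vrf-name VRF.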