Welcome to this guide on how to attach multiple network interfaces to pods in Kubernetes using Multus CNI. Before diving in, let us cover a few fundamentals.

Kubernetes is a container orchestration tool that has gained enormous popularity in recent years, pushing many companies to modernise their applications and rethink their infrastructure strategies. It provides a powerful platform for automating the deployment, scaling, and management of containerized applications. It abstracts away the underlying infrastructure while providing a consistent and declarative approach to managing applications, which makes it easier to deploy and scale services efficiently.

The Container Network Interface (CNI) is a specification and set of libraries that define how container runtimes interact with networking plugins to establish and manage network connectivity for containers. It provides a consistent and flexible approach to container networking, making it easier for container runtimes and network plugins to work together.
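
To make this concrete, each node carries one or more small JSON files under /etc/cni/net.d/ that tell the container runtime which plugin to invoke. A minimal sketch of such a configuration (the bridge plugin and subnet here are illustrative, not taken from this guide's cluster):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}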

By default, only one network interface (apart from the loopback) is attached to a pod. This can be a disadvantage in scenarios with complex networking requirements, as it limits network functionality and makes network segregation (for example, separating management, storage, and data-plane traffic) difficult.

Multus is a Container Network Interface (CNI) plugin that enables you to attach multiple network interfaces to pods, making it easy to create multi-homed pods. Multus acts as a “meta-plugin”: rather than providing networking itself, it calls other CNI plugins to configure each interface.
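
To illustrate the “meta-plugin” idea, this is roughly what the configuration Multus generates on each node looks like: Multus itself is registered as the CNI plugin, and your existing default CNI becomes its first delegate (the flannel delegate and paths below are illustrative; your cluster's default CNI will differ):

{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "default-cni-network",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}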

Below is a diagram of a simple Multus setup: two pods each use two interfaces (eth0 and net1) for communication between the pods, while two network interfaces on the worker node (eth0 and eth1) provide connectivity outside the node.

Attach multiple network interfaces to pods in Kubernetes using Multus CNI

A notable feature is that Multus CNI adheres to the Network Custom Resource Definition (CRD) standard set by the Kubernetes Network Plumbing Working Group. By following this de-facto standard, it offers a consistent and standardized approach for defining configurations for additional network interfaces within Kubernetes clusters.

In this guide, we will learn how you can easily attach multiple network interfaces to pods in Kubernetes using Multus CNI.

1. Getting started with setup

For this guide, you need the following:

  • A Kubernetes cluster whose machines have multiple network interfaces attached. Any conformant distribution will do; this guide shows the admin kubeconfig paths for RKE2 and K0s below.

Next, install kubectl:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
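
Confirm that kubectl is installed and on your PATH:

kubectl version --client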

Export your admin config for the cluster:

##For RKE2
export PATH=$PATH:/var/lib/rancher/rke2/bin 
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

##For K0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
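
These exports only last for the current shell session. To make them persistent, append them to your shell profile, shown here for the RKE2 paths above:

echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> ~/.bashrc
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> ~/.bashrc
source ~/.bashrc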

Verify that the cluster is ready using the command:

$ kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master    Ready    control-plane   14m     v1.27.4
worker1   Ready    <none>          9m55s   v1.27.4
worker2   Ready    <none>          9m49s   v1.27.4

2. Install and Configure Multus CNI

There are two options to deploy Multus CNI. These are:

  • Thin Plugin: the original deployment, with a more limited feature set
  • Thick Plugin: the newer deployment, which consists of two binaries, the multus-daemon and the multus-shim CNI plugin. The multus-daemon is deployed to all nodes as a local agent and provides additional features, such as metrics, that were not available with the thin plugin deployment.

To install Multus, you need to clone the Multus CNI repository:

git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni

This repository contains both the thick and thin options. Here, we will go for the Thin Plugin, which can be installed quickly with the command:

##Thin Plugin
kubectl apply -f ./deployments/multus-daemonset.yml

If you want the Thick Plugin, you can use the command:

##Thick Plugin
kubectl apply -f ./deployments/multus-daemonset-thick.yml
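
If you prefer not to clone the repository, the same manifests can be applied straight from GitHub (shown here for the thick plugin on the master branch; consider pinning a release tag instead):

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml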

Once executed, verify that the Multus pods are up and running:

$ kubectl get pods --all-namespaces | grep -i multus
kube-system    kube-multus-ds-6pn6j             1/1     Running   0               12s
kube-system    kube-multus-ds-hcp4r             1/1     Running   0               12s
kube-system    kube-multus-ds-tfrkd             1/1     Running   0               12s
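
The installer also places the Multus binary in /opt/cni/bin on every node and generates a configuration in /etc/cni/net.d that wraps your existing default CNI. You can confirm this on any node; expect a file such as 00-multus.conf alongside your default CNI's configuration:

ls /etc/cni/net.d/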


3. Create Network Attachment Definition

The next thing to do is create a Network Attachment Definition for the CNI you wish to use as the plugin for the additional interface. First, ensure that the CNI you want to use is supported and present in the /opt/cni/bin directory:

$ ls /opt/cni/bin 
bandwidth  dhcp   firewall  host-device  ipvlan    macvlan      portmap  sbr     tuning  vrf
bridge     dummy  flannel   host-local   loopback  multus-shim  ptp      static  vlan
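
If a plugin you need is missing from that directory, the reference CNI plugins (which include macvlan and ipvlan) can be installed from the containernetworking/plugins releases. A minimal sketch, assuming version v1.3.0 on an amd64 host (check the project's releases page for the latest version, and repeat on every node):

CNI_VERSION=v1.3.0
curl -LO "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz"
sudo tar -xzf "cni-plugins-linux-amd64-${CNI_VERSION}.tgz" -C /opt/cni/bin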

We will use the macvlan CNI for this guide, but you can also use ipvlan if interested. You also need to ensure that you have a secondary network interface attached to your worker nodes. In my case, the secondary NIC is ens19:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether de:57:17:e0:f9:48 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 192.168.200.56/24 brd 192.168.200.255 scope global noprefixroute ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::dc57:17ff:fee0:f948/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether be:5a:31:04:c7:59 brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet 192.168.200.175/24 brd 192.168.200.255 scope global dynamic noprefixroute ens19
       valid_lft 37300sec preferred_lft 37300sec
    inet6 fe80::e3dd:10fc:1651:1e5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Now create a Network Attachment Definition YAML file.

vim network-definition.yml

Add one of the below configurations depending on your preferred option:

  • For macvlan CNI in bridge mode, use:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
   name: multus-conf
spec:
   config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens19",
      "mode": "bridge",
      "ipam": {
         "type": "host-local",
         "subnet": "192.168.200.0/24",
         "rangeStart": "192.168.200.100",
         "rangeEnd": "192.168.200.216",
         "gateway": "198.168.200.1"
      }
 }'
  • For ipvlan CNI in l3 mode, use:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
   name: multus-conf
spec:
   config: '{
      "cniVersion": "0.3.0",
      "type": "ipvlan",
      "master": "ens19",
      "mode": "l3",
      "ipam": {
         "type": "host-local",
         "subnet": "192.168.200.0/24",
         "rangeStart": "192.168.200.100",
         "rangeEnd": "192.168.200.216",
         "gateway": "198.168.200.1"
      }
 }'
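
Note that host-local IPAM allocates addresses independently on each node, so with several workers the same IP can be handed out twice on the secondary network. If that is a concern, you can pin addresses by swapping the ipam stanza for the static IPAM plugin (a sketch; the address below is an example), or look at a cluster-wide allocator such as Whereabouts:

      "ipam": {
         "type": "static",
         "addresses": [
            { "address": "192.168.200.50/24", "gateway": "192.168.200.1" }
         ]
      }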

In the file, replace the network interface ens19 with an interface available on the hosts in your cluster, and adjust the subnet, address range, and gateway to match your network. Once the changes have been made, apply the manifest:

kubectl apply -f network-definition.yml

Verify if everything is okay:

$ kubectl get network-attachment-definitions
NAME          AGE
multus-conf   4s

$ kubectl describe network-attachment-definitions multus-conf
Name:         multus-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2023-08-11T09:27:41Z
  Generation:          1
  Resource Version:    176404
  UID:                 c7bb65eb-7f66-44fb-94a4-b4e1add45529
Spec:
  Config:  { "cniVersion": "0.3.0", "type": "macvlan", "master": "ens19", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "192.168.200.0/24", "rangeStart": "192.168.200.100", "rangeEnd": "192.168.200.216", "gateway": "192.168.200.1" } }
Events:    <none>

4. Attaching Multiple Network Interfaces to a Pod

Now we can verify that the Multus CNI plugin works as desired by attaching an additional network to a Kubernetes pod. We will create a simple application that uses the network created in the step above.

cat <<EOF | kubectl apply -f - 
apiVersion: v1
kind: Pod
metadata:
  name: app1
  annotations:
    k8s.v1.cni.cncf.io/networks: multus-conf
spec:
  containers:
  - name: app1
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF

We will also create another app on the same network:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app2
  annotations:
    k8s.v1.cni.cncf.io/networks: multus-conf 
spec:
  containers:
  - name: app2
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
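
Give the pods a few seconds to pull the image and start, then confirm both are Running:

kubectl get pods app1 app2 -o wide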

You should see the annotations if you describe the pod:

$ kubectl describe pod app1
Name:             app1
Namespace:        default
Priority:         0
Service Account:  default
Node:             node2/192.168.200.175
Start Time:       Fri, 11 Aug 2023 12:28:56 +0300
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "mynet",
                        "interface": "eth0",
                        "ips": [
                            "10.244.2.8"
                        ],
                        "mac": "86:69:28:4f:54:b3",
                        "default": true,
                        "dns": {},
                        "gateway": [
                            "10.244.2.1"
                        ]
                    },{
                        "name": "default/multus-conf",
                        "interface": "net1",
                        "ips": [
                            "192.168.200.100"
                        ],
                        "mac": "2a:1b:4d:89:66:c0",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks: multus-conf
Status:           Running
IP:               10.244.2.8
IPs:
  IP:  10.244.2.8
Containers:
.....

To add more network interfaces, create multiple Network Attachment Definitions and declare each of them under the pod's annotations when creating it. For example:

....
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
            { "name" : "macvlan-conf" },
            { "name" : "ipvlan-conf" }
    ]'
....
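
Multus also accepts a simpler comma-separated form of the same annotation, and an @ suffix lets you pick the interface name inside the pod (the plugin names here are illustrative):

  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf,ipvlan-conf@net5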

Now view the IP address of the first app:

$ kubectl exec -it app1 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 4a:80:ec:0f:4b:8a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.15/24 brd 10.244.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::4880:ecff:fe0f:4b8a/64 scope link 
       valid_lft forever preferred_lft forever
3: net1@if7: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 62:2f:dd:ec:66:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.100/24 brd 192.168.200.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::622f:dd00:1ec:66f5/64 scope link 
       valid_lft forever preferred_lft forever

For app2, use:

$ kubectl exec -it app2 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether a2:e4:af:05:43:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.16/24 brd 10.244.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a0e4:afff:fe05:4301/64 scope link 
       valid_lft forever preferred_lft forever
3: net1@if7: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 62:2f:dd:ec:66:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.101/24 brd 192.168.200.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::622f:dd00:2ec:66f5/64 scope link 
       valid_lft forever preferred_lft forever

From the output, the two apps have an additional interface, net1, and each pod has an IP address in the specified range. You can now test whether the pods can communicate:

kubectl exec -it app1 -- ping -I net1 192.168.200.101

You should see successful ping replies from 192.168.200.101, confirming that app1 can reach app2 over net1.

Test the other way:

kubectl exec -it app2 -- ping -I net1 192.168.200.100

The replies confirm connectivity in the reverse direction as well.

If you used macvlan and set the mode to bridge, you should also be able to access the pods from other machines on your local network.

$ ping 192.168.200.100
PING 192.168.200.100 (192.168.200.100) 56(84) bytes of data.
64 bytes from 192.168.200.100: icmp_seq=1 ttl=64 time=0.618 ms
64 bytes from 192.168.200.100: icmp_seq=2 ttl=64 time=0.286 ms
64 bytes from 192.168.200.100: icmp_seq=3 ttl=64 time=0.334 ms
64 bytes from 192.168.200.100: icmp_seq=4 ttl=64 time=0.277 ms
64 bytes from 192.168.200.100: icmp_seq=5 ttl=64 time=0.328 ms
64 bytes from 192.168.200.100: icmp_seq=6 ttl=64 time=0.281 ms
64 bytes from 192.168.200.100: icmp_seq=7 ttl=64 time=0.348 ms
64 bytes from 192.168.200.100: icmp_seq=8 ttl=64 time=0.317 ms
64 bytes from 192.168.200.100: icmp_seq=9 ttl=64 time=0.330 ms
......
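
One macvlan caveat to be aware of: the worker node hosting a pod cannot reach that pod directly through the parent interface (ens19 here), because macvlan does not switch traffic between the parent and its sub-interfaces; other hosts on the LAN are unaffected. If you need node-to-pod connectivity on this network, a common workaround is to give the host its own macvlan sub-interface, sketched below with an assumed free address:

sudo ip link add macvlan0 link ens19 type macvlan mode bridge
sudo ip addr add 192.168.200.250/24 dev macvlan0
sudo ip link set macvlan0 up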

Verdict

By following this guide, you should now be able to attach multiple network interfaces to pods in Kubernetes using Multus CNI. I hope you found it useful.
