Kubernetes has become one of the most popular container orchestration tools in the tech industry. This is because it provides many features and benefits, including automatic load balancing, service discovery and DNS, auto-scaling, self-healing, rolling updates and rollbacks, resource isolation, multi-cloud support, RBAC, and more. There are many Kubernetes distributions on the market today. Popular ones include AKS, EKS, GKE, Rancher, OpenShift, IBM Cloud Kubernetes Service, MicroK8s, K3s, Minikube, HPE Ezmeral Container Platform, etc.

In this guide, we will learn how to install and use Cilium CNI in your Kubernetes Cluster.

What is Cilium CNI?

Cilium is an open-source networking and security project that provides high-performance networking and load balancing on Kubernetes. Its main objective is to enhance the container orchestration platform's capabilities while enabling secure, scalable and structured communication between the various resources in the cluster.

The amazing features and benefits tagged to Cilium CNI are:

  • High-Performance Cloud Native Networking (CNI): It can be used to enhance the speed and efficiency of the cloud native and Kubernetes networks. It is built from the ground up for highly dynamic cloud native environments that contain 1000s of containers that are created and destroyed within seconds.
  • Scalability: It is built to scale, and it can be used for both small and thousands of workloads as it is powered by eBPF networking.
  • Network Security: Callium does not only focus on performance but also security. It comes with robust security features such as identity-based security that spans beyond the traditional IP address-based ACLs
  • Integration: It can be easily integrated with most Kubernetes distributions, providing networking and a layer of security through the CNI plugin. It has been tested and validated across most cloud and Kubernetes distros.
  • Loadbalancing: It can be used as a layer 4 load balancer for your Kubernetes cluster. It can attract traffic with BGP and accelerate it leveraging XDP and eBPF. When used together, these technologies create a robust and secure implementation of Load Balancing.
  • Deep Visibility and Monitoring: It can also be used to get deep insights into the cluster service mesh, network policies, and traffic metrics which can be vital when troubleshooting and monitoring.
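
To illustrate the identity-based security model mentioned above, here is a minimal sketch of a CiliumNetworkPolicy that selects workloads by label (their identity) rather than by IP address. The `app` labels are hypothetical placeholders:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  # Select the pods this policy applies to by labels, not IP addresses.
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    # Only pods labelled app=frontend may reach the backend pods.
    - fromEndpoints:
        - matchLabels:
            app: frontend
```

Because the policy is tied to labels, it keeps working even as pods are created and destroyed and their IP addresses change.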

Let’s plunge in!

1. Install a Kubernetes Cluster

Before we proceed, you need to have a Kubernetes cluster deployed. Cilium can be used with both an existing and a new Kubernetes cluster.

Below are some guides to help you spin up a cluster.

You can choose to skip installing a CNI for the cluster and proceed with the steps below. If you have already installed one, you can still proceed, but you will need to disable or delete the existing CNI first.

For Minikube, you need to start it as shown:

minikube start --network-plugin=cni --cni=false

2. Install Cilium CLI

When installing Cilium CNI for Kubernetes, we must first install the Cilium CLI. It is recommended that you install cilium-cli v0.15.0 or later.

The Cilium CLI is used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features such as Clustermesh, Hubble etc.

To install the latest Cilium CLI version, use:

  • On Linux
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  • On MacOS
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
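
The download steps above fetch both the tarball and its `.sha256sum` file, then verify the archive before extracting it. The verification pattern itself can be demonstrated with any local file (`demo.txt` here is just a stand-in for the downloaded tarball):

```shell
# Create a file and record its SHA-256 checksum, as the release page does.
printf 'hello cilium\n' > demo.txt
sha256sum demo.txt > demo.txt.sha256sum

# Later (e.g. after a download), confirm the file is intact.
# Prints "demo.txt: OK" and exits 0 if the checksum matches.
sha256sum --check demo.txt.sha256sum

# Clean up the demo files.
rm demo.txt demo.txt.sha256sum
```

If the file had been corrupted or tampered with in transit, `sha256sum --check` would report FAILED and exit non-zero, stopping the install before a bad binary is extracted.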

For more details on how to install Cilium CLI on other systems, see the releases page.

Verify the installation using:

$ cilium version --client
cilium-cli: v0.15.16 compiled with go1.21.4 on linux/amd64
cilium image (default): v1.14.5
cilium image (stable): v1.14.5

3. Install Cilium CNI on Kubernetes

Once Cilium CLI has been installed, you can install Cilium CNI on any Kubernetes cluster. But first, if you have an existing CNI, you need to disable it.

This may differ depending on the Kubernetes distribution.

  • On RKE, you need to make changes in your cluster.yml file from:
network:
  options:
    flannel_backend_type: "vxlan"
  plugin: "canal"

To:

network:
  plugin: none
  • On k3s, disable the default CNI:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -

Once the default or existing CNI has been disabled, you can install Cilium CNI. Below, we cover two methods of installing it.

Method 1: Install Cilium CNI using the CLI

With the CLI installed, we can easily install Cilium CNI with the command:

cilium install

Sample Output:

🔮 Auto-detected Kubernetes kind: K3s
ℹ️  Using Cilium version 1.14.5
🔮 Auto-detected cluster name: default

Verify the installation with the command:

cilium status --wait

Sample Output:

(Screenshot: output of cilium status --wait)

Method 2: Install Cilium CNI using Helm

For this method, you need to have Helm 3 installed. You can use a dedicated Helm installation guide to achieve that.

Now add the Helm chart:

helm repo add cilium https://helm.cilium.io/

Now install Cilium CNI using the commands:

##On AKS
helm install cilium cilium/cilium --version 1.14.5 \
  --namespace kube-system \
  --set aksbyocni.enabled=true \
  --set nodeinit.enabled=true

##On RKE (set the target namespace first)
CILIUM_NAMESPACE=kube-system
helm install cilium cilium/cilium --version 1.14.5 \
   --namespace $CILIUM_NAMESPACE

##On k3s
CILIUM_NAMESPACE=kube-system
helm install cilium cilium/cilium --version 1.14.5 \
   --namespace $CILIUM_NAMESPACE \
   --set operator.replicas=1

##Generic
helm install cilium cilium/cilium --version 1.14.5 \
  --namespace kube-system

Verify the installation:

$ kubectl -n kube-system get pods --watch
NAME                               READY   STATUS    RESTARTS        AGE
cilium-4mj94                       1/1     Running   0               52s
cilium-operator-f5dcdcc8d-925rp    0/1     Pending   0               52s
cilium-operator-f5dcdcc8d-pn75j    1/1     Running   0               52s
coredns-5dd5756b68-d6gqp           1/1     Running   0               31s
etcd-minikube                      1/1     Running   0               6m19s
kube-apiserver-minikube            1/1     Running   0               6m19s
kube-controller-manager-minikube   1/1     Running   0               6m19s
kube-proxy-6khtl                   1/1     Running   0               6m6s
kube-scheduler-minikube            1/1     Running   0               6m19s
storage-provisioner                1/1     Running   1 (5m36s ago)   6m18s

In case there were pods running before Cilium was deployed, those pods will still be connected to the old CNI. You therefore need to delete them so they are recreated and managed by Cilium. For example:

$ kubectl --namespace kube-system delete pods -l k8s-app=kube-dns
pod "coredns-77ccd57875-7tpzn" deleted

Now the pods will start again and connect to the new CNI:

$ kubectl --namespace kube-system get pods
NAME                                     READY   STATUS      RESTARTS   AGE
cilium-operator-6d77c7bddb-dld99         1/1     Running     0          27m
cilium-xcplf                             1/1     Running     0          27m
local-path-provisioner-957fdf8bc-pcp5z   1/1     Running     0          28m
helm-install-traefik-crd-9xbfp           0/1     Completed   0          28m
metrics-server-648b5df564-gdpn5          1/1     Running     0          28m
helm-install-traefik-tczgj               0/1     Completed   2          28m
svclb-traefik-2aa7ae53-45zpz             2/2     Running     0          25m
traefik-768bdcdcdd-j9w8f                 1/1     Running     0          25m
coredns-77ccd57875-99sbp                 1/1     Running     0          5s

Restart all the unmanaged pods (pods that do not use the host network and are still attached to the old CNI):

kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
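
The one-liner above lists every pod with its hostNetwork setting, keeps only the rows showing `<none>` (pods on the pod network rather than the host network), and feeds each pod's namespace and name to `kubectl delete pod`. The text-processing part can be dry-run on a canned sample (the pod names below are hypothetical):

```shell
# Sample rows in the same shape as the kubectl custom-columns query:
# NAMESPACE  NAME  HOSTNETWORK
printf '%s\n' \
  'kube-system   coredns-abc123        <none>' \
  'kube-system   kube-proxy-xyz789     true' \
  'default       demo-app-55d8f        <none>' |
  grep '<none>' | awk '{print "-n "$1" "$2}'
# Only the two pods on the pod network are selected:
#   -n kube-system coredns-abc123
#   -n default demo-app-55d8f
```

Each emitted line becomes the arguments of one `kubectl delete pod` invocation via `xargs -L 1`, so host-network pods like kube-proxy are left untouched.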

4. Test Cilium CNI on Kubernetes

To test if all is okay, we can deploy the connectivity-check that tests the connectivity between the pods. First create a dedicated namespace for this:

kubectl create ns cilium-test

Then proceed and deploy the check:

kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/connectivity-check/connectivity-check.yaml

This command will deploy a series of deployments with various connectivity paths. The readiness of the pods shows that all is working and connections are established.

View the pods with the command:

kubectl get pods -n cilium-test

Sample Output:

(Screenshot: pods in the cilium-test namespace)

The pods that check multi-node functionality will remain in the Pending state. This is still okay, because these pods need at least 2 nodes to be scheduled successfully, so don't be worried if you see that.

Once the test is done, you can delete the namespace:

kubectl delete ns cilium-test

You can also use the Cilium CLI command below to test connectivity:

cilium connectivity test

Sample Output:

(Screenshot: output of cilium connectivity test)

Verdict

In this guide, we have learned how to install and use Cilium CNI in your Kubernetes Cluster. Now you can enjoy the amazing features associated with Cilium.
