If you’re running a Kubernetes cluster on AWS using Amazon EKS, the default Container Network Interface (CNI) plugin is amazon-vpc-cni-k8s. With this plugin, each pod gets an IP address from the VPC network, so a pod has the same IP address inside the pod as it does on the VPC. The drawback of this CNI is the large number of VPC IP addresses required to run and manage large clusters. This is why other CNI plugins, such as Calico, are worth considering.

Calico is a free, open-source networking and network security plugin that supports a broad range of platforms, including Docker EE, OpenShift, Kubernetes, OpenStack, and bare metal. Calico offers true cloud-native scalability and high-performance networking: you can choose either Linux eBPF or the Linux kernel’s highly optimized standard networking pipeline as the data plane.

For multi-tenant Kubernetes environments where isolation of tenants from each other is key, Calico network policy enforcement can be used to implement network segmentation and tenant isolation. You can easily create network ingress and egress rules to ensure proper network controls are applied to services.
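As a sketch of what tenant isolation can look like, the standard Kubernetes NetworkPolicy below (enforced by Calico) restricts a hypothetical tenant-a namespace to intra-namespace traffic plus DNS. The namespace name is an illustrative assumption, and the kubernetes.io/metadata.name selector assumes that label is present on kube-system (recent Kubernetes versions add it automatically; on older clusters, label the namespace yourself).

```yaml
# Illustrative policy for a hypothetical "tenant-a" namespace:
# allow pods to talk only to other pods in the same namespace,
# plus DNS egress to kube-system.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolation
  namespace: tenant-a
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}    # only pods within tenant-a
  egress:
    - to:
        - podSelector: {}    # only pods within tenant-a
    - to:                    # allow DNS lookups
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
```

Apply it with kubectl apply -f and adjust the selectors to match your own tenancy model.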

Install Calico CNI plugin on Amazon EKS Kubernetes Cluster

These are the points to note before implementing the solution:

  • Calico is not supported when using Fargate with Amazon EKS.
  • If you have iptables rules outside of Calico policy, consider adding them to your Calico policies; otherwise they may be overridden by Calico.
  • If you’re using security groups for pods, traffic to pods on branch network interfaces is not subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only.

Step 1: Setup EKS Cluster

This guide assumes you have a newly created EKS Kubernetes cluster. If you don’t, the guide below can be used to deploy one.

Easily Setup Kubernetes Cluster on AWS with EKS

Once the cluster is running, confirm it is available with eksctl:

$ eksctl get cluster -o yaml
- name: My-EKS-Cluster
  region: eu-west-1

Step 2: Delete AWS VPC networking Pods

Since our EKS cluster will use Calico for pod networking, we must delete the aws-node DaemonSet to disable AWS VPC networking for pods.

$ kubectl delete ds aws-node -n kube-system
daemonset.apps "aws-node" deleted

Confirm all aws-node Pods have been deleted.

$ kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6987776bbd-4hj4v   1/1     Running   0          15h
coredns-6987776bbd-qrgs8   1/1     Running   0          15h
kube-proxy-mqrrk           1/1     Running   0          14h
kube-proxy-xx28m           1/1     Running   0          14h

Step 3: Install Calico CNI on EKS Kubernetes Cluster

Download the Calico YAML manifest:

wget https://docs.projectcalico.org/manifests/calico-vxlan.yaml

Then apply the manifest to deploy the Calico CNI on the Amazon EKS cluster:

kubectl apply -f calico-vxlan.yaml

This is my deployment output showing all objects being created.

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the calico-node DaemonSet deployed in the kube-system namespace:

$ kubectl get ds calico-node --namespace kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         0       2            0           kubernetes.io/os=linux   14s

The calico-node DaemonSet should have the DESIRED number of pods in the READY state.

$ kubectl get ds calico-node --namespace kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   48s

The running Calico pods can be checked with kubectl as well:

$ kubectl get pods -n kube-system | grep calico
calico-node-bmshb                                     1/1     Running   0          4m7s
calico-node-skfpt                                     1/1     Running   0          4m7s
calico-typha-69f668897f-zfh56                         1/1     Running   0          4m11s
calico-typha-horizontal-autoscaler-869dbcdddb-6sx2h   1/1     Running   0          4m7s

Step 4: Create new nodegroup and delete old one

If you already had nodes in your cluster, you’ll need to add a new node group, then remove the old node groups and the machines in them.

To create an additional nodegroup, use:

eksctl create nodegroup --cluster=<clusterName> [--name=<nodegroupName>]

List your clusters to get the cluster name:

$ eksctl get cluster

A node group can be created from the CLI or from a config file.

  • Create Node group from CLI
eksctl create nodegroup --cluster <clusterName> --name <nodegroupname> --node-type <instancetype> --node-ami auto

To change the maximum number of Pods per node, add:

--max-pods-per-node <maxpodsnumber>


eksctl create nodegroup --cluster my-eks-cluster --name eks-ng-02 --node-type t3.medium --node-ami auto --max-pods-per-node 150
  • Create from configuration file – Update the nodeGroups section. See the example below:
nodeGroups:
  - name: eks-ng-01
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true

  - name: eks-ng-02
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true

For managed node groups, replace nodeGroups with managedNodeGroups. When done, apply the configuration to create the node group:

eksctl create nodegroup --config-file=my-eks-cluster.yaml
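For reference, a minimal complete config file might look like the following. The cluster name and region are illustrative assumptions taken from the examples in this guide; match them to your own cluster.

```yaml
# my-eks-cluster.yaml - hypothetical example; adjust name and region
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-eks-cluster
  region: eu-west-1

nodeGroups:
  - name: eks-ng-02
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true
```

The metadata section must name an existing cluster; eksctl create nodegroup only adds the node groups listed under nodeGroups (or managedNodeGroups).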

Once the new node group is created, delete the old one; eksctl will cordon and drain the old nodes so all pods migrate to the new node group.

eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>

Or from a config file:

eksctl delete nodegroup --config-file=my-eks-cluster.yaml --include=<nodegroupName> --approve

If you check the nodes in your cluster, you’ll see that scheduling is disabled on the old nodes at first:

$ kubectl get nodes
NAME                                           STATUS                     ROLES    AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready                      <none>   3m57s   v1.17.11-eks-cfdc40
ip-10-255-103-17.eu-west-1.compute.internal    Ready,SchedulingDisabled   <none>   15h     v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready                      <none>   4m5s    v1.17.11-eks-cfdc40
ip-10-255-98-25.eu-west-1.compute.internal     Ready,SchedulingDisabled   <none>   15h     v1.17.11-eks-cfdc40

After a few minutes, the old nodes are deleted.

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready    <none>   4m45s   v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready    <none>   4m53s   v1.17.11-eks-cfdc40

If you describe the new pods, you should notice a change in their IP addresses:

$ kubectl describe pods coredns-6987776bbd-mvchx -n kube-system
Name:                 coredns-6987776bbd-mvchx
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ip-10-255-101-100.eu-west-1.compute.internal/
Start Time:           Mon, 26 Oct 2020 15:24:16 +0300
Labels:               eks.amazonaws.com/component=coredns
Annotations:          cni.projectcalico.org/podIP:
                      eks.amazonaws.com/compute-type: ec2
                      kubernetes.io/psp: eks.privileged
Status:               Running
Controlled By:  ReplicaSet/coredns-6987776bbd

Step 5: Install calicoctl command line tool

The calicoctl command-line tool enables cluster users to read, create, update, and delete Calico objects. Run the commands below to install calicoctl.

On Linux:

curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep linux-amd64 | grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-linux-amd64
sudo mv calicoctl-linux-amd64 /usr/local/bin/calicoctl

On macOS:

curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep darwin-amd64 | grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-darwin-amd64
sudo mv calicoctl-darwin-amd64 /usr/local/bin/calicoctl

Next, read how to configure calicoctl to connect to your datastore.
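As a starting point, when Calico uses the Kubernetes API datastore (as with the manifest install above), calicoctl can be pointed at it with a small config file. The kubeconfig path below is an assumption; adjust it to your environment.

```yaml
# /etc/calico/calicoctl.cfg - example configuration for the Kubernetes datastore
# The kubeconfig path is a placeholder; point it at your own kubeconfig.
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/user/.kube/config"
```

With this in place, a command such as calicoctl get nodes should list the cluster’s Calico nodes.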

