Rocky Linux 9 and AlmaLinux 9 will both be supported until May 31st, 2032 (a 10-year support lifespan), making them an ideal platform to run your Kubernetes cluster. In this article we share the steps to follow when setting up a 3-node Kubernetes cluster on a Rocky Linux 9 / AlmaLinux 9 system:

  • 1 Kubernetes control plane node
  • 2 Kubernetes worker nodes

My Server setup is as shown below.

Server IP        Server Hostname       Role
37.27.37.63      k8smaster.mylab.io    Master Node (Control Plane)
37.27.6.95       k8snode01.mylab.io    Worker Node 01
135.181.195.155  k8snode02.mylab.io    Worker Node 02

The infrastructure used in this guide is powered by Hetzner Cloud.

$ hcloud server list
ID         NAME                 STATUS    IPV4              IPV6                      PRIVATE NET   DATACENTER
41815406   k8snode01.mylab.io   running   37.27.6.95        2a01:4f9:c011:bf23::/64   -             hel1-dc2
41815407   k8snode02.mylab.io   running   135.181.195.155   2a01:4f9:c011:b7d7::/64   -             hel1-dc2
41815408   k8smaster.mylab.io   running   37.27.37.63       2a01:4f9:c012:c08a::/64   -             hel1-dc2

This setup is semi-automated using an Ansible playbook, which runs a series of tasks to configure the following:

  • Update system and install dependency packages
  • Disable swap (must be off when installing a Kubernetes cluster)
  • Set timezone and configure NTP time synchronization
  • Load required kernel modules and configure other sysctl configs
  • Configure /etc/hosts file on each node
  • Install and configure a container runtime: containerd, CRI-O, or Docker with Mirantis cri-dockerd
  • Configure firewalld if activated

1. Prepare your workstation machine

The workstation is where Ansible commands will be executed. It can also be one of the cluster nodes.

Install basic CLI tools on the machine.

### Ubuntu / Debian ###
sudo apt update
sudo apt install git wget curl vim bash-completion tmux

### CentOS / RHEL / Fedora / Rocky Linux ###
sudo yum -y install git wget curl vim bash-completion tmux

Next, install Ansible if it is not already available.

### Python3 ###
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user
python3 -m pip install ansible --user

### Python2 (legacy; note that get-pip.py for Python 2 lives under /pip/2.7/) ###
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
python get-pip.py --user
python -m pip install ansible --user
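
Alternatively, on Rocky Linux 9 / AlmaLinux 9 the ansible-core package can be installed straight from the AppStream repository:

sudo dnf -y install ansible-core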

Check Ansible version after installation:

$ ansible --version
ansible [core 2.15.8]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/.local/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/.local/bin/ansible
  python version = 3.9.18 (main, Sep  7 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
  jinja version = 3.1.3
  libyaml = True

Update the /etc/hosts file on your workstation machine:

$ sudo vim /etc/hosts
37.27.37.63      k8smaster.mylab.io  k8smaster
37.27.6.95       k8snode01.mylab.io  k8snode01
135.181.195.155  k8snode02.mylab.io  k8snode02

Generate SSH keys:

$ ssh-keygen -t rsa -b 4096 -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:wAufyZb3Zn/aEjo2Ds9/wnJrTTM2L3LsTlvtFXMiZcw [email protected]
The key's randomart image is:
+---[RSA 4096]----+
|OOo              |
|B**.             |
|EBBo. .          |
|===+ . .         |
|=*+++ . S        |
|*=++.o . .       |
|=.o. .. . .      |
| o. .    .       |
|   .             |
+----[SHA256]-----+

Create an SSH client configuration file with the following parameters:

$ vim ~/.ssh/config
Host *
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    IdentitiesOnly yes
    ConnectTimeout 0
    ServerAliveInterval 30

Copy the SSH public key to all Kubernetes cluster nodes:

ssh-copy-id username@ServerIP #Loop for all nodes.

# Example
ssh-copy-id  root@k8smaster
ssh-copy-id  root@k8snode01
ssh-copy-id  root@k8snode02
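
A minimal loop over the three nodes (assuming root login over SSH is permitted) looks like this:

for host in k8smaster k8snode01 k8snode02; do
  ssh-copy-id root@${host}
done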

2. Set correct hostname on all nodes

Login to each node in the cluster and configure correct hostname:

# Examples
# Master Node 01
sudo hostnamectl set-hostname k8smaster.mylab.io

# Worker Node 01
sudo hostnamectl set-hostname k8snode01.mylab.io
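
# Worker Node 02
sudo hostnamectl set-hostname k8snode02.mylab.io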

Logout then back in to confirm the hostname is set correctly:

$ hostnamectl
 Static hostname: k8smaster.mylab.io
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: b4c55e2d211141d786f56d387aafff8e
         Boot ID: 190eaf2401b14942bef3c55920f4c6de
  Virtualization: kvm
Operating System: Rocky Linux 9.3 (Blue Onyx)
     CPE OS Name: cpe:/o:rocky:rocky:9::baseos
          Kernel: Linux 5.14.0-362.13.1.el9_3.x86_64
    Architecture: x86-64
 Hardware Vendor: Hetzner
  Hardware Model: vServer
Firmware Version: 20171111

For a cloud instance using cloud-init, check out our guide on setting the hostname with cloud-init.

3. Prepare your cluster nodes for k8s install

I created an Ansible playbook in my GitHub repository that helps simplify these standard operations:

  • Install standard packages required to manage the nodes
  • Set up standard system requirements – disable swap, modify sysctl, set SELinux to permissive
  • Install and configure a container runtime of your choice – CRI-O, Docker or containerd
  • Install the Kubernetes packages – kubelet, kubeadm and kubectl
  • Configure firewalld on the Kubernetes master and worker nodes – open all required ports

Clone the git repo to your workstation machine:

git clone https://github.com/jmutai/k8s-pre-bootstrap.git

Navigate to the k8s-pre-bootstrap directory

cd k8s-pre-bootstrap

Update the inventory file with your Kubernetes cluster nodes. Example:

$ vim hosts
[k8snodes]
k8smaster
k8snode01
k8snode02
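
If you would rather not rely on /etc/hosts name resolution from the workstation, the same inventory can carry the connection details explicitly. A minimal sketch using standard Ansible inventory variables (ansible_host, ansible_user):

[k8snodes]
k8smaster ansible_host=37.27.37.63 ansible_user=root
k8snode01 ansible_host=37.27.6.95 ansible_user=root
k8snode02 ansible_host=135.181.195.155 ansible_user=root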

We also need to update the variables in the playbook file. The most important are:

  • Kubernetes version: k8s_version
  • Your timezone: timezone
  • Kubernetes CNI to use: k8s_cni
  • Container runtime: container_runtime
$ vim  k8s-prep.yml
- name: Prepare Kubernetes Nodes for Cluster bootstrapping
  hosts: k8snodes
  remote_user: root
  become: yes
  become_method: sudo
  #gather_facts: no
  vars:
    k8s_version: "1.30"                                  # Kubernetes version to be installed
    selinux_state: permissive                            # SELinux state to be set on k8s nodes
    timezone: "Africa/Nairobi"                           # Timezone to set on all nodes
    k8s_cni: calico                                      # calico, flannel
    container_runtime: cri-o                             # docker, cri-o, containerd
    pod_network_cidr: "172.18.0.0/16"                    # pod subnet if using cri-o runtime
    configure_firewalld: false                           # true / false (keep it false, k8s>1.19 have issues with firewalld)
    # Docker proxy support
    setup_proxy: false                                   # Set to true to configure proxy
    proxy_server: "proxy.example.com:8080"               # Proxy server address and port
    docker_proxy_exclude: "localhost,127.0.0.1"          # Addresses to exclude from proxy
  roles:
    - kubernetes-bootstrap

Validate your Ansible Playbook syntax:

$ ansible-playbook  --syntax-check -i hosts k8s-prep.yml
playbook: k8s-prep.yml

If your SSH private key has a passphrase, add it to the ssh-agent to avoid prompts while the playbook is executing:

eval `ssh-agent -s` && ssh-add
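
Optionally confirm that Ansible can reach every node in the inventory before running the full playbook. The ping module only tests SSH connectivity and the Python interpreter, not ICMP:

ansible -i hosts k8snodes -m ping -u root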

We can now execute the playbook to prepare our cluster nodes.

ansible-playbook -i hosts k8s-prep.yml

An extract from my Ansible execution is shown below.

PLAY [Prepare Kubernetes Nodes for Cluster bootstrapping] ********************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************************************************
ok: [k8snode02]
ok: [k8smaster]
ok: [k8snode01]

TASK [kubernetes-bootstrap : Add the OS specific variables] ******************************************************************************************************************************************
ok: [k8smaster] => (item=/private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat9.yml)
ok: [k8snode01] => (item=/private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat9.yml)
ok: [k8snode02] => (item=/private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat9.yml)

TASK [kubernetes-bootstrap : Include Pre-reqs setup task] ********************************************************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/pre_setup.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Put SELinux in permissive mode] *****************************************************************************************************************************************
ok: [k8smaster]
ok: [k8snode02]
ok: [k8snode01]

TASK [kubernetes-bootstrap : Update system packages] *************************************************************************************************************************************************
ok: [k8smaster]
ok: [k8snode02]
ok: [k8snode01]

TASK [kubernetes-bootstrap : Install some packages needed to configure the nodes] ********************************************************************************************************************
changed: [k8smaster] => (item=['vim', 'bash-completion', 'wget', 'curl', 'firewalld', 'python3-firewall', 'yum-utils', 'lvm2', 'device-mapper-persistent-data', 'iproute-tc'])
changed: [k8snode02] => (item=['vim', 'bash-completion', 'wget', 'curl', 'firewalld', 'python3-firewall', 'yum-utils', 'lvm2', 'device-mapper-persistent-data', 'iproute-tc'])
changed: [k8snode01] => (item=['vim', 'bash-completion', 'wget', 'curl', 'firewalld', 'python3-firewall', 'yum-utils', 'lvm2', 'device-mapper-persistent-data', 'iproute-tc'])

TASK [kubernetes-bootstrap : Disable firewalld service] **********************************************************************************************************************************************
changed: [k8snode02]
changed: [k8smaster]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Include task to disable swap] *******************************************************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/disable_swap.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Disable SWAP since kubernetes can't work with swap enabled (1/2)] *******************************************************************************************************
changed: [k8smaster]
changed: [k8snode02]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Disable SWAP in fstab since kubernetes can't work with swap enabled (2/2)] **********************************************************************************************
ok: [k8smaster]
ok: [k8snode02]
ok: [k8snode01]

TASK [kubernetes-bootstrap : Include task to configure timezone and ntp] *****************************************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/configure_timezone_ntp.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Configure timezone on all nodes] ****************************************************************************************************************************************
changed: [k8smaster]
changed: [k8snode02]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Ensure chrony package is installed] *************************************************************************************************************************************
ok: [k8smaster]
ok: [k8snode01]
ok: [k8snode02]

TASK [kubernetes-bootstrap : Enable and start chronyd service] ***************************************************************************************************************************************
ok: [k8smaster]
ok: [k8snode02]
ok: [k8snode01]

TASK [kubernetes-bootstrap : Synchronize time manually] **********************************************************************************************************************************************
changed: [k8smaster]
changed: [k8snode02]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Include task to load required kernel modules and sysctl configs] ********************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/load_kernel_modules_sysctl.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Load required modules] **************************************************************************************************************************************************
changed: [k8snode02] => (item=br_netfilter)
changed: [k8smaster] => (item=br_netfilter)
changed: [k8snode01] => (item=br_netfilter)
changed: [k8snode02] => (item=overlay)
changed: [k8smaster] => (item=overlay)
changed: [k8snode01] => (item=overlay)
changed: [k8smaster] => (item=ip_vs)
changed: [k8snode02] => (item=ip_vs)
changed: [k8snode01] => (item=ip_vs)
changed: [k8smaster] => (item=ip_vs_rr)
changed: [k8snode02] => (item=ip_vs_rr)
changed: [k8snode01] => (item=ip_vs_rr)
changed: [k8smaster] => (item=ip_vs_wrr)
changed: [k8snode02] => (item=ip_vs_wrr)
changed: [k8snode01] => (item=ip_vs_wrr)
changed: [k8smaster] => (item=ip_vs_sh)
changed: [k8snode02] => (item=ip_vs_sh)
changed: [k8snode01] => (item=ip_vs_sh)
ok: [k8smaster] => (item=nf_conntrack)
ok: [k8snode02] => (item=nf_conntrack)
ok: [k8snode01] => (item=nf_conntrack)

TASK [kubernetes-bootstrap : Create the .conf file to load the modules at bootup] ********************************************************************************************************************
changed: [k8smaster]
changed: [k8snode02]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Modify sysctl entries] **************************************************************************************************************************************************
changed: [k8smaster] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
changed: [k8snode02] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
changed: [k8snode01] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
changed: [k8snode02] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
changed: [k8smaster] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
changed: [k8snode01] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
changed: [k8smaster] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
changed: [k8snode02] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
changed: [k8snode01] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})

TASK [kubernetes-bootstrap : Include task to configure /etc/hosts file on each node] *****************************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/configure_etc_host_file.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Generate /etc/hosts file] ***********************************************************************************************************************************************
changed: [k8smaster]
changed: [k8snode02]
changed: [k8snode01]

TASK [kubernetes-bootstrap : Include task to configure docker] ***************************************************************************************************************************************
skipping: [k8smaster]
skipping: [k8snode01]
skipping: [k8snode02]

TASK [kubernetes-bootstrap : Include task to configure cri-o container runtime] **********************************************************************************************************************
included: /private/tmp/k8s-pre-bootstrap/roles/kubernetes-bootstrap/tasks/setup_crio.yml for k8smaster, k8snode01, k8snode02

TASK [kubernetes-bootstrap : Configure Cri-o YUM repository] *****************************************************************************************************************************************
changed: [k8smaster]
changed: [k8snode01]
changed: [k8snode02]

TASK [kubernetes-bootstrap : Setup required sysctl params] *******************************************************************************************************************************************
ok: [k8smaster] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
ok: [k8snode02] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
ok: [k8snode01] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
ok: [k8smaster] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
ok: [k8snode02] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
ok: [k8snode01] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
ok: [k8smaster] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
ok: [k8snode02] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
ok: [k8snode01] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})

Confirm there are no errors in the playbook output; the PLAY RECAP at the end should show failed=0 for every node.


Log in to one of the nodes and validate the settings below:

  • The configured /etc/hosts file contents can be checked using the cat command:
[root@k8smaster ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

37.27.37.63 k8smaster.mylab.io k8smaster
37.27.6.95 k8snode01.mylab.io k8snode01
135.181.195.155 k8snode02.mylab.io k8snode02
  • Status of cri-o service:
[root@k8smaster ~]# systemctl status crio
 crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-01-11 12:13:14 EAT; 17min ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 13950 (crio)
      Tasks: 8
     Memory: 21.2M
        CPU: 977ms
     CGroup: /system.slice/crio.service
             └─13950 /usr/bin/crio

Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.364003994+03:00" level=info msg="Updated default CNI network name to crio"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.364030052+03:00" level=info msg="CNI monitoring event CHMOD         \"/usr/libexec/cni/vrf;659fb144\""
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.364049032+03:00" level=info msg="CNI monitoring event RENAME        \"/usr/libexec/cni/vrf;659fb144\""
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.370310945+03:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conflist"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.374147282+03:00" level=info msg="Found CNI network loopback (type=loopback) at /etc/cni/net.d/200-loopback.conflist"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.374193281+03:00" level=info msg="Updated default CNI network name to crio"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.374220253+03:00" level=info msg="CNI monitoring event CREATE        \"/usr/libexec/cni/vrf\""
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.380391706+03:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conflist"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.384774768+03:00" level=info msg="Found CNI network loopback (type=loopback) at /etc/cni/net.d/200-loopback.conflist"
Jan 11 12:13:41 k8smaster.mylab.io crio[13950]: time="2024-01-11 12:13:41.384832716+03:00" level=info msg="Updated default CNI network name to crio"
  • Configured sysctl kernel parameters
[root@k8smaster ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
  • Ports opened by firewalld (only applicable when configure_firewalld is set to true in the playbook):
[root@k8smaster ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 80/tcp 443/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp 4789/udp 5473/tcp 179/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
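
You can also confirm that the Kubernetes tooling installed by the playbook is present on each node. Versions will reflect the k8s_version value set in the playbook, and the cri-o package name assumes the CRI-O runtime selected earlier:

kubeadm version -o short
kubectl version --client
rpm -q kubelet kubeadm kubectl cri-o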

4. Bootstrap Kubernetes Control Plane

The kubeadm init command is used to initialize the Kubernetes control plane. It runs a series of pre-flight checks to validate the system state before making any changes.

Below are the key options you should be aware of:

  • --apiserver-advertise-address: The IP address the API Server will advertise it is listening on. If not set, the default network interface is used.
  • --apiserver-bind-port: Port for the API Server to bind to; default is 6443.
  • --control-plane-endpoint: Specify a stable IP address or DNS name for the control plane.
  • --cri-socket: Path to the CRI socket to connect to.
  • --dry-run: Don't apply any changes; just output what would be done.
  • --image-repository: Choose a container registry to pull control plane images from; default is "registry.k8s.io".
  • --kubernetes-version: Choose a specific Kubernetes version for the control plane.
  • --pod-network-cidr: Specify the range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
  • --service-cidr: Use an alternative range of IP addresses for service VIPs. Default: "10.96.0.0/12".

The following table lists container runtimes and their associated socket paths:

Runtime                       Path to Unix domain socket
Docker Engine (cri-dockerd)   unix://var/run/cri-dockerd.sock
containerd                    unix://run/containerd/containerd.sock
CRI-O                         unix://var/run/crio/crio.sock
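
Before making any changes, you can preview the whole initialization with the --dry-run flag described above. A sketch that also passes the CRI-O socket explicitly (only required when more than one runtime is installed):

sudo kubeadm init --dry-run \
  --pod-network-cidr=172.18.0.0/16 \
  --cri-socket=unix://var/run/crio/crio.sock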

Checking Kubernetes Version release notes

Release notes can be found by reading the Changelog that matches your Kubernetes version.

Option 1: Bootstrapping single control plane node cluster

If you have plans to upgrade a single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set the shared endpoint for all control plane nodes.

But if this is meant for a test environment with a single control plane node, you can omit the --control-plane-endpoint option.

Login to the master node:

ssh root@k8smaster

Then initialize the control plane by executing the following command.

sudo kubeadm init --pod-network-cidr=172.18.0.0/16

Take note of the kubeconfig commands and cluster join commands for worker nodes.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 37.27.37.63:6443 --token h1q8cy.8tkc1fb6yrmtroxs \
	--discovery-token-ca-cert-hash sha256:0a8b2738df4d9116f698c68a85afafc7ac82677736cc2cef934cf5e93daeb7c4

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Confirm it works.

[root@k8smaster ~]# kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
k8smaster.mylab.io   Ready    control-plane   14m   v1.29.4
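
Optionally enable kubectl shell completion, since bash-completion was installed on the nodes earlier:

source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc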

Deploy Calico network plugin to the cluster

Install the Tigera Calico operator and custom resource definitions.

VER=$(curl --silent "https://api.github.com/repos/projectcalico/calico/releases/latest"|grep '"tag_name"'|sed -E 's/.*"([^"]+)".*/\1/'|sed 's/v//')
echo $VER
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v$VER/manifests/tigera-operator.yaml

Command execution output:

namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Next install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference.

wget https://raw.githubusercontent.com/projectcalico/calico/v$VER/manifests/custom-resources.yaml
sed -ie 's/192.168.0.0/172.18.0.0/g' custom-resources.yaml
kubectl apply -f custom-resources.yaml

Optionally, remove the taints on the control plane so that you can schedule pods on it (only needed if you want workloads to run on the control plane node; on recent releases the second command may report that the master taint was not found, which can be ignored).

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-

Wait for the pods to be running with the following command.

[root@k8smaster ~]# watch kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-85fdd68cb8-t68b9   1/1     Running   0          67s
calico-node-xp4j5                          1/1     Running   0          68s
calico-typha-79db89578b-9478m              1/1     Running   0          68s
csi-node-driver-2b4wg                      2/2     Running   0          68s

Option 2: Bootstrapping Multi-node Control Plane Cluster

Use the --control-plane-endpoint option to set the shared endpoint for all control plane nodes. This option accepts both IP addresses and DNS names that map to IP addresses.

Example of A records in Bind DNS server

; Create entries for the master nodes
k8s-master-01		IN	A	192.168.200.10
k8s-master-02		IN	A	192.168.200.11
k8s-master-03		IN	A	192.168.200.12

;
; The Kubernetes cluster ControlPlaneEndpoint; these point to the IPs of the masters
k8s-endpoint	IN	A	192.168.200.10
k8s-endpoint	IN	A	192.168.200.11
k8s-endpoint	IN	A	192.168.200.12

Example of equivalent entries in the /etc/hosts file

$ sudo vim /etc/hosts
192.168.200.10 k8s-master-01.example.com k8s-master-01
192.168.200.11 k8s-master-02.example.com k8s-master-02
192.168.200.12 k8s-master-03.example.com k8s-master-03
##  Kubernetes cluster ControlPlaneEndpoint Entries ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Using Load Balancer IP for ControlPlaneEndpoint

The ideal approach for HA setups is mapping the ControlPlane endpoint to a load balancer IP. The load balancer then points to the control plane nodes with some form of health checks.

# Entry in Bind DNS Server
k8s-endpoint	IN	A	192.168.200.8

# Entry in /etc/hosts file
192.168.200.8 k8s-endpoint.example.com  k8s-endpoint
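
Whichever method you use, confirm what the endpoint name resolves to before bootstrapping. dig is provided by the bind-utils package, while getent also honours /etc/hosts entries:

dig +short k8s-endpoint.example.com
getent hosts k8s-endpoint.example.com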

Bootstrap Multi-node Control Plane Kubernetes Cluster

Login to Master Node 01 from the bastion server or your workstation machine:

[root@k8s-bastion ~]# ssh k8s-master-01
Warning: Permanently added 'k8s-master-01' (ED25519) to the list of known hosts.
Last login: Fri Sep 24 18:07:55 2021 from 192.168.200.9
[root@k8s-master-01 ~]#

Update the /etc/hosts file with this node's IP address and the custom DNS name that maps to it:

[root@k8s-master-01 ~]# vim /etc/hosts
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint

To initialize the control-plane node run:

[root@k8s-master-01 ~]# kubeadm init \
  --pod-network-cidr=172.18.0.0/16 \
  --control-plane-endpoint=k8s-endpoint.example.com \
  --cri-socket=unix://var/run/crio/crio.sock \
  --upload-certs

Where:

  • k8s-endpoint.example.com is a valid DNS name configured for the ControlPlane endpoint
  • /var/run/crio/crio.sock is the CRI-O runtime socket file
  • 172.18.0.0/16 is the Pod network to be used in Kubernetes
  • --upload-certs uploads the certificates that should be shared across all control-plane instances in the cluster

If successful you’ll get an output with contents similar to this:

...output omitted...
[mark-control-plane] Marking the node k8s-master-01.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p11op9.eq9vr8gq9te195b9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
	--discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca \
	--control-plane --certificate-key 999110f4a07d3c430d19ca0019242f392e160216f3b91f421da1a91f1a863bba

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
	--discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca

Configure Kubectl as shown in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test by checking active Nodes:

[root@k8smaster ~]# kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
k8smaster.mylab.io   Ready    control-plane   14m   v1.29.4

Deploy Calico network plugin to the cluster

Install the Tigera Calico operator and custom resource definitions.

VER=$(curl --silent "https://api.github.com/repos/projectcalico/calico/releases/latest"|grep '"tag_name"'|sed -E 's/.*"([^"]+)".*/\1/'|sed 's/v//')
echo $VER
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v$VER/manifests/tigera-operator.yaml

The command output will be similar to the Option 1 output shown earlier, listing the created tigera-operator namespace, CRDs, RBAC objects and Deployment.

Next install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference.

wget https://raw.githubusercontent.com/projectcalico/calico/v$VER/manifests/custom-resources.yaml
sed -ie 's/192.168.0.0/172.18.0.0/g' custom-resources.yaml
kubectl apply -f custom-resources.yaml

Optionally, remove the taints on the control plane so that you can schedule pods on it (only needed if you want workloads to run on the control plane nodes; on recent releases the second command may report that the master taint was not found, which can be ignored).

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-

Wait for the pods to be running with the following command.

[root@k8smaster ~]# watch kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-85fdd68cb8-t68b9   1/1     Running   0          67s
calico-node-xp4j5                          1/1     Running   0          68s
calico-typha-79db89578b-9478m              1/1     Running   0          68s
csi-node-driver-2b4wg                      2/2     Running   0          68s

Add other control plane nodes

This is only applicable when running multiple control plane nodes (master nodes).

Update the /etc/hosts file, setting the ControlPlaneEndpoint to the first control plane node from where the bootstrap process was initiated:

192.168.200.10 k8s-endpoint.example.com   k8s-endpoint
#192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Then use the command printed after a successful initialization:

kubeadm join k8s-endpoint.example.com:6443 --token <token> \
  --discovery-token-ca-cert-hash <hash> \
  --control-plane --certificate-key <certkey>
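
If the token or certificate key from the original output has expired, fresh ones can be generated on the first control plane node using standard kubeadm commands (also referenced in the init output above):

# Print a new join command with a fresh token
kubeadm token create --print-join-command

# Re-upload control plane certificates and print a new --certificate-key
sudo kubeadm init phase upload-certs --upload-certs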

Check Control Plane Nodes List

From one of the master nodes with kubectl configured, check the list of nodes:

[root@k8s-master-03 ~]# kubectl get nodes
NAME                                  STATUS   ROLES                  AGE   VERSION
k8s-master-01.example.com             Ready    control-plane,master   11m   v1.29.4
k8s-master-02.example.com             Ready    control-plane,master   5m    v1.29.4
k8s-master-03.example.com             Ready    control-plane,master   32s   v1.29.4

You can now uncomment the other lines in the /etc/hosts file on each control plane node if you are not using a load balancer IP:

# Perform on all control plane nodes
[root@k8s-master-03 ~]# vim /etc/hosts
###  Kubernetes cluster ControlPlaneEndpoint Entries ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

5. Adding Worker Nodes to the cluster

Login to each of the worker machines using ssh:

ssh username@nodeip

Update the /etc/hosts file on each node with the master and worker node hostnames/IP addresses if there is no DNS in place (only needed for API endpoint name resolution):

### Also add Kubernetes cluster ControlPlaneEndpoint Entries for multiple control plane nodes(masters) ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Join your worker machines to the cluster using the commands given earlier:

kubeadm join k8s-endpoint.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash <hash>

Once done, run kubectl get nodes on the control plane to confirm the nodes joined the cluster:

[root@k8smaster ~]#  kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
k8smaster.mylab.io   Ready    control-plane   21m   v1.29.4
k8snode01.mylab.io   Ready    <none>          2s    v1.29.4
k8snode02.mylab.io   Ready    <none>          21s   v1.29.4
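
The worker nodes show <none> under ROLES. If you prefer a friendlier listing, you can label them; this is purely cosmetic:

kubectl label node k8snode01.mylab.io node-role.kubernetes.io/worker=worker
kubectl label node k8snode02.mylab.io node-role.kubernetes.io/worker=worker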

You can also create and print a new join command:

[root@k8smaster ~]# kubeadm token create --print-join-command

6. Deploy test application on cluster

We need to validate that our cluster is working by deploying an application. We'll work with the Guestbook application.

For a single node cluster, check out our guide on how to run pods on control plane nodes.

Create a temporary namespace:

$ kubectl create namespace temp
namespace/temp created

Deploy the Guestbook application in the temp namespace just created:

kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml

Query the list of Pods after a few minutes to verify that they are running:

kubectl get all -n temp

Run the following command to forward port 8080 on your local machine to port 80 on the service.

$ kubectl -n temp port-forward svc/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Now load the page http://localhost:8080 in your browser to view your guestbook.
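
The forward above binds to localhost on the machine where kubectl runs. If that is a remote node rather than your local workstation, either open an SSH tunnel or bind the forward to all interfaces with the standard --address flag and browse to the node's IP instead:

kubectl -n temp port-forward --address 0.0.0.0 svc/frontend 8080:80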


7. Installing other addons / third party tools

Here we share links to guides on the installation of other components that make using Kubernetes better.

#1. Metrics Server

Metrics Server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API, for use by the HPA and VPA.

See our separate guide on how to deploy it.
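
Once Metrics Server is running, node and pod resource usage can be queried directly with kubectl:

kubectl top nodes
kubectl top pods -A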

#2. Install Ingress Controller

An Ingress controller is used to provide secure public access to your K8s services. We have Nginx and Traefik installation guides on our website.

#3. Deploy Prometheus / Grafana Monitoring

Use our guide to install and configure Prometheus and Grafana.

#4. Deploy Kubernetes Dashboard (Optional)

The Kubernetes Dashboard can be useful when troubleshooting your containerized applications and for general administration of cluster resources.

#5. Persistent Storage (Optional)

We have many guides on persistent storage on our website.

#6. Deploy MetalLB on Kubernetes

For the installation of the MetalLB load balancer, refer to our dedicated article.


Conclusion

In this article we've installed and configured a three-node Kubernetes cluster with a single control plane and two worker nodes. The Kubernetes infrastructure can easily be scaled out for high availability. We hope this tutorial was of great help to you. Thank you, and see you again in other articles.
