Welcome to this guide on deploying a highly available Kubernetes cluster using k3sup. Before we dive in, let us look at what this tool is all about.
Kubernetes is a free and open-source container orchestration tool that has seen wide adoption over the past decade. It lets you run applications across a cluster of hosts while offering benefits such as autoscaling, automatic bin packing, self-healing, automated rollouts and rollbacks, service discovery, storage orchestration, and load balancing.
There are several methods you can use to deploy a Kubernetes cluster, including Minikube, Kubeadm, Kubernetes on AWS (Kube-AWS), Amazon EKS, and RKE2.
What is k3sup?
Members of the cloud-native computing world often find that Kubernetes falls short in resource-constrained environments, and the lightweight k3s distribution is a common answer. The simplest way to install it is k3sup (pronounced "ketchup"), a lightweight tool that can deploy a k3s-based Kubernetes cluster on a local or remote system. All you need for the deployment is SSH access and the k3sup binary.
k3sup, developed by Alex Ellis (the founder of OpenFaaS ® & inlets), can be used for:
- Bootstrapping Kubernetes with k3s onto any VM, either manually, during CI, or through cloud-init
- Getting from zero to kubectl with k3s on Raspberry Pi (RPi), VMs, AWS EC2, Packet bare-metal, DigitalOcean, Civo, Scaleway, etc.
- Fetching a working KUBECONFIG from an existing k3s cluster
- Building an HA, multi-master (server) cluster
- Joining nodes into an existing k3s cluster with k3sup join
In this guide, we will deploy Kubernetes in highly available mode, with an architecture similar to the one shown below.

Setup Pre-requisites
For this deployment, we will use 5 servers configured as shown:
| Hostname | IP Address | Role |
| --- | --- | --- |
| workstation | 192.168.205.2 | Workstation |
| master1 | 192.168.205.11 | etcd, server |
| master2 | 192.168.205.22 | etcd, server |
| master3 | 192.168.205.4 | etcd, server |
| worker1 | 192.168.205.12 | agent |
Before you proceed, ensure that:
- Passwordless SSH is configured for all the other servers. You need to copy SSH keys from your workstation to all the servers:
ssh-copy-id remote_username@Remote_IP
Remember to replace remote_username and Remote_IP appropriately.
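With several nodes, it can be easier to generate these commands in a loop. A minimal sketch, assuming the IPs from the table above and an ubuntu user on every node (both are assumptions; adjust to your environment). It only prints the commands, so you can review them and then pipe the output to sh:

```shell
# Print one ssh-copy-id command per node from the table above.
# The "ubuntu" user is an assumption; change it to your remote user.
for host in 192.168.205.11 192.168.205.22 192.168.205.4 192.168.205.12; do
  echo "ssh-copy-id ubuntu@${host}"
done
```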
- Passwordless sudo should be configured on all the servers as shown below:
sudo vim /etc/sudoers
Edit the user permissions as shown:
<username> ALL=(ALL) NOPASSWD:ALL
- Ensure curl is installed on all the nodes:
##On Debian/Ubuntu
sudo apt update && sudo apt install curl -y
##On CentOS/Rocky/Alma Linux
sudo yum install curl -y
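Before moving on, a quick sanity check on the workstation can save time. A small sketch that reports whether each client-side tool used in this guide is present on the PATH:

```shell
# Report whether each required client-side tool is on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}
for tool in curl ssh ssh-copy-id; do
  check_tool "$tool"
done
```

Anything reported as MISSING should be installed before continuing.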
Step 1 – Install k3sup on your System
To install k3sup, you need to obtain the binary from the releases page. On a Linux workstation, you can easily pull the binary using the command:
curl -sLS https://get.k3sup.dev | sh
Now install k3sup using the command:
sudo install k3sup /usr/local/bin/
Verify the installation with the command:
$ k3sup --help
Usage:
k3sup [flags]
k3sup [command]
Examples:
# Install k3s on a server with embedded etcd
k3sup install \
--cluster \
--host $SERVER_1 \
--user $SERVER_1_USER \
--k3s-channel stable
# Join a second server
k3sup join \
--server \
--host $SERVER_2 \
--user $SERVER_2_USER \
--server-host $SERVER_1 \
--server-user $SERVER_1_USER \
--k3s-channel stable
.......
Step 2 – Set up a multi-master (HA) Kubernetes Cluster
Check your server IP address:
$ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 7a:b1:46:97:20:e7 brd ff:ff:ff:ff:ff:ff
altname enp0s18
inet 192.168.205.11/24 brd 192.168.205.255 scope global noprefixroute ens18
valid_lft forever preferred_lft forever
inet6 fe80::b86c:4229:16ad:d06/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Once k3sup has been installed, you can set up a Kubernetes cluster using a command with the syntax below:
k3sup install --ip <ip-of-server> --user <user-name> --local-path ~/.kube/config --context <mycontext>
Several other options can be used along with the install command. These include:
- --cluster: starts this server in clustering mode using embedded etcd (embedded HA)
- --skip-install: if you already have k3s installed, run with this flag to just fetch the kubeconfig
- --ssh-key: specifies a specific path for the SSH key for remote login
- --ssh-port: specifies an alternative SSH port, i.e. 2222. The default port is 22
- --local: performs a local install without using SSH
- --local-path: the file where you want to save your cluster's kubeconfig. The default is ./kubeconfig, and by default this file will be overwritten
- --merge: merges the config into an existing file instead of overwriting (e.g. to add config to the default kubectl config, use --local-path ~/.kube/config --merge)
- --context: sets the name of the kubeconfig context. The default value is default
- --k3s-extra-args: passes extra arguments to the k3s installer, wrapped in quotes, e.g. --k3s-extra-args '--no-deploy traefik'
- --k3s-channel: sets a specific version of k3s based upon a channel, i.e. stable
- --k3s-version: sets a specific version of k3s, i.e. v1.21.1
- --datastore: passes an SQL connection string to the --datastore-endpoint flag of k3s. You must use the format required by k3s in the Rancher docs
More options can be obtained by running the command:
$ k3sup install --help
Install k3s on a server via SSH.
🐳 k3sup needs your support: https://github.com/sponsors/alexellis
Usage:
k3sup install [flags]
Examples:
# Simple installation of stable version, outputting a
# kubeconfig to the working directory
k3sup install --ip IP --user USER
# Merge kubeconfig into local file under custom context
k3sup install \
--host HOST \
--merge \
--local-path $HOME/.kube/kubeconfig \
--context k3s-prod-eu-1
.......
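Several of these flags are commonly combined. As a sketch, here is a first-server install that enables embedded etcd, pins the stable channel, and merges the kubeconfig into the default kubectl config under a custom context. The IP, user, and context name are placeholders; the command is built in a variable and printed so you can review it first, then run it with eval:

```shell
# Build the install command from variables so it is easy to review first.
SERVER_IP=192.168.205.11   # placeholder: first server's IP
SERVER_USER=ubuntu         # placeholder: remote user
cmd="k3sup install --ip $SERVER_IP --user $SERVER_USER --cluster \
--k3s-channel stable --merge --local-path $HOME/.kube/config --context k3sup-ha"
echo "$cmd"   # inspect, then execute with: eval "$cmd"
```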
For this guide, we will set up a highly available Kubernetes cluster with embedded etcd. This requires a quorum of servers, i.e. an odd number of nodes, say 3 masters.
1. Adding the first server
Export the variables of the first server. For example:
export SERVER1_IP=192.168.205.11
export USER1=ubuntu
For this example, I have used 192.168.205.11 as the remote IP and ubuntu as the remote username. Replace these variables with your own.
Now initialize the cluster with the first server using the command:
k3sup install \
--ip $SERVER1_IP \
--user $USER1 \
--cluster
Sample Output:

Install kubectl:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Export the config:
export KUBECONFIG=`pwd`/kubeconfig
Or copy it to the default configuration path:
mkdir -p ~/.kube
sudo cp `pwd`/kubeconfig ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
Verify if the server has been added:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,etcd,master 109s v1.24.4+k3s1
2. Adding the second and third servers
Now add a second server. Begin by exporting its variables:
export SERVER2_IP=192.168.205.22
export USER2=ubuntu
Now join the cluster with the command:
k3sup join \
--ip $SERVER2_IP \
--user $USER2 \
--server-user $USER1 \
--server-ip $SERVER1_IP \
--server
Join the third server:
export SERVER3_IP=192.168.205.4
export USER3=debian
Use the command below to join the cluster:
k3sup join \
--ip $SERVER3_IP \
--user $USER3 \
--server-user $USER1 \
--server-ip $SERVER1_IP \
--server
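The two join commands above follow the same pattern, so they can also be generated in a loop. A sketch using the ip:user pairs from this guide's table (review the printed commands before running them):

```shell
# Print the k3sup join command for each additional server.
SERVER1_IP=192.168.205.11
USER1=ubuntu
for pair in 192.168.205.22:ubuntu 192.168.205.4:debian; do
  ip=${pair%%:*}     # text before the first colon
  user=${pair#*:}    # text after the first colon
  echo "k3sup join --ip $ip --user $user --server-user $USER1 --server-ip $SERVER1_IP --server"
done
```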
Now verify that the control plane nodes have been added:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,etcd,master 7m32s v1.24.4+k3s1
master2 Ready control-plane,etcd,master 103s v1.24.4+k3s1
master3 Ready control-plane,etcd,master 19s v1.24.4+k3s1
Step 3 – Join Worker Nodes to the Cluster
To join agent nodes to your cluster, export the variables:
##variables of the agent
export AGENT1_IP=192.168.205.12
export AGENT1_USER=rocky9
##Variables for the Control Node
export SERVER1_IP=192.168.205.11
export USER1=ubuntu
For RHEL-based systems, set SELinux to permissive mode on the agent before running the join command below:
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Now add the agent as shown:
k3sup join \
--user $AGENT1_USER \
--ip $AGENT1_IP \
--server-ip $SERVER1_IP \
--server-user $USER1
Verify if the node has been added:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,etcd,master 32m v1.24.4+k3s1
master2 Ready control-plane,etcd,master 26m v1.24.4+k3s1
master3 Ready control-plane,etcd,master 25m v1.24.4+k3s1
worker1 Ready <none> 15s v1.24.4+k3s1
Step 4 – Deploy an Application on the Kubernetes Cluster
To test whether the created Kubernetes cluster is working as desired, we will deploy a sample Nginx application using the command:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
Verify if the pod is running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-544dc8b7c4-bztwj 1/1 Running 0 31s
nginx-deployment-544dc8b7c4-hzkw4 1/1 Running 0 31s
Now expose the deployment. For example with NodePort:
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
Obtain the port to which the service has been exposed.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 40m
nginx-deployment NodePort 10.43.244.44 <none> 80:30200/TCP 5s
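If you script the firewall step, the NodePort can be pulled out of such output rather than copied by hand. A sketch that parses a kubectl get svc line; the sample line from above is hard-coded here for illustration, and in practice you would pipe real kubectl output in:

```shell
# Extract the NodePort (the number after "80:") from a kubectl get svc line.
line="nginx-deployment   NodePort    10.43.244.44   <none>        80:30200/TCP   5s"
port=$(echo "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$port"   # prints 30200
```

With kubectl available, the same value can be read directly with a jsonpath output, e.g. kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'.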
If you have a firewall running, allow the port through it:
##For UFW
sudo ufw allow 30200
##For Firewalld
sudo firewall-cmd --add-port=30200/tcp --permanent
sudo firewall-cmd --reload
Verify that the service can be accessed on the port. Use a node IP and the port to access the service as shown.

Verdict
That marks the end of this guide on how to deploy a highly available Kubernetes cluster using k3sup. I hope it helps you work around the problems faced when installing Kubernetes in resource-constrained environments.