Do you want to run a Kubernetes cluster on old hardware with minimal resources? K3s is a distribution of Kubernetes designed to be lightweight and stable. With k3s you can deploy your containerized applications in a resource-constrained compute environment. Don't mistake lightweight for immature: k3s is a fully certified Kubernetes distribution fit for use in production environments.
Another strength of k3s is how few dependencies it needs to run. The installation process is so streamlined that it can be completed in about 30 seconds. The default datastore in k3s is SQLite, but it also supports embedded etcd or external relational databases. The distribution can be installed on Intel (x86_64) or ARM CPUs. Common uses of k3s include personal learning labs, edge computing, IoT, and serving as an integral part of CI/CD pipelines.
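As an aside, if you would rather start with embedded etcd than the default SQLite datastore, the install script accepts extra server arguments through its documented INSTALL_K3S_EXEC variable. The command below is only a sketch of that option; the rest of this guide sticks with the defaults.
# example only: initialize the server with embedded etcd instead of SQLite
curl -sfL https://get.k3s.io | sudo INSTALL_K3S_EXEC="server --cluster-init" bash -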
In this post we install k3s and add a node to the cluster. There are few prerequisites beyond internet access and a user that can install packages on the system. In this tutorial you will:
- Update your Ubuntu system
- Install k3s on Ubuntu 24.04 Linux
- Manage your k3s Kubernetes cluster using kubectl
- Add extra nodes to your k3s cluster
- Install applications on k3s
- Destroy k3s cluster
Step 1: Update System
Begin by updating the OS package list.
sudo apt update
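Optionally, you can also apply pending package upgrades before installing k3s. This is not required for the installation, just good hygiene on a fresh server.
sudo apt -y upgrade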
When done, proceed to the next step.
Step 2: Install k3s Kubernetes on Ubuntu 24.04
Log in to the server and run the command below to install k3s on Ubuntu 24.04. The installation is automated and requires nothing from you other than executing the command in your terminal.
curl -sfL https://get.k3s.io | sudo bash -
In my installation the process completed in less than one minute; even with a slower internet connection it won't take long.
[INFO] Finding release for channel stable
[INFO] Using v1.29.3+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.3+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.3+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Check the k3s version.
$ k3s --version
k3s version v1.29.3+k3s1 (8aecc26b)
go version go1.21.8
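The script tracks the stable channel by default. If you need to reproduce a specific release, the install script documents an INSTALL_K3S_VERSION variable; for example, to pin the version shown above:
curl -sfL https://get.k3s.io | sudo INSTALL_K3S_VERSION=v1.29.3+k3s1 bash -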
Confirm that the service is running. It should have been started automatically by the installer.
$ systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; preset: enabled)
Active: active (running) since Tue 2024-04-23 16:30:51 UTC; 2min 54s ago
Docs: https://k3s.io
Process: 1451 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null (code=exited, status=0/SUCCESS)
Process: 1453 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 1457 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 1459 (k3s-server)
Tasks: 86
Memory: 1.3G (peak: 1.3G)
CPU: 29.611s
CGroup: /system.slice/k3s.service
├─1459 "/usr/local/bin/k3s server"
├─1479 "containerd "
├─2076 /var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66/bin/containerd-shim-runc-v2 -namespace k8s.io -id c287e86311c79438f6213ec3fd19ac0a6daf>
├─2144 /var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66/bin/containerd-shim-runc-v2 -namespace k8s.io -id 783a53bc98fe6c22e24c6b3a8d663567ed3c>
├─2170 /var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66/bin/containerd-shim-runc-v2 -namespace k8s.io -id 695859056315e86e8ab5313903a7bec64a28>
├─2978 /var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66/bin/containerd-shim-runc-v2 -namespace k8s.io -id e52ea19529373b00db4f1e27ba78d54e9100>
└─2993 /var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66/bin/containerd-shim-runc-v2 -namespace k8s.io -id 808ca908fd0f55d0f3e78152da7d8c5af1bc>
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.394229 1459 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingressroutes.traefik.containo.us"
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.394246 1459 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="middlewares.traefik.containo.us"
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.394434 1459 shared_informer.go:311] Waiting for caches to sync for resource quota
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.595295 1459 shared_informer.go:318] Caches are synced for resource quota
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.835681 1459 shared_informer.go:311] Waiting for caches to sync for garbage collector
Apr 23 16:32:35 ubuntu-2404-server k3s[1459]: I0423 16:32:35.835715 1459 shared_informer.go:318] Caches are synced for garbage collector
Apr 23 16:32:48 ubuntu-2404-server k3s[1459]: I0423 16:32:48.977864 1459 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/traefik-f4564c4f4" duration="43.415µs"
Apr 23 16:32:50 ubuntu-2404-server k3s[1459]: I0423 16:32:50.980815 1459 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/traefik-f4564c4f4-zwqpw" podStartSLO>
Apr 23 16:32:51 ubuntu-2404-server k3s[1459]: I0423 16:32:51.013325 1459 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/traefik-f4564c4f4" duration="33.200193ms"
Apr 23 16:32:51 ubuntu-2404-server k3s[1459]: I0423 16:32:51.015859 1459 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/traefik-f4564c4f4" duration="129.992µs"
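If the service does not come up cleanly, or you want to follow it while it starts, the full logs are available in the systemd journal:
sudo journalctl -u k3s.service -f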
Step 3: Manage your k3s Kubernetes cluster using kubectl
Create a local directory to store the kubeconfig file.
mkdir ~/.kube
Copy the generated configuration file into ~/.kube/config.
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
echo 'export KUBECONFIG=~/.kube/config' | tee -a ~/.bashrc
source ~/.bashrc
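Note that the installer also symlinks kubectl to the k3s binary (see the install output above), so you can run cluster commands without copying the kubeconfig at all, for example:
sudo k3s kubectl get nodes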
Test the cluster connection using the kubectl command.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-2404-server Ready control-plane,master 8m6s v1.29.3+k3s1
The first node acts as both the control plane and a worker node.
Step 4: Add extra nodes to your k3s Kubernetes cluster
Print the value of the K3S_TOKEN that is stored on your server node.
$ sudo cat /var/lib/rancher/k3s/server/node-token
K10181fd9008092a622884dac5e7da8cd51c5e1e21518c575c357e4cb435e1e9511::server:8fa1e7abe9ba4e2f90a81c49cb22e713
Get your server IP address.
$ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether bc:24:11:29:d6:a2 brd ff:ff:ff:ff:ff:ff
altname enp0s18
inet 192.168.1.201/24 brd 192.168.1.255 scope global ens18
valid_lft forever preferred_lft forever
inet6 fe80::be24:11ff:fe29:d6a2/64 scope link
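If you only need the host addresses and not the full interface details, a shorter option is:
hostname -I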
To add an additional worker node to your k3s cluster, run the installer script on the new node and pass the K3S_URL and K3S_TOKEN environment variables.
curl -sfL https://get.k3s.io | sudo K3S_URL=https://<ServerIP>:6443 K3S_TOKEN=<K3S_TOKEN> sh -
Here is an example of joining an agent node.
ServerIP="192.168.1.201"
TOKEN="K10181fd9008092a622884dac5e7da8cd51c5e1e21518c575c357e4cb435e1e9511::server:8fa1e7abe9ba4e2f90a81c49cb22e713"
curl -sfL https://get.k3s.io | sudo K3S_URL=https://$ServerIP:6443 K3S_TOKEN=$TOKEN sh -
Output from the commands executed to add the agent:
[INFO] Finding release for channel stable
[INFO] Using v1.29.3+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.3+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.3+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
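Before heading back to the control plane, you can confirm on the agent itself that the k3s-agent service is active:
sudo systemctl status k3s-agent.service --no-pager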
Confirm the node count from the control plane (master) node.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-2404-desktop Ready <none> 3m12s v1.29.3+k3s1
ubuntu-2404-server Ready control-plane,master 20m v1.29.3+k3s1
We can confirm that two nodes are now in the cluster. Additional nodes can be added in the same manner.
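The new agent shows ROLES as <none>. If you prefer a descriptive label, you can optionally add one yourself; the node name below matches the output above:
kubectl label node ubuntu-2404-desktop node-role.kubernetes.io/worker=worker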
Step 5: Install applications on k3s
We can deploy an Nginx web application in the cluster to test that it works.
Create a deployment file called deploy-nginx.yaml
vim deploy-nginx.yaml
Copy and paste the contents below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Deploy the application to Kubernetes using the kubectl apply command.
$ kubectl apply -f deploy-nginx.yaml
deployment.apps/nginx-deployment created
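Before exposing the deployment, you can check that both replicas are running; the selector matches the app: nginx label from the manifest:
kubectl get pods -l app=nginx -o wide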
Create a Service of type NodePort to expose the deployment.
$ vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Create the service.
$ kubectl apply -f nginx-service.yaml
service/nginx-service created
Get the service and the NodePort it is exposed on.
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 26m
nginx-service NodePort 10.43.62.3 <none> 80:32464/TCP 41s
Our NodePort is 32464, so we can access the Nginx service at http://<NodeIP>:32464
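From any machine that can reach the node, a quick curl confirms that Nginx responds; here using the server IP and NodePort from the outputs above:
curl -I http://192.168.1.201:32464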

Since this was just for testing, we can clean up now.
$ kubectl delete -f nginx-service.yaml
service "nginx-service" deleted
$ kubectl delete -f deploy-nginx.yaml
deployment.apps "nginx-deployment" deleted
Step 6: Destroy k3s cluster
To remove your k3s cluster, start by stopping the services.
# Control plane
sudo systemctl stop k3s
# Agent node
sudo systemctl stop k3s-agent
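If you only want to stop all k3s processes and containers without uninstalling anything, the installer also ships a killall script (created during installation, as seen in the output above):
# stops k3s and all its containers, but keeps the installation in place
sudo /usr/local/bin/k3s-killall.sh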
Then run the uninstall script to remove the components.
# Control plane
sudo k3s-uninstall.sh
# Agent node
sudo k3s-agent-uninstall.sh
You now have a clean OS state and the machine can be used for any other purpose. In this article we've seen how easy it is to get a running Kubernetes cluster in seconds using k3s. If you need an environment for demos, learning, or running containerized workloads without a deep understanding of Kubernetes internals, then this is the right solution for you.
Reference: k3s documentation