Install Kubernetes on Ubuntu 24.04 Using K3s

K3s packs a full Kubernetes cluster into a single binary under 100 MB. No Docker dependency, no heavy etcd deployment, no 20-step install. One curl command on the server node, one on each agent, and you have a production-capable cluster with Traefik ingress, CoreDNS, metrics-server, and local persistent storage all running.

This guide walks through building a multi-node k3s cluster on Ubuntu 24.04 LTS, from first install through deploying a real workload with ingress routing and persistent volumes. K3s is maintained by SUSE/Rancher and is CNCF certified, so everything you learn here applies to standard Kubernetes. For alternative lightweight distributions, see our guides on MicroK8s and k0s, or our k0s vs k3s vs MicroK8s comparison.

Verified working: March 2026 on Ubuntu 24.04.4 LTS (kernel 6.8.0-106), k3s v1.35.3+k3s1, Helm v3.20.1

What k3s Includes by Default

K3s bundles everything a Kubernetes cluster needs into one process. No separate installs required for these components:

Component                Version (v1.35.3+k3s1)   Purpose
containerd               2.2.2                    Container runtime (replaces Docker)
Flannel                  v0.28.2                  Pod networking (CNI)
CoreDNS                  1.14.2                   Cluster DNS
Traefik                  v3.6.10                  Ingress controller
ServiceLB                Built-in                 LoadBalancer for bare metal
local-path-provisioner   v0.0.35                  Persistent volume storage
metrics-server           v0.8.1                   Resource metrics for kubectl top
SQLite3                  3.51.2                   Default datastore (etcd also supported)

Docker is not needed. K3s uses containerd directly, which is lighter and what most production Kubernetes distributions use under the hood anyway.

Prerequisites

  • Two or more Ubuntu 24.04 LTS servers (also works on Ubuntu 22.04)
  • 2 CPU cores and 2 GB RAM minimum per node (4 GB recommended for the server node)
  • Network connectivity between all nodes on ports 6443/tcp and 8472/udp
  • Root or sudo access on all nodes
  • Tested on: Ubuntu 24.04.4 LTS with k3s v1.35.3+k3s1

Our test cluster uses two nodes:

Hostname     IP Address   Role                     Specs
k3s-master   10.0.1.50    Server (control plane)   2 vCPU, 4 GB RAM
k3s-worker   10.0.1.51    Agent (worker)           2 vCPU, 4 GB RAM

Prepare All Nodes

Run these steps on every node in the cluster (both server and agents).

Update the system packages:

sudo apt update && sudo apt -y upgrade

Reboot if a new kernel was installed:

sudo systemctl reboot

Set hostnames so nodes can identify each other in kubectl get nodes output. On the server node:

sudo hostnamectl set-hostname k3s-master

On the worker node:

sudo hostnamectl set-hostname k3s-worker

Add host entries on all nodes so they can resolve each other by name. Open /etc/hosts:

sudo vi /etc/hosts

Append these lines (adjust IPs to match your environment):

10.0.1.50 k3s-master
10.0.1.51 k3s-worker
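
If you prefer not to open an editor, the same entries can be appended non-interactively (adjust the IPs and hostnames to your environment):

```shell
# Append the cluster host entries in one shot
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.1.50 k3s-master
10.0.1.51 k3s-worker
EOF
```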

Install k3s on the Server Node

The server node runs the Kubernetes control plane (API server, scheduler, controller manager) alongside containerd and all the bundled components. One command handles everything:

curl -sfL https://get.k3s.io | sh -

The installer downloads the k3s binary, creates systemd service files, and starts the cluster:

[INFO]  Using v1.35.3+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Notice the installer also creates kubectl, crictl, and ctr as symlinks to the k3s binary. No separate kubectl install needed.
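
Since crictl and ctr are just symlinks to the k3s binary, you can talk to the bundled containerd directly once the service is up. A quick sketch (assumes k3s is installed and running):

```shell
# List running containers via the CRI interface (no Docker involved)
sudo crictl ps

# List the images containerd has pulled for the cluster
sudo crictl images
```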

Verify the service is running:

sudo systemctl status k3s

Check the installed version:

k3s --version

The output confirms the version:

k3s version v1.35.3+k3s1 (be38e884)
go version go1.25.7

After about 30 seconds, the server node should show as Ready:

sudo kubectl get nodes -o wide

Output:

NAME         STATUS   ROLES           AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k3s-master   Ready    control-plane   26s   v1.35.3+k3s1   10.0.1.50     <none>        Ubuntu 24.04.4 LTS   6.8.0-106-generic   containerd://2.2.2-k3s1

All the bundled components deploy automatically as pods in the kube-system namespace. Give it about a minute for everything to start:

sudo kubectl get pods -A

You should see CoreDNS, Traefik, metrics-server, and local-path-provisioner all running:

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-c4dbffb5f-c2fjq                   1/1     Running     0          77s
kube-system   helm-install-traefik-crd-hhpwj            0/1     Completed   0          73s
kube-system   helm-install-traefik-s47qc                0/1     Completed   2          73s
kube-system   local-path-provisioner-5c4dc5d66d-lmj2x   1/1     Running     0          77s
kube-system   metrics-server-786d997795-z4st4           1/1     Running     0          76s
kube-system   svclb-traefik-9b8809fe-ksstq              2/2     Running     0          45s
kube-system   traefik-59449f8f96-bf47r                  1/1     Running     0          45s

The helm-install-* pods show Completed because they are one-time jobs that deployed Traefik via the built-in Helm controller.

Configure Firewall Rules

If UFW is active on your nodes, open the required ports. The k3s server needs 6443/tcp for the Kubernetes API, and Flannel uses 8472/udp for VXLAN overlay traffic between nodes:

On the server node:

sudo ufw allow 6443/tcp
sudo ufw allow 8472/udp
sudo ufw allow 10250/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

On agent nodes:

sudo ufw allow 8472/udp
sudo ufw allow 10250/tcp

Port 6443 is the Kubernetes API. Port 8472 is Flannel VXLAN. Port 10250 is the kubelet API (used by metrics-server and kubectl logs). Ports 80 and 443 are for Traefik ingress traffic.
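
Before joining agents, it helps to confirm the server ports are actually reachable from each agent. A quick sketch using netcat (adjust the server IP to yours):

```shell
# From an agent node: test the Kubernetes API port
nc -zv 10.0.1.50 6443

# 8472 is UDP, so use -u; a successful UDP probe is not conclusive,
# but an immediate refusal points at a firewall rule
nc -zvu 10.0.1.50 8472
```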

Join Agent Nodes to the Cluster

Each agent node needs a join token from the server. Retrieve it:

sudo cat /var/lib/rancher/k3s/server/node-token

The token looks like this (yours will differ):

K1040501a0f659447343ffbc265631c722c4f25c33397478274bc568623b3d62f2d::server:fb95046e3efb12c841f6c03302bfd4cb

On each agent node, run the installer with K3S_URL pointing to the server’s IP and K3S_TOKEN set to the value above:

curl -sfL https://get.k3s.io | K3S_URL=https://10.0.1.50:6443 K3S_TOKEN="your-token-here" sh -

The agent installer output:

[INFO]  Using v1.35.3+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

Notice the agent gets its own uninstall script (k3s-agent-uninstall.sh) and a separate systemd service (k3s-agent.service).

Verify the agent is running on the worker:

sudo systemctl status k3s-agent

Back on the server node, both nodes should now appear:

sudo kubectl get nodes -o wide

Output with both nodes Ready:

NAME         STATUS   ROLES           AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k3s-master   Ready    control-plane   3m48s   v1.35.3+k3s1   10.0.1.50     <none>        Ubuntu 24.04.4 LTS   6.8.0-106-generic   containerd://2.2.2-k3s1
k3s-worker   Ready    <none>          2m34s   v1.35.3+k3s1   10.0.1.51     <none>        Ubuntu 24.04.4 LTS   6.8.0-106-generic   containerd://2.2.2-k3s1

Configure kubectl for Non-Root Users

By default, the kubeconfig file at /etc/rancher/k3s/k3s.yaml is only readable by root. To use kubectl as a regular user, copy it to the user’s home directory:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc

Now kubectl works without sudo:

kubectl get nodes

If you want all users on the server node to have kubectl access without copying configs, you can install k3s with relaxed kubeconfig permissions instead:

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644

To access the cluster from a remote workstation, copy /etc/rancher/k3s/k3s.yaml to your local machine and replace 127.0.0.1 with the server’s IP address:

scp [email protected]:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-config
sed -i 's/127.0.0.1/10.0.1.50/' ~/.kube/k3s-config
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes
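
If you manage several clusters from one workstation, merging the k3s kubeconfig into your main ~/.kube/config is tidier than juggling KUBECONFIG exports. A sketch (the context name k3s is an arbitrary choice):

```shell
# Rename the default context so it does not collide with other clusters
KUBECONFIG=~/.kube/k3s-config kubectl config rename-context default k3s

# Merge both files, flatten embedded credentials, and replace the main config
KUBECONFIG=~/.kube/config:~/.kube/k3s-config kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config
kubectl config use-context k3s
```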

Deploy a Test Application

With the cluster running, deploy an Nginx web server to verify pods schedule across nodes, services route traffic, and Traefik ingress works.

Create a deployment with two replicas, a ClusterIP service, and a Traefik ingress rule. Save this as nginx-demo.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
spec:
  ingressClassName: traefik
  rules:
  - host: nginx.example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-demo
            port:
              number: 80

Apply it:

kubectl apply -f nginx-demo.yaml

The deployment, service, and ingress are created:

deployment.apps/nginx-demo created
service/nginx-demo created
ingress.networking.k8s.io/nginx-demo created

Check the pods. K3s schedules them across both nodes:

kubectl get pods -l app=nginx-demo -o wide

One pod landed on the worker, one on the master:

NAME                          READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
nginx-demo-5c5cf68865-28ppk   1/1     Running   0          22s   10.42.1.3   k3s-worker   <none>           <none>
nginx-demo-5c5cf68865-k779c   1/1     Running   0          22s   10.42.0.9   k3s-master   <none>           <none>

Verify the ingress is active:

kubectl get ingress nginx-demo

Traefik picked up the ingress rule and assigned both node IPs:

NAME         CLASS     HOSTS                 ADDRESS                   PORTS   AGE
nginx-demo   traefik   nginx.example.local   10.0.1.50,10.0.1.51      80      24s

Test with curl using the Host header:

curl -s -o /dev/null -w "%{http_code}\n" -H "Host: nginx.example.local" http://127.0.0.1

A 200 response confirms Traefik is routing traffic to the Nginx pods through the ingress rule.
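
To test by hostname instead of a Host header, map the demo name to a node IP in /etc/hosts on the machine running curl (the IP here is an example):

```shell
# Point the demo hostname at the server node
echo "10.0.1.50 nginx.example.local" | sudo tee -a /etc/hosts

# Traefik matches the ingress rule by the request's Host
curl -i http://nginx.example.local/
```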

Clean up when done testing:

kubectl delete -f nginx-demo.yaml

Install Helm

Helm is the standard package manager for Kubernetes. Most third-party applications (Prometheus, Grafana, cert-manager) are distributed as Helm charts. Install it on the server node:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Verify the installation:

helm version --short

Output:

v3.20.1+ga2369ca

Helm automatically picks up the kubeconfig from KUBECONFIG or ~/.kube/config. No additional configuration needed. For a deeper dive into Helm with k3s, see our Nginx Ingress Controller with Helm guide.
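
As a quick smoke test that Helm can reach both a chart repository and the cluster, add a public repo and list the existing releases. The releases deployed by k3s's built-in Helm controller should show up:

```shell
# Add a well-known public chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# List releases in all namespaces; expect traefik and traefik-crd in kube-system
helm list -A
```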

Persistent Storage with Local-Path

K3s ships with the Rancher local-path-provisioner, which dynamically creates PersistentVolumes backed by host filesystem directories. It is set as the default StorageClass:

kubectl get storageclass

The (default) marker means any PVC that does not specify a StorageClass will use local-path automatically:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m32s
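
The default marker is just an annotation, so if you later add another storage provider and want it to take over, you can flip it with kubectl patch. A sketch (the class name longhorn is an example):

```shell
# Remove the default flag from local-path
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Mark the new class as default
kubectl patch storageclass longhorn \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```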

Test it by creating a PVC and a pod that writes data to the volume. Save as pvc-test.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello-from-pvc > /data/test.txt && cat /data/test.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc

Apply and wait for the pod to start:

kubectl apply -f pvc-test.yaml

After about 20 seconds, the PVC should be Bound and the pod Running:

kubectl get pvc test-pvc

Output:

NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-2e3272f9-38e4-453e-a13a-0b0c53a04b53   1Gi        RWO            local-path     28s

Check the pod logs to confirm the data was written:

kubectl logs pvc-test

The output confirms the volume works:

hello-from-pvc

Local-path stores data on the node at /opt/local-path-provisioner/ by default. This is fine for development and single-node clusters. For production with multiple nodes, consider a distributed storage solution like Ceph RBD or Longhorn.
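
To see exactly where a claim's data lives on disk, you can read the path from the bound PersistentVolume (this assumes the PV uses a hostPath source, as this provisioner version does; adjust the PVC name to yours):

```shell
# Resolve the PV behind the claim, then print its on-node directory
PV=$(kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.hostPath.path}{"\n"}'
```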

Clean up:

kubectl delete -f pvc-test.yaml

Common k3s Configuration Options

The default install works for most cases, but k3s supports many flags to customize the cluster. Pass them after sh -s - during installation, or set them in the k3s configuration file at /etc/rancher/k3s/config.yaml and restart the service. (The /etc/systemd/system/k3s.service.env file holds environment variables such as K3S_TOKEN, not CLI flags.)
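
Flags can also live in a YAML config file that k3s reads at startup, which is easier to keep in version control than editing systemd units. A sketch with example values:

```shell
# Persist server flags in k3s's config file, then restart to apply
sudo mkdir -p /etc/rancher/k3s
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
node-ip: 10.0.1.50
disable:
  - traefik
write-kubeconfig-mode: "0644"
EOF
sudo systemctl restart k3s
```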

Disable bundled components

If you want to use your own ingress controller (Nginx, HAProxy) instead of Traefik:

curl -sfL https://get.k3s.io | sh -s - --disable traefik

Disable multiple components at once:

curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable servicelb --disable metrics-server

Pin a specific k3s version

Production environments should pin the version to avoid unexpected upgrades:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.35.3+k3s1" sh -

Use etcd instead of SQLite

For multi-server (HA) setups, use embedded etcd:

curl -sfL https://get.k3s.io | sh -s - --cluster-init

Additional server nodes join with:

curl -sfL https://get.k3s.io | K3S_URL=https://10.0.1.50:6443 K3S_TOKEN="your-token" sh -s - server

For a full high-availability setup with multiple server nodes, see our HA Kubernetes with k3sup guide.

Set the node IP explicitly

On hosts with multiple network interfaces, tell k3s which IP to advertise:

curl -sfL https://get.k3s.io | sh -s - --node-ip 10.0.1.50 --advertise-address 10.0.1.50

Uninstall k3s

K3s provides dedicated uninstall scripts. On agent nodes:

sudo /usr/local/bin/k3s-agent-uninstall.sh

On the server node:

sudo /usr/local/bin/k3s-uninstall.sh

These scripts stop the services, remove binaries, clean up iptables rules, and delete the /var/lib/rancher/k3s data directory. No manual cleanup needed.

Troubleshooting

Agent node stuck in NotReady state

Check the agent logs for connection errors:

sudo journalctl -u k3s-agent -f

The most common cause is a firewall blocking port 6443 or 8472 between the nodes. Verify connectivity:

curl -k https://10.0.1.50:6443/ping

If this times out, open the port in UFW or check any cloud security groups.

Error: “Unable to read /etc/rancher/k3s/k3s.yaml, permission denied”

This happens when running kubectl as a non-root user. The default kubeconfig is root-only (mode 600). Either copy it to your user’s ~/.kube/config as shown above, or reinstall with --write-kubeconfig-mode 644.

Pods stuck in ContainerCreating

Usually means the container image is still being pulled. Check events:

kubectl describe pod <pod-name>

Look at the Events section at the bottom. If you see ErrImagePull, the node cannot reach the container registry. Verify DNS resolution and internet access from the node.
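
A quick way to check DNS and registry reachability from inside the cluster is a throwaway busybox pod:

```shell
# One-off pod: resolve the in-cluster API service and an external registry host
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  sh -c "nslookup kubernetes.default.svc.cluster.local && nslookup ghcr.io"
```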

For a more comprehensive K3s walkthrough covering both Ubuntu and Rocky Linux, see our updated K3s Kubernetes quickstart. If you need a heavier distribution for production, RKE2 with HA is the next step up, and Rancher Desktop gives you a local K3s environment on macOS or Windows without any VMs.
