K3s Kubernetes Quickstart on Ubuntu 24.04 and Rocky Linux 10

K3s strips Kubernetes down to a single binary under 100MB. No etcd cluster, no cloud-controller bloat, no separate container runtime install. You get a fully conformant Kubernetes cluster from one curl command, which makes it ideal for edge deployments, home labs, CI environments, and small production workloads where running full-fat K8s would be overkill.

This guide walks through installing K3s on both Ubuntu 24.04 LTS and Rocky Linux 10, deploying a test workload, and understanding what ships out of the box. If you need a multi-node production cluster instead, check out deploying RKE2 on Rocky Linux and AlmaLinux. K3s is the right choice when you want Kubernetes without the operational overhead.

Tested March 2026 | K3s v1.35.3+k3s1 on Ubuntu 24.04.4 LTS (kernel 6.8) and Rocky Linux 10.1 (kernel 6.12), SELinux enforcing

Prerequisites

K3s is lightweight, but it still needs a reasonable baseline to run well.

  • A server or VM with at least 2 CPU cores and 2GB RAM (4GB recommended)
  • Ubuntu 24.04 LTS or Rocky Linux 10 (AlmaLinux 10 and RHEL 10 also work)
  • Root or sudo access
  • Internet connectivity to pull the install script and container images
  • Tested on: K3s v1.35.3+k3s1, Ubuntu 24.04.4 LTS, Rocky Linux 10.1

Install K3s

K3s uses a single install script that detects your OS, downloads the correct binary, sets up systemd services, and configures networking. The same command works on both Ubuntu and Rocky Linux.

curl -sfL https://get.k3s.io | sh -

The installer finishes in about 30 seconds on a decent connection. On Ubuntu 24.04, the output is clean:

[INFO]  Finding release for channel stable
[INFO]  Using v1.35.3+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.35.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s

Rocky Linux 10 with SELinux Enforcing

On Rocky Linux 10, the installer automatically detects SELinux and installs the required policies. You do not need to disable SELinux or switch to permissive mode. K3s ships with proper SELinux support.
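You can confirm both halves of that claim after the install finishes. The installer pulls in a policy package named k3s-selinux on RPM-based distributions:

```shell
# Confirm SELinux is still enforcing after the install
getenforce

# Confirm the installer pulled in the K3s SELinux policy package
rpm -q k3s-selinux
```

If getenforce prints Enforcing and the rpm query resolves to a package version, SELinux is active and K3s is running under its shipped policy.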

You may see two non-critical warnings during the install on Rocky:

iptables-save/iptables-restore tools not found
br_netfilter module failed to load (ExecStartPre exit=1/FAILURE)

Neither of these blocks the installation. K3s falls back to nftables on Rocky 10 (the default firewall backend), and the br_netfilter module loads on subsequent restarts. If you want to suppress the warning, load it manually:

sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf

Open the required ports through firewalld on Rocky Linux:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload

Port 6443 is the Kubernetes API server. Ports 80 and 443 are for the bundled Traefik ingress controller. Port 10250 is the kubelet API. Ubuntu’s UFW is not enabled by default, so no firewall changes are needed unless you enabled it manually.
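After the reload, it is worth confirming the rules actually took effect in the active zone:

```shell
# List the ports currently permitted in the active firewalld zone
sudo firewall-cmd --list-ports
```

The four ports you added (6443/tcp, 80/tcp, 443/tcp, 10250/tcp) should all appear in the output.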

Verify the Installation

Confirm the installed version:

k3s --version

You should see the version string with the Go compiler version:

k3s version v1.35.3+k3s1 (be38e884)
go version go1.25.7

Check that the node registered successfully:

sudo kubectl get nodes

The node should show Ready status within a minute of installation:

NAME         STATUS   ROLES                  AGE   VERSION
k3s-ubuntu   Ready    control-plane,master   2m    v1.35.3+k3s1

On Rocky Linux, the output is identical except for the hostname:

NAME        STATUS   ROLES                  AGE   VERSION
k3s-rocky   Ready    control-plane,master   2m    v1.35.3+k3s1

Next, verify the K3s service is running under systemd:

sudo systemctl status k3s

The output should show active (running). Finally, list all system pods to make sure the cluster components started:

sudo kubectl get pods -A

All pods should be in Running or Completed state:

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-ccb96694c-xxxxx                    1/1     Running     0          3m
kube-system   local-path-provisioner-5d56847996-xxxxx    1/1     Running     0          3m
kube-system   metrics-server-587b667b55-xxxxx            1/1     Running     0          3m
kube-system   helm-install-traefik-crd-xxxxx             0/1     Completed   0          3m
kube-system   helm-install-traefik-xxxxx                 0/1     Completed   0          3m
kube-system   svclb-traefik-xxxxx-xxxxx                  2/2     Running     0          2m
kube-system   traefik-5b8c8f5f4b-xxxxx                   1/1     Running     0          2m

Seven system pods, five running services plus two completed one-shot Helm install jobs, from a single install command. That is the K3s pitch in action.

Configure kubectl Access

K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, which is owned by root. The kubectl binary at /usr/local/bin/kubectl is a symlink to the K3s binary, so it works immediately with sudo.

To use kubectl without sudo, copy the kubeconfig to your user’s home directory:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config

Make the KUBECONFIG variable persistent by adding it to your shell profile:

echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc

Verify it works without sudo:

kubectl get nodes

If you plan to access the cluster remotely (from your workstation, for example), edit ~/.kube/config and replace 127.0.0.1 in the server: line with the node’s IP address. For a deeper kubectl reference, see the kubectl cheat sheet.
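One way to make that substitution in place, using the example node IP from the curl test later in this guide (substitute your node's actual address):

```shell
# Point the kubeconfig at the node's reachable IP instead of loopback
# (10.0.1.50 is an example address; replace it with your node's IP)
sed -i 's/127.0.0.1/10.0.1.50/' ~/.kube/config
```

The default K3s serving certificate includes the node's IP addresses in its SANs, so TLS verification should still pass after the change.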

Deploy a Test Application

A cluster that shows “Ready” is good. A cluster that actually runs a workload is better. Create an nginx deployment with two replicas:

kubectl create deployment nginx --image=nginx:latest --replicas=2

Watch the pods come up:

kubectl get pods -l app=nginx

Both replicas should reach Running state within 30 seconds:

NAME                     READY   STATUS    RESTARTS   AGE
nginx-676b6c5db-4kx7p    1/1     Running   0          25s
nginx-676b6c5db-9rm2j    1/1     Running   0          25s

Expose the deployment as a NodePort service so you can reach it from outside the cluster:

kubectl expose deployment nginx --type=NodePort --port=80

Find the assigned NodePort:

kubectl get svc nginx

The output shows the mapped port (in the 30000-32767 range):

NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.43.45.120   <none>        80:31080/TCP   5s

Test it with curl using your node’s IP and the NodePort:

curl http://10.0.1.50:31080

You should see the default nginx welcome page HTML:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working.</p>

The cluster is fully functional. Clean up the test deployment when you are done:

kubectl delete deployment nginx
kubectl delete svc nginx

What K3s Bundles by Default

One of K3s’s strengths is shipping with sane defaults. Here is what you get out of the box, with no additional configuration:

  • Containerd (v2.2.2-k3s1) as the container runtime, replacing Docker
  • Flannel for pod-to-pod networking using VXLAN overlays
  • CoreDNS for in-cluster DNS resolution
  • Traefik as the default ingress controller, listening on ports 80 and 443 via a LoadBalancer service
  • local-path-provisioner for dynamic PersistentVolume provisioning (the default StorageClass)
  • Metrics Server for kubectl top resource usage queries
  • ServiceLB (formerly Klipper) to handle LoadBalancer services on bare metal
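These bundled components are deployed from plain manifests that K3s drops on disk at startup. You can inspect them, and drop your own manifests into the same directory to have K3s auto-apply them:

```shell
# K3s auto-applies every manifest in this directory on server start
ls /var/lib/rancher/k3s/server/manifests/
```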

You can verify the default storage class:

kubectl get storageclass

The output confirms local-path is the default:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m

If you prefer a different CNI (Calico, Cilium), a different ingress controller, or want to disable Traefik entirely, pass flags to the install script. The official K3s documentation covers all available options. For example, to install without Traefik:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
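The same flags can also live in a config file, which survives re-runs of the install script more cleanly than environment variables. A sketch that disables Traefik via the server config file K3s reads on every start:

```shell
# /etc/rancher/k3s/config.yaml is read by the k3s server at startup;
# each entry mirrors a CLI flag (here, --disable traefik)
sudo mkdir -p /etc/rancher/k3s
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
disable:
  - traefik
EOF
sudo systemctl restart k3s
```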

Ubuntu 24.04 vs Rocky Linux 10

K3s works identically on both distributions, but the underlying OS differences are worth knowing when you troubleshoot or tune the system.

Item                     | Ubuntu 24.04 LTS             | Rocky Linux 10.1
-------------------------|------------------------------|------------------------------------------------------
Kernel                   | 6.8.0-101-generic            | 6.12.0-124.8.1.el10_1
Package manager          | apt                          | dnf
Mandatory access control | AppArmor (no changes needed) | SELinux enforcing (K3s installs policies automatically)
Firewall                 | UFW (disabled by default)    | firewalld (active, ports must be opened)
Firewall backend         | iptables/nftables            | nftables
K3s binary path          | /usr/local/bin/k3s           | /usr/local/bin/k3s
Kubeconfig               | /etc/rancher/k3s/k3s.yaml    | /etc/rancher/k3s/k3s.yaml
Install warnings         | None                         | iptables-save not found, br_netfilter load failure
K3s memory usage         | ~966Mi                       | ~988Mi

The main operational difference is firewalld on Rocky. If pods cannot reach services or external traffic does not arrive at NodePorts, check that the required ports are open. On Ubuntu, the default install “just works” because UFW is inactive.
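When debugging connectivity on Rocky, a quick way to rule the firewall in or out is to dump the full active ruleset:

```shell
# Show the active zone's complete ruleset: ports, services, rich rules
sudo firewall-cmd --list-all
```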

Uninstall K3s

K3s ships a cleanup script that removes the binary, systemd units, CNI configs, and iptables rules:

/usr/local/bin/k3s-uninstall.sh

This completely removes K3s and all cluster data. On agent nodes (if you added any), use /usr/local/bin/k3s-agent-uninstall.sh instead.

Hardening for Production

The single-node setup above is perfect for development and testing. Before running K3s in production, address these areas:

Back up the datastore. K3s uses an embedded SQLite database by default (stored at /var/lib/rancher/k3s/server/db/). Note that k3s etcd-snapshot save only applies when the server runs embedded etcd (started with --cluster-init); with the default SQLite datastore, back up the db directory and server token directly, or switch to an external PostgreSQL/MySQL datastore.
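A minimal cold-backup sketch for the default SQLite datastore. It assumes a /backup directory exists; stopping the service briefly avoids copying a database mid-write:

```shell
# Stop K3s briefly so the SQLite files are quiescent
sudo systemctl stop k3s

# The db directory holds cluster state; the server token is required
# to restore it on a rebuilt node
sudo cp -a /var/lib/rancher/k3s/server/db /backup/k3s-db
sudo cp /var/lib/rancher/k3s/server/token /backup/k3s-token

sudo systemctl start k3s
```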

Restrict API access. The Kubernetes API listens on port 6443 on all interfaces. Use firewall rules to limit which IPs can reach it. On Rocky Linux:

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.1.0/24" port protocol="tcp" port="6443" accept'
sudo firewall-cmd --permanent --remove-port=6443/tcp
sudo firewall-cmd --reload

Enable secrets encryption. By default, Kubernetes secrets are stored as base64 in the datastore (not encrypted). Enable encryption at rest with --secrets-encryption during install.
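Enabling it at install time and verifying the result looks like this; the secrets-encrypt subcommand is part of the k3s binary:

```shell
# Install with encryption-at-rest for Kubernetes secrets enabled
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--secrets-encryption" sh -

# Confirm encryption is enabled and a key is active
sudo k3s secrets-encrypt status
```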

Replace local-path storage. The default StorageClass writes to the node’s local disk. For anything stateful in production, use Longhorn, Rook-Ceph, or NFS-backed persistent volumes.

Set resource limits. Without resource limits, a single misbehaving pod can starve the entire node. Define requests and limits in every deployment manifest, and consider enabling a LimitRange on the default namespace.
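As a sketch, limits can be patched onto the nginx test deployment from earlier, and a namespace-wide default applied with a LimitRange. The values here are illustrative, not tuned:

```shell
# Set per-container requests and limits on an existing deployment
kubectl set resources deployment nginx \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=256Mi

# Give every container in the default namespace that declares no
# resources of its own a sane default request and limit
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 256Mi
EOF
```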

K3s keeps the Kubernetes learning curve short. The binary is small, the install is fast, and the cluster behaves exactly like any other conformant Kubernetes distribution. What changes in production is everything around it: networking, storage, access control, and monitoring. Start here, scale from here.
