Running Kubernetes in production does not have to involve a pile of system dependencies, kernel modules, and package conflicts. k0s is a single-binary Kubernetes distribution that bundles everything needed to run a conformant cluster. No container runtime to install separately, no kubelet packages, no OS-level prerequisites beyond a Linux kernel. You download one binary, and you have a full Kubernetes control plane or worker node ready to go.
In this guide, you will deploy a multi-node k0s Kubernetes cluster on Linux servers using k0sctl, the official cluster lifecycle management tool. We will walk through the full process: installation, configuration, deployment, verification, CNI setup, scaling, upgrades, and backup/restore.
What Is k0s?
k0s is an open-source, CNCF-certified Kubernetes distribution built by Mirantis. The “zero” in k0s stands for zero friction, zero dependencies, and zero cost. Here is what makes it different from other lightweight distributions:
- Single binary – The entire control plane (API server, controller manager, scheduler, etcd) ships as one static binary. Workers get their own single binary that includes containerd and kubelet.
- No OS dependencies – k0s does not require systemd, though it works with it. It does not need pre-installed container runtimes or specific Linux distributions. Any modern Linux kernel (3.10 or newer) works.
- Conformant Kubernetes – k0s passes the full CNCF conformance test suite. You get upstream Kubernetes APIs with no modifications or vendor lock-in.
- Embedded etcd – The default setup uses an embedded etcd that requires zero external database configuration. You can also switch to an external etcd or use kine for SQLite/MySQL/PostgreSQL-backed storage.
- Flexible networking – Ships with kube-router as the default CNI, but you can bring your own (Calico, Cilium, Flannel, or any other CNI plugin).
- Automated lifecycle management – The companion tool k0sctl handles cluster bootstrapping, scaling, and rolling upgrades from your workstation over SSH.
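As a taste of the storage flexibility mentioned above, pointing the control plane at a SQLite database through kine takes only a short fragment in the k0s config. This is a sketch; the dataSource path shown is an assumption, so adjust it to your layout:

```yaml
spec:
  storage:
    type: kine
    kine:
      # SQLite file under the controller's data directory (hypothetical path)
      dataSource: sqlite:///var/lib/k0s/db/state.db?mode=rwc&_journal=WAL
```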
k0s vs kubeadm vs K3s vs MicroK8s
Choosing a Kubernetes distribution depends on your use case, environment, and operational preferences. Here is how k0s compares to the other popular options.
| Feature | k0s | kubeadm | K3s | MicroK8s |
|---|---|---|---|---|
| Packaging | Single binary | Multiple packages (kubelet, kubeadm, kubectl) | Single binary | Snap package |
| Container Runtime | Bundled containerd | External (containerd/CRI-O) | Bundled containerd | Bundled containerd |
| etcd | Embedded or external | External or stacked | SQLite by default (etcd optional) | Dqlite |
| Default CNI | kube-router | None (bring your own) | Flannel | Calico |
| OS Support | Any Linux | Debian/Ubuntu, RHEL/CentOS | Any Linux | Ubuntu (snap-based) |
| Cluster Management | k0sctl (SSH-based) | Manual or Ansible | k3sup or manual | microk8s add-node |
| Windows Workers | Yes | Yes | No | No |
| CNCF Conformant | Yes | Yes | Yes | Yes |
| Best For | Production, edge, air-gapped | Traditional production clusters | Edge, IoT, lightweight production | Developer workstations, CI/CD |
k0s sits in a sweet spot between kubeadm (full-featured but complex) and K3s (lightweight but opinionated). It gives you a fully conformant Kubernetes cluster without requiring you to manage separate packages for each component.
Prerequisites
For this guide, we will set up a three-node cluster: one controller and two workers. You will also need a separate workstation (your laptop or a bastion host) where k0sctl runs. k0sctl connects to the target nodes over SSH and handles everything remotely.
Node requirements:
- 3 Linux servers (Ubuntu 22.04/24.04, Debian 12, Rocky Linux 9, or any modern distribution)
- Controller node: minimum 1 vCPU, 1 GB RAM (2 vCPU and 2 GB RAM recommended)
- Worker nodes: minimum 1 vCPU, 1 GB RAM (adjust based on your workloads)
- Network connectivity between all nodes on ports 6443 (API), 8132 (konnectivity), 9443 (controller join), and 10250 (kubelet)
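To avoid chasing connectivity issues later, it can help to script the firewall openings for the ports listed above. A minimal sketch, assuming ufw on Ubuntu-style nodes (adapt for firewalld or nftables):

```shell
# Emit the ufw commands for the ports a k0s cluster needs between nodes.
print_k0s_fw_rules() {
  for port in 6443 8132 9443 10250; do
    echo "ufw allow ${port}/tcp"
  done
}

# Review the output first, then pipe it to a root shell on each node:
#   print_k0s_fw_rules | sudo sh
print_k0s_fw_rules
```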
Workstation requirements:
- SSH access to all target nodes using key-based authentication
- k0sctl binary installed
- kubectl installed (for cluster verification)
The example node layout we will use throughout this guide:
| Hostname | IP Address | Role |
|---|---|---|
| controller-0 | 192.168.1.10 | controller |
| worker-0 | 192.168.1.11 | worker |
| worker-1 | 192.168.1.12 | worker |
Before moving forward, make sure your SSH key is distributed to all target nodes. From your workstation, confirm that you can reach each node without a password prompt:
ssh [email protected] 'hostname'
ssh [email protected] 'hostname'
ssh [email protected] 'hostname'
If you have not already set up SSH keys, generate one and copy it to each node:
ssh-keygen -t ed25519 -C "k0sctl-deploy-key"
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
Step 1: Install k0sctl on Your Workstation
k0sctl is a command-line tool that runs on your local machine. It reads a YAML configuration file, connects to your servers over SSH, and handles the full cluster lifecycle. Install it on your workstation (not on the target nodes).
Option A: Download the binary directly
Grab the latest release from GitHub. The commands below query the GitHub API for the newest tag, so there is no version number to edit by hand:
K0SCTL_VERSION=$(curl -s https://api.github.com/repos/k0sproject/k0sctl/releases/latest | grep tag_name | cut -d '"' -f 4)
echo "Installing k0sctl ${K0SCTL_VERSION}"
curl -sSLo k0sctl "https://github.com/k0sproject/k0sctl/releases/download/${K0SCTL_VERSION}/k0sctl-linux-x64"
chmod +x k0sctl
sudo mv k0sctl /usr/local/bin/
For macOS (Apple Silicon):
curl -sSLo k0sctl "https://github.com/k0sproject/k0sctl/releases/download/${K0SCTL_VERSION}/k0sctl-darwin-arm64"
chmod +x k0sctl
sudo mv k0sctl /usr/local/bin/
Option B: Install with Homebrew (macOS/Linux)
brew install k0sproject/tap/k0sctl
Verify the installation:
k0sctl version
You should see output showing the k0sctl version and the k0s version it defaults to.
Step 2: Create the k0sctl.yaml Configuration
k0sctl uses a YAML file to define your cluster topology, SSH connection details, and k0s configuration. You can generate a skeleton config and then customize it:
k0sctl init > k0sctl.yaml
Edit the file to match your environment. Below is a complete configuration for our three-node cluster:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-k0s-cluster
spec:
  k0s:
    version: 1.31.4+k0s.0
    config:
      spec:
        telemetry:
          enabled: false
  hosts:
    - role: controller
      ssh:
        address: 192.168.1.10
        user: root
        port: 22
        keyPath: ~/.ssh/id_ed25519
      installFlags:
        - --debug
    - role: worker
      ssh:
        address: 192.168.1.11
        user: root
        port: 22
        keyPath: ~/.ssh/id_ed25519
    - role: worker
      ssh:
        address: 192.168.1.12
        user: root
        port: 22
        keyPath: ~/.ssh/id_ed25519
Key fields to pay attention to:
- spec.k0s.version – Pin a specific k0s version. If omitted, k0sctl uses its bundled default version. Check the k0s GitHub releases page for the latest stable version.
- role – Set to controller, worker, or controller+worker (for single-node or combined setups).
- ssh.user – The user k0sctl connects as. This user needs sudo privileges (or use root directly).
- ssh.keyPath – Path to your SSH private key on the workstation.
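For example, a single-node lab cluster that schedules workloads on the controller could use a hosts section like the sketch below. The noTaints field removes the default controller taint so ordinary pods can schedule there; the address and key path are assumptions:

```yaml
hosts:
  - role: controller+worker
    noTaints: true   # allow regular workloads on the combined node
    ssh:
      address: 192.168.1.10
      user: root
      keyPath: ~/.ssh/id_ed25519
```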
If you are using a non-root user, that user must have passwordless sudo access. You can configure this on each node:
echo "deployuser ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deployuser
Step 3: Deploy the Cluster
With the configuration file ready, deploying the cluster is a single command. k0sctl will SSH into each node, download the k0s binary, configure the services, and join everything together:
k0sctl apply --config k0sctl.yaml
The process takes a few minutes depending on your network speed. k0sctl provides detailed output as it progresses through each phase:
- Connects to all hosts and gathers system information
- Downloads the k0s binary to each node
- Configures and starts the controller
- Generates join tokens for worker nodes
- Configures and starts each worker
- Waits for all nodes to report Ready status
When the deployment finishes successfully, you will see a summary showing all nodes and their roles.
Step 4: Get the Kubeconfig
After deployment, retrieve the kubeconfig file so you can interact with the cluster using kubectl:
k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/k0s-cluster.conf
export KUBECONFIG=~/.kube/k0s-cluster.conf
To make this persistent across terminal sessions, add the export line to your shell profile:
echo 'export KUBECONFIG=~/.kube/k0s-cluster.conf' >> ~/.bashrc
source ~/.bashrc
Step 5: Verify the Cluster
Check that all nodes are in a Ready state:
kubectl get nodes -o wide
Expected output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP OS-IMAGE
worker-0 Ready <none> 3m v1.31.4+k0s 192.168.1.11 Ubuntu 24.04 LTS
worker-1 Ready <none> 3m v1.31.4+k0s 192.168.1.12 Ubuntu 24.04 LTS
Note that controller nodes do not appear in the kubectl get nodes output by default. This is expected behavior in k0s. Controllers run the control plane but do not register as Kubernetes nodes unless you use the controller+worker role.
Check system pods:
kubectl get pods -n kube-system
You should see pods for coredns, kube-proxy, kube-router (or your chosen CNI), and metrics-server.
Verify cluster component health:
kubectl get --raw='/readyz?verbose'
Step 6: Configure CNI Networking
k0s ships with kube-router as the default CNI plugin. It handles pod networking and network policies out of the box, while the bundled kube-proxy handles service traffic. For many deployments, the default works perfectly and you can skip this section.
If you prefer to use a different CNI like Calico, you need to tell k0s to skip its built-in CNI and deploy your own. Update the k0sctl.yaml config before deployment:
spec:
  k0s:
    config:
      spec:
        network:
          provider: custom
          podCIDR: 10.244.0.0/16
          serviceCIDR: 10.96.0.0/12
After deploying with provider: custom, install the Calico operator and custom resource:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
Then create the Calico Installation resource. Make sure the CIDR matches what you set in the k0s config:
cat <<EOF | kubectl apply -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 10.244.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
EOF
Wait for Calico pods to come up:
kubectl get pods -n calico-system --watch
Step 7: Deploy a Test Workload
Let us confirm the cluster is fully functional by deploying an nginx workload and exposing it:
kubectl create deployment nginx-test --image=nginx:latest --replicas=3
Expose the deployment as a NodePort service:
kubectl expose deployment nginx-test --port=80 --type=NodePort
Find the assigned NodePort:
kubectl get svc nginx-test
Test connectivity from your workstation using the worker node IP and the NodePort shown in the output (for example, port 31234):
curl http://192.168.1.11:31234
You should see the default nginx welcome page. Clean up the test resources once you have confirmed everything works:
kubectl delete deployment nginx-test
kubectl delete svc nginx-test
Step 8: Add or Remove Nodes
Scaling the cluster with k0sctl is straightforward. You modify the configuration file and reapply it.
Adding a worker node: Add a new host entry to the hosts section of your k0sctl.yaml:
- role: worker
  ssh:
    address: 192.168.1.13
    user: root
    port: 22
    keyPath: ~/.ssh/id_ed25519
Then reapply the configuration. k0sctl detects what already exists and only provisions the new node:
k0sctl apply --config k0sctl.yaml
Verify the new node joined:
kubectl get nodes
Removing a worker node: First, drain the node to safely evict all workloads:
kubectl drain worker-2 --ignore-daemonsets --delete-emptydir-data
Remove the host entry from k0sctl.yaml and reapply. Note that k0sctl does not delete the node object from the cluster for you, so remove it explicitly and then wipe the k0s data from the machine itself:
kubectl delete node worker-2
ssh [email protected] 'k0s reset'
Avoid running k0sctl reset for this: it tears down every host listed in the configuration file, not just one node. Draining the node, removing it from the config, and resetting only that machine is the safer approach.
Step 9: Upgrade the Cluster with k0sctl
k0sctl handles rolling upgrades. The process is: update the version in your config file and reapply. k0sctl upgrades controllers first, then workers one at a time, cordoning and draining each node before upgrading it.
Update the version in k0sctl.yaml to the target release:
spec:
  k0s:
    version: 1.32.1+k0s.0
Run the apply command to start the rolling upgrade:
k0sctl apply --config k0sctl.yaml
Monitor the upgrade progress in the terminal output. After completion, verify all nodes are running the new version:
kubectl get nodes -o wide
Important upgrade guidelines:
- Always upgrade one minor version at a time (for example, 1.30 to 1.31, not 1.30 to 1.32)
- Read the k0s release notes for any breaking changes before upgrading
- Test upgrades in a staging environment first
- Back up your cluster state before starting the upgrade (see the next section)
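The one-minor-version rule from the list above is easy to enforce with a small guard in your upgrade scripts. A sketch, assuming version strings shaped like 1.31.4+k0s.0:

```shell
# minor_skew_ok CURRENT TARGET
# Succeeds only when TARGET is the same minor version or exactly one ahead.
minor_skew_ok() {
  cur=$(echo "$1" | cut -d. -f2)
  tgt=$(echo "$2" | cut -d. -f2)
  skew=$((tgt - cur))
  [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]
}

# Example: abort an upgrade plan that jumps two minors:
#   minor_skew_ok "1.30.0+k0s.0" "1.32.1+k0s.0" || echo "skew too large"
```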
Step 10: Backup and Restore
k0s includes built-in backup functionality that captures the etcd datastore and k0s configuration. This is essential before upgrades or any major cluster changes.
Creating a backup: SSH into the controller node and run the backup command:
ssh [email protected] 'k0s backup --save-path /tmp/'
This creates a compressed archive in /tmp/ containing the etcd snapshot, certificates, and k0s configuration. Copy the backup file to a safe location:
scp [email protected]:/tmp/k0s_backup_*.tar.gz ./backups/
Restoring from backup: To restore, you need a fresh controller node (or a wiped one). Stop the k0s service first if it is running:
ssh [email protected] 'k0s stop'
Run the restore command with the path to your backup archive:
ssh [email protected] 'k0s restore /tmp/k0s_backup_2026-03-19.tar.gz'
Start the k0s service again after the restore completes:
ssh [email protected] 'k0s start'
Worker nodes should reconnect automatically once the controller is back online. If they do not, restart the k0s worker service on each node.
For production environments, automate backups with a cron job on the controller node:
0 2 * * * /usr/local/bin/k0s backup --save-path /opt/k0s-backups/ && find /opt/k0s-backups/ -name 'k0s_backup_*' -type f -mtime +7 -delete
This takes a daily backup at 2 AM and removes backups older than 7 days.
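If you prefer a script over a one-line cron entry, the retention logic can live in a small function that is easy to test in isolation. A sketch; the directory layout and file naming are assumptions based on the default k0s backup archive names:

```shell
# prune_backups DIR [DAYS]
# Delete k0s backup archives in DIR older than DAYS (default 7),
# leaving everything else in the directory alone.
prune_backups() {
  dir="$1"
  days="${2:-7}"
  find "$dir" -name 'k0s_backup_*.tar.gz' -type f -mtime "+${days}" -delete
}

# Cron usage, after the nightly k0s backup:
#   prune_backups /opt/k0s-backups 7
```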
Troubleshooting
Here are the most common issues you will encounter when deploying and operating k0s clusters, along with their solutions.
k0sctl cannot connect to a host
Verify SSH connectivity manually. Check that the user has passwordless sudo access and that the SSH key path in k0sctl.yaml is correct. Also confirm that no firewall is blocking SSH (port 22) between your workstation and the target node:
ssh -v -i ~/.ssh/id_ed25519 [email protected]
Worker nodes stuck in NotReady state
This usually indicates a CNI issue. Check that the CNI plugin pods are running in kube-system (or calico-system if using Calico). Inspect the kubelet logs on the worker node for errors:
ssh [email protected] 'k0s kubectl get pods -n kube-system'
ssh [email protected] 'journalctl -u k0sworker -f'
Controller service fails to start
Check the controller logs for errors related to etcd or certificate generation. Port conflicts on 6443 or 2380 are a common cause:
ssh [email protected] 'journalctl -u k0scontroller -f'
ssh [email protected] 'ss -tlnp | grep -E "6443|2380|9443"'
Pods stuck in Pending state
Describe the pod to check for scheduling issues. Common causes include insufficient resources, taints on nodes, or no worker nodes available:
kubectl describe pod <pod-name>
kubectl get events --sort-by='.lastTimestamp'
Resetting a failed installation
If the deployment fails partway through, reset the affected nodes before retrying. This wipes all k0s data from the node:
ssh [email protected] 'k0s reset'
Then run k0sctl apply again from your workstation.
Checking k0s status on a node
The k0s binary has a built-in status command that provides a quick overview of the node role, process state, and k0s version:
ssh [email protected] 'k0s status'
Wrapping Up
You now have a production-capable k0s Kubernetes cluster running on Linux, fully managed by k0sctl from your workstation. The combination of k0s and k0sctl gives you a clean operational workflow: define your cluster in a YAML file, apply it, and let the tooling handle the rest. Upgrades, scaling, and day-two operations all follow the same pattern of editing the config and reapplying.
For multi-controller high availability setups, add additional controller nodes to your k0sctl.yaml (use an odd number, typically 3 or 5, for proper etcd quorum). If you are running in environments without internet access, k0s supports air-gapped installations with pre-packaged image bundles. Check the official k0s documentation for details on those advanced configurations.