MicroK8s is a lightweight, production-ready Kubernetes distribution built and maintained by Canonical. It ships as a single snap package that bundles the entire Kubernetes stack (API server, kubelet, container runtime, and the Dqlite datastore), so you can install it in under two minutes. Unlike full-blown installers such as kubeadm, MicroK8s targets developers, edge deployments, IoT clusters, and small production workloads where simplicity matters more than deep customization.

This guide walks you through installing and configuring a MicroK8s Kubernetes cluster on Rocky Linux 10, AlmaLinux 10, and Ubuntu 24.04 LTS. We will cover single-node setup, multi-node clustering, add-on management, persistent storage, high availability, dashboard access, and day-two operations like upgrades and troubleshooting.

MicroK8s vs kubeadm vs K3s vs Kind

Choosing the right Kubernetes installer depends on what you are building. Here is a quick comparison to help you decide.

  • MicroK8s – Best for: dev, edge, IoT, small prod. Runtime: containerd (bundled). HA: yes (3+ nodes, Dqlite). Snap-based, single binary, built-in add-ons.
  • kubeadm – Best for: production, full control. Runtime: containerd / CRI-O. HA: yes (external etcd or stacked). Official upstream installer, most flexibility.
  • K3s – Best for: edge, ARM, resource-constrained hosts. Runtime: containerd (bundled). HA: yes (embedded etcd or external DB). CNCF sandbox project by SUSE, very small footprint.
  • Kind – Best for: CI pipelines, local testing. Runtime: Docker containers as nodes. HA: no. Runs clusters inside Docker, not for production.

When to pick MicroK8s: You want a zero-configuration Kubernetes that installs with one command, bundles its own container runtime, and gives you an add-on system for DNS, ingress, storage, and observability without writing manifests by hand. It is particularly good for teams that need a consistent experience from laptop to edge node to small production cluster.

Prerequisites

Before you begin, make sure each node meets the following minimum requirements:

  • 2 CPU cores (4 recommended for multi-node)
  • 4 GB RAM (8 GB recommended)
  • 20 GB free disk space
  • Root or sudo access
  • Internet connectivity to pull snap packages and container images
  • Unique hostname on every node if building a cluster

Install MicroK8s on Ubuntu 24.04 LTS

Ubuntu ships with snapd pre-installed, so MicroK8s installation is straightforward. Start by making sure your system is up to date.

sudo apt update && sudo apt upgrade -y

Install MicroK8s from the stable channel. The --classic flag is required because MicroK8s needs host-level access to networking and storage.

sudo snap install microk8s --classic --channel=1.31/stable

Replace 1.31/stable with whatever Kubernetes version you want. You can list available channels with snap info microk8s.

Verify the installation by checking the snap list.

snap list microk8s

That is all it takes on Ubuntu. MicroK8s is now installed and the Kubernetes services are starting in the background.

Install MicroK8s on Rocky Linux 10 / AlmaLinux 10

RHEL-based distributions do not ship with snapd, so you need to install it first. The snap daemon is available through the EPEL (Extra Packages for Enterprise Linux) repository.

Start with a system update.

sudo dnf update -y

Install the EPEL repository if it is not already present.

sudo dnf install -y epel-release

Now install snapd.

sudo dnf install -y snapd

Enable and start the snapd socket, then create the required symlink for classic snap support.

sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap

Log out and log back in (or reboot) so that the snap paths are picked up by your shell. After logging back in, confirm snapd is working.

snap version

With snapd ready, install MicroK8s exactly the same way you would on Ubuntu.

sudo snap install microk8s --classic --channel=1.31/stable

If your system runs SELinux in enforcing mode, you may need to allow the MicroK8s snap to operate. Check the status and set it to permissive temporarily if you run into issues during initial setup.

getenforce
sudo setenforce 0

For a permanent change that survives reboot, edit /etc/selinux/config and set SELINUX=permissive. In production, the better approach is to create a custom SELinux policy module for MicroK8s rather than disabling enforcement entirely.

Also make sure the firewall allows traffic between cluster nodes. At minimum, open port 25000 (used for node joining) and port 16443 (the Kubernetes API).

sudo firewall-cmd --permanent --add-port=25000/tcp
sudo firewall-cmd --permanent --add-port=16443/tcp
sudo firewall-cmd --reload
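On a multi-node or HA cluster, a few more ports typically need to be open between nodes. The list below is a hedged sketch based on the services MicroK8s commonly runs (kubelet, the Dqlite datastore, and the Calico VXLAN overlay); verify it against the MicroK8s documentation for your version.

```shell
# Additional inter-node ports commonly required for multi-node clusters
# (confirm against the MicroK8s docs for your release):
sudo firewall-cmd --permanent --add-port=10250/tcp   # kubelet API
sudo firewall-cmd --permanent --add-port=19001/tcp   # Dqlite (HA datastore)
sudo firewall-cmd --permanent --add-port=4789/udp    # Calico VXLAN overlay
sudo firewall-cmd --reload
```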

Post-Install Configuration

These steps apply to all three distributions. First, add your user to the microk8s group so you can run commands without sudo.

sudo usermod -aG microk8s $USER
mkdir -p ~/.kube
sudo chown -R $USER:$USER ~/.kube

Log out and back in for group membership to take effect. Then verify the node is ready.

microk8s status --wait-ready

You should see output showing that MicroK8s is running along with a list of available add-ons. Next, set up an alias so you can use the standard kubectl command instead of typing microk8s kubectl every time.

echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
source ~/.bashrc

Alternatively, you can export the kubeconfig to use an externally installed kubectl.

microk8s config > ~/.kube/config

Confirm everything works by listing the nodes and checking cluster info.

kubectl get nodes
kubectl cluster-info

Your single-node MicroK8s cluster is now operational.

Enable Essential Add-ons

MicroK8s ships with a rich add-on ecosystem that you can toggle with a single command. For a functional cluster, you will want to enable at least the following.

microk8s enable dns hostpath-storage ingress dashboard metrics-server registry

Here is what each add-on provides:

  • dns (CoreDNS) – Cluster DNS resolution. Almost every workload needs this.
  • hostpath-storage – Default StorageClass backed by host directories (the older name storage is a deprecated alias). Good for testing and single-node clusters.
  • ingress – NGINX-based Ingress controller for routing external HTTP/HTTPS traffic to services.
  • dashboard – The official Kubernetes Dashboard web UI.
  • metrics-server – Collects resource metrics from kubelets. Required for kubectl top and Horizontal Pod Autoscaler.
  • registry – A private Docker registry running inside the cluster on port 32000.

Check which add-ons are currently active.

microk8s status

To disable an add-on you no longer need, run microk8s disable <addon-name>.
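With the registry add-on enabled, you can push locally built images straight into the cluster. A hedged sketch, assuming Docker is installed on the node and using myapp as a placeholder image name; MicroK8s preconfigures its bundled containerd to trust the localhost:32000 endpoint, but verify that on your version.

```shell
# Hypothetical example: build an image and push it to the in-cluster
# registry on NodePort 32000. "myapp" is a placeholder name.
docker build -t localhost:32000/myapp:v1 .
docker push localhost:32000/myapp:v1

# Reference it from a workload like any other image:
kubectl create deployment myapp --image=localhost:32000/myapp:v1
```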

Build a Multi-Node Cluster

MicroK8s makes clustering dead simple. On the first node (the one that will generate the join token), run the following.

microk8s add-node

This prints a join command containing a token and the IP address of the control plane. The output looks something like this:

# Example output (do not copy this token, use the one from your own cluster)
microk8s join 192.168.1.10:25000/a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4/abc123def456

Copy that join command and run it on the second node. Make sure MicroK8s is already installed there.

microk8s join 192.168.1.10:25000/a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4/abc123def456

Each token is single-use. To add a third node, go back to the first node, run microk8s add-node again, and use the new token on the third machine.

If you want a node to function only as a worker (no control-plane duties), append the --worker flag to the join command.

microk8s join 192.168.1.10:25000/a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4/abc123def456 --worker

After joining, verify the cluster from any control-plane node.

kubectl get nodes -o wide

You should see all nodes listed with a Ready status.

Deploy a Test Application

Let us deploy a simple NGINX web server to confirm the cluster is functioning properly. Create a deployment and expose it with a NodePort service.

kubectl create deployment nginx-test --image=nginx:latest --replicas=2

Wait for the pods to reach the Running state.

kubectl get pods -l app=nginx-test -w

Expose the deployment as a NodePort service so you can reach it from outside the cluster.

kubectl expose deployment nginx-test --type=NodePort --port=80

Find the assigned NodePort.

kubectl get svc nginx-test

Grab the port from the output (for example, 80:31234/TCP) and test with curl.

curl http://localhost:31234

If you see the default NGINX welcome page in the response, your cluster is working correctly. Clean up the test resources when you are done.

kubectl delete deployment nginx-test
kubectl delete svc nginx-test
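The imperative commands above are convenient for a smoke test, but the same resources can be expressed declaratively, which is easier to version-control. A minimal sketch of the equivalent manifest, applied with the same heredoc pattern used elsewhere in this guide:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80
EOF
```

Deleting is symmetric: pipe the same manifest into kubectl delete -f -.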

Access the Kubernetes Dashboard

If you enabled the dashboard add-on earlier, it is already running. Start the dashboard proxy so it is reachable from your browser.

microk8s dashboard-proxy

This command prints a URL (typically https://127.0.0.1:10443) and a token you can use to log in. Open that URL in a browser, accept the self-signed certificate warning, paste the token, and you will land on the Kubernetes Dashboard. If you need a fresh login token later, you can generate one for a service account with microk8s kubectl create token <service-account>; the token carries only that service account's permissions.

If you are accessing the dashboard from a remote machine, you have two options. You can use SSH port forwarding from your workstation:

ssh -L 10443:127.0.0.1:10443 user@your-server-ip

Then open https://127.0.0.1:10443 in your local browser. Alternatively, you can expose the dashboard service as a NodePort, though this is not recommended for production.
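If you do choose the NodePort route, patching the service type is one way to do it. This is a hedged sketch: it assumes the add-on deployed the service as kubernetes-dashboard in the kube-system namespace, which you should confirm first with kubectl get svc -A.

```shell
# Not recommended for production: the dashboard becomes reachable on
# every node's IP. Verify the service name and namespace first.
kubectl patch svc kubernetes-dashboard -n kube-system \
  -p '{"spec": {"type": "NodePort"}}'
kubectl get svc kubernetes-dashboard -n kube-system   # note the assigned port
```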

Persistent Storage with hostpath-storage

The hostpath-storage add-on (enabled under the name storage in older MicroK8s releases) creates a default StorageClass that provisions PersistentVolumes backed by directories on the host filesystem. This is the simplest way to get persistent storage working in MicroK8s.

Verify the StorageClass exists.

kubectl get storageclass

You should see microk8s-hostpath listed as the default. Now create a PersistentVolumeClaim to test it.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

Check that the PVC is bound.

kubectl get pvc test-pvc

The status should show Bound. The actual data is stored under /var/snap/microk8s/common/default-storage/ on the host.
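To confirm the volume actually stores data, mount the claim in a throwaway pod and write a file through it. A minimal sketch; the pod and file names are placeholders.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-writer
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF
```

Once the pod is Running, kubectl exec pvc-writer -- cat /data/hello.txt should print the file back, and you will find it under the host path mentioned above. Delete the pod (kubectl delete pod pvc-writer) before deleting the PVC, since a bound claim in use by a pod will not be removed until the pod is gone.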

Important: hostpath-storage is designed for single-node clusters and development use. In a multi-node setup, a pod that gets rescheduled to a different node will not have access to data stored on the original node. For multi-node persistent storage, consider NFS, Ceph (via the Rook add-on), or a cloud-native CSI driver.

Clean up the test PVC if you do not need it.

kubectl delete pvc test-pvc

High Availability with 3+ Nodes

MicroK8s supports high availability out of the box when you have three or more control-plane nodes. It uses Dqlite, a distributed SQLite database developed by Canonical, as its datastore instead of etcd. When three or more nodes join the cluster without the --worker flag, Dqlite automatically forms a consensus group with Raft-based replication.

To build an HA cluster, install MicroK8s on three machines and join them as described in the multi-node section above. Do not use the --worker flag for any of them.

# On node1 (run this for each additional node)
microk8s add-node

# On node2
microk8s join 192.168.1.10:25000/<token>

# On node1 again (generate a new token for node3)
microk8s add-node

# On node3
microk8s join 192.168.1.10:25000/<new-token>

Once all three nodes have joined, verify the HA status.

microk8s status

The output should indicate that high availability is enabled. You can now lose one control-plane node and the cluster will continue operating. The Dqlite consensus requires a majority of nodes to be online, so with three nodes you can tolerate one failure. With five nodes, you can tolerate two.
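The tolerance numbers follow directly from majority-quorum arithmetic, which a couple of lines of shell can illustrate:

```shell
# Majority-quorum arithmetic behind Raft-style consensus (as used by
# Dqlite): a cluster of n voting nodes needs (n / 2 + 1) members online,
# so it can tolerate (n - 1) / 2 simultaneous failures.
for n in 3 5 7; do
  echo "$n nodes: quorum=$(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

This is also why even node counts buy you nothing: four nodes still only tolerate one failure, the same as three.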

To remove a node from the cluster (for example, during maintenance), run the following from any remaining control-plane node.

microk8s remove-node <node-name-or-ip>

On the node being removed, reset it to a clean state.

microk8s leave

Upgrade and Channel Tracking

MicroK8s uses snap channels to track Kubernetes versions. Each channel follows the format <major>.<minor>/<risk-level>. For example, 1.31/stable tracks the latest patch release of Kubernetes 1.31.

Check your current channel.

snap info microk8s | grep tracking

To upgrade to a newer Kubernetes version, switch the channel.

sudo snap refresh microk8s --channel=1.32/stable

The snap daemon will download the new version and restart MicroK8s services. On a multi-node cluster, upgrade one node at a time to avoid downtime. Upgrade the control-plane nodes first, then the workers, so that no kubelet ever runs a newer version than the API server (per the Kubernetes version-skew policy).
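A per-node upgrade routine might look like the following sketch. The hostname node2 is a placeholder; draining evicts workloads onto the remaining nodes so the refresh causes no interruption.

```shell
# Run the drain/uncordon steps from any machine with cluster access.
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data

# On node2 itself:
sudo snap refresh microk8s --channel=1.32/stable
microk8s status --wait-ready

# Back on the admin machine, once node2 reports Ready again:
kubectl uncordon node2
```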

If you want MicroK8s to stay on a specific patch version and not auto-update, you can hold the snap.

sudo snap refresh --hold microk8s

To resume automatic updates later, unhold it.

sudo snap refresh --unhold microk8s

Tip: In production, always test upgrades on a staging cluster first. Also review the Kubernetes changelog for any deprecations or breaking changes between minor versions before switching channels.

Troubleshooting

Here are the most common issues you will encounter and how to resolve them.

MicroK8s is not starting

Run the built-in inspection tool to get a detailed diagnostic report.

microk8s inspect

This generates a tarball with logs, configuration, and network diagnostics. Check the output for warnings and errors. Common causes include insufficient memory, port conflicts, or firewall rules blocking internal communication.

Nodes stuck in NotReady

Check the kubelet logs for errors.

journalctl -u snap.microk8s.daemon-kubelite -n 50 --no-pager

Common causes: DNS resolution failure (enable the dns add-on), network plugin not running, or the node ran out of disk space. Also verify that all required ports are open between the nodes.

Pod DNS resolution not working

Make sure the dns add-on is enabled.

microk8s enable dns

Then verify the CoreDNS pod is running.

kubectl get pods -n kube-system -l k8s-app=kube-dns

If CoreDNS is crashing, check its logs with kubectl logs -n kube-system -l k8s-app=kube-dns.
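A quick way to test resolution from inside the cluster is a throwaway pod that looks up the API service; the pod is removed automatically when the command exits.

```shell
# In-cluster DNS check: should resolve to the kubernetes service ClusterIP.
kubectl run dns-check --rm -it --image=busybox:1.36 --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```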

Snap-related issues on Rocky Linux / AlmaLinux

If snap commands hang or fail after installation, make sure the snapd socket is active.

sudo systemctl status snapd.socket
sudo systemctl restart snapd.socket

Also verify the /snap symlink exists. Without it, classic snaps like MicroK8s will fail to install.

ls -la /snap

If the symlink is missing, recreate it.

sudo ln -s /var/lib/snapd/snap /snap

SELinux blocking MicroK8s

Check the audit log for denials.

sudo ausearch -m avc -ts recent

If you see denials related to snap or microk8s, you can generate a custom policy module.

sudo ausearch -m avc -ts recent | audit2allow -M microk8s-custom
sudo semodule -i microk8s-custom.pp

This approach is preferred over setting SELinux to permissive mode in production environments.

Resetting MicroK8s

If things go sideways and you want a clean slate, reset the installation. This removes all workloads, add-ons, and configuration.

microk8s reset

For a complete removal including the snap package:

sudo snap remove microk8s --purge

Summary

MicroK8s gives you a fully functional Kubernetes cluster with minimal setup overhead. Whether you are running it on Ubuntu 24.04 where snap is native or on Rocky Linux 10 / AlmaLinux 10 where you install snapd through EPEL, the experience after installation is identical. The add-on system handles DNS, ingress, storage, monitoring, and the dashboard without requiring you to manage Helm charts or raw manifests. For production use, scale to three or more control-plane nodes to get automatic high availability through Dqlite. Keep your cluster current by tracking snap channels and upgrading nodes one at a time.
