RKE2 Single Node Kubernetes Setup on Rocky Linux 10

RKE2 (also called RKE Government) is Rancher’s next-generation Kubernetes distribution built for security-conscious environments. Unlike K3s, which optimizes for simplicity and edge deployments, RKE2 targets CIS Kubernetes Benchmark compliance out of the box. It ships with FIPS 140-2 validated crypto modules, uses an embedded etcd datastore (where single-node K3s defaults to SQLite), and runs the Canal CNI with network policies enabled by default. If you need a hardened, single-binary Kubernetes that still feels like upstream, this is it.

Original content from computingforgeeks.com - post 165135

This guide walks through a complete single-node RKE2 deployment on Rocky Linux 10 with SELinux enforcing. A single-node setup works well for development, CI/CD runners, homelab clusters, or any workload where high availability isn’t the primary concern. We’ll cover the install, a critical kernel module gotcha that catches most people on Rocky 10 cloud images, cluster verification, and a test deployment. For a lighter alternative, see our K3s quickstart guide. The official RKE2 documentation covers multi-node and HA configurations.

Verified working: March 2026 on Rocky Linux 10.1 (kernel 6.12), SELinux enforcing, RKE2 v1.35.3+rke2r1

Prerequisites

You need a single Rocky Linux 10 server (physical or virtual) with the following minimum specs:

  • CPU: 2 cores minimum (4 recommended for running workloads alongside the control plane)
  • RAM: 4 GB minimum. RKE2 with all system pods consumes roughly 1.7 GB at idle
  • Disk: 20 GB free (container images add up quickly)
  • OS: Rocky Linux 10.1 (Red Quartz), fresh minimal or cloud image
  • Access: Root or a user with sudo privileges
  • SELinux: Enforcing (the installer handles SELinux policies automatically)
  • Firewall: Ports 6443 (API server), 9345 (RKE2 supervisor), and 10250 (kubelet) open if you plan to add nodes later
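The CPU, RAM, and disk minimums above can be sanity-checked with a short preflight script. The thresholds mirror this guide’s recommendations, not hard RKE2 limits, so adjust them for your own workloads:

```shell
#!/usr/bin/env sh
# Preflight check against the minimum specs listed above.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -Pk / | awk 'NR==2 {printf "%d", $4 / 1024 / 1024}')

[ "$cores" -ge 2 ]    && echo "CPU:  ${cores} cores OK"     || echo "CPU:  ${cores} cores TOO LOW"
[ "$mem_gb" -ge 4 ]   && echo "RAM:  ${mem_gb} GB OK"       || echo "RAM:  ${mem_gb} GB TOO LOW"
[ "$disk_gb" -ge 20 ] && echo "Disk: ${disk_gb} GB free OK" || echo "Disk: ${disk_gb} GB free TOO LOW"
```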

Confirm your OS release and SELinux status before proceeding:

cat /etc/rocky-release
getenforce

The output should confirm Rocky Linux 10.1 and Enforcing mode:

Rocky Linux release 10.1 (Red Quartz)
Enforcing
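If firewalld is active and you might join more nodes later, the ports from the prerequisites can be opened now. A short sketch that does nothing on hosts where firewalld isn’t installed:

```shell
# Open the RKE2 ports listed in the prerequisites (only needed if
# additional nodes may join this cluster later).
ports="6443 9345 10250"
if command -v firewall-cmd >/dev/null 2>&1; then
  for p in $ports; do
    sudo firewall-cmd --permanent --add-port="${p}/tcp"
  done
  sudo firewall-cmd --reload
fi
```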

Install RKE2

RKE2 provides a single install script that downloads the binary, creates systemd units, and installs the SELinux policy module. No package repositories to configure manually.

curl -sfL https://get.rke2.io | sudo sh -

The installer pulls several packages including the SELinux policies and container runtime dependencies:

  container-selinux-4:2.240.0-1.el10.noarch
  iptables-nft-1.8.11-11.el10.x86_64
  rke2-common-1.35.3~rke2r1-0.el10.x86_64
  rke2-selinux-0.22-1.el10.noarch
  rke2-server-1.35.3~rke2r1-0.el10.x86_64

On Rocky 10 cloud/minimal images, the installer may also pull a newer kernel package. The shipping kernel on those images can lack the br_netfilter module that Kubernetes networking requires, and the RKE2 installer resolves this by installing a kernel that includes it. This matters a lot, as explained in the next section.
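Before the first start, RKE2 also reads an optional config file at /etc/rancher/rke2/config.yaml. A minimal sketch if you want a fixed node name or an extra TLS SAN on the API server certificate; both values below are placeholders, substitute your own:

```yaml
# /etc/rancher/rke2/config.yaml -- read by rke2-server at startup.
node-name: rke2-single
tls-san:
  - rke2.example.internal
```

This step is entirely optional for a single-node setup; the defaults work fine.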

Reboot for Kernel Modules (Critical)

This is the single biggest gotcha with RKE2 on Rocky Linux 10 cloud images, and the reason this section exists as its own heading. The br_netfilter kernel module is required for Kubernetes pod networking. Without it, the Canal CNI plugin cannot configure network bridges and pod-to-pod communication fails entirely.

The RKE2 installer pulls a newer kernel that includes br_netfilter, but you must reboot into that kernel for the module to become available. If you skip the reboot and try to start RKE2 immediately, the Canal CNI pods will crash-loop with timeout errors trying to reach the API server.

Reboot now:

sudo reboot

After the system comes back up, verify the new kernel is loaded and the module is available:

uname -r
sudo modprobe br_netfilter
lsmod | grep br_netfilter

You should see the updated kernel version and the module loaded:

6.12.0-124.40.1.el10_1.x86_64
br_netfilter           32768  0
bridge                421888  1 br_netfilter

With the module loaded, RKE2 networking will function correctly. If you’re running on bare metal or a VM that already has kernel 6.12.0-124 or newer, the reboot may not be strictly necessary, but it’s safer to reboot regardless.
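RKE2 normally handles module loading itself at service start, but if you want br_netfilter pinned independently of service start order, a modules-load.d entry guarantees it loads at every boot. A sketch (the filename is our choice; overlay is included because containerd’s overlayfs snapshotter also uses it):

```
# /etc/modules-load.d/rke2.conf
br_netfilter
overlay
```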

Start and Enable RKE2

Enable and start the RKE2 server service:

sudo systemctl enable --now rke2-server.service

The first start takes 2 to 3 minutes. RKE2 is pulling container images for etcd, the API server, controller manager, scheduler, CoreDNS, Canal CNI, the ingress controller, and metrics server. Be patient. You can watch the progress in the journal:

sudo journalctl -u rke2-server -f

Once the service is fully up, check its status:

sudo systemctl status rke2-server

The service should show active (running). If it shows activating, give it another minute for the image pulls to complete.

Configure kubectl Access

RKE2 installs its own kubectl binary in /var/lib/rancher/rke2/bin/ and writes the kubeconfig to /etc/rancher/rke2/rke2.yaml. That’s a lot of typing for every command. Set up your environment to make kubectl work directly.

Add the RKE2 bin directory to your PATH and export the KUBECONFIG variable. Append these lines to your shell profile so they persist across sessions:

echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> ~/.bashrc
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> ~/.bashrc
source ~/.bashrc

Verify kubectl works:

kubectl version

If you’re running as a non-root user, copy the kubeconfig to your home directory and adjust permissions:

mkdir -p ~/.kube
sudo cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config

The RKE2 bin directory also contains crictl, ctr, and containerd for container-level debugging. These come in handy when troubleshooting image pull issues or inspecting running containers directly.
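For example, crictl can talk to RKE2’s bundled containerd directly. The socket path below is RKE2’s default location; the command is skipped if the socket isn’t present:

```shell
# RKE2's containerd listens on its own socket, not the system default.
sock="unix:///run/k3s/containerd/containerd.sock"
if [ -S /run/k3s/containerd/containerd.sock ]; then
  # List running containers as seen by containerd (root-owned socket).
  sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint "$sock" ps
fi
```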

Verify the Cluster

Check that the node is in Ready state:

kubectl get nodes -o wide

The output confirms a single control-plane node with etcd, running containerd as the runtime:

NAME          STATUS   ROLES                       AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                    CONTAINER-RUNTIME
rke2-single   Ready    control-plane,etcd   5m    v1.35.3+rke2r1    10.0.1.50     <none>        Rocky Linux 10.1 (Red Quartz)  6.12.0-124.40.1.el10_1.x86_64   containerd://2.2.2-k3s1

List all pods across every namespace to confirm the system components are healthy:

kubectl get pods -A

Every pod should show Running (or Completed for the one-shot helm install jobs):

NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-rke2-single                    1/1     Running     0          5m
kube-system   etcd-rke2-single                                        1/1     Running     0          5m
kube-system   helm-install-rke2-canal-xxxxx                           0/1     Completed   0          5m
kube-system   helm-install-rke2-coredns-xxxxx                         0/1     Completed   0          5m
kube-system   helm-install-rke2-ingress-nginx-xxxxx                   0/1     Completed   0          5m
kube-system   helm-install-rke2-metrics-server-xxxxx                  0/1     Completed   0          5m
kube-system   helm-install-rke2-snapshot-controller-xxxxx             0/1     Completed   0          5m
kube-system   helm-install-rke2-snapshot-validation-webhook-xxxxx     0/1     Completed   0          5m
kube-system   kube-apiserver-rke2-single                              1/1     Running     0          5m
kube-system   kube-controller-manager-rke2-single                     1/1     Running     0          5m
kube-system   kube-proxy-rke2-single                                  1/1     Running     0          5m
kube-system   kube-scheduler-rke2-single                              1/1     Running     0          5m
kube-system   rke2-canal-xxxxx                                        2/2     Running     0          5m
kube-system   rke2-coredns-rke2-coredns-xxxxx                         1/1     Running     0          5m
kube-system   rke2-coredns-rke2-coredns-autoscaler-xxxxx              1/1     Running     0          5m
kube-system   rke2-ingress-nginx-controller-xxxxx                     1/1     Running     0          5m
kube-system   rke2-metrics-server-xxxxx                               1/1     Running     0          5m
kube-system   rke2-snapshot-controller-xxxxx                          1/1     Running     0          5m

That’s a fully functional single-node Kubernetes cluster. The control plane components (API server, scheduler, controller manager, etcd) run as static pods managed by the kubelet. Canal provides both network policy enforcement and pod networking. The ingress NGINX controller is ready to route external traffic to your services.

Deploy a Test Workload

Spin up a quick nginx deployment to confirm the cluster actually schedules and runs workloads:

kubectl create deployment nginx-test --image=nginx:latest --replicas=2

Expose it as a NodePort service so you can reach it from outside the cluster:

kubectl expose deployment nginx-test --port=80 --type=NodePort

Check that both replicas are running and have received pod IPs from the Canal CNI:

kubectl get pods -o wide -l app=nginx-test

Both pods should show Running with IPs in the 10.42.0.0/16 range (the default Canal pod CIDR):

NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
nginx-test-7c5b8d65b8-k4x2m  1/1     Running   0          30s   10.42.0.14   rke2-single   <none>           <none>
nginx-test-7c5b8d65b8-r9p3n  1/1     Running   0          30s   10.42.0.15   rke2-single   <none>           <none>

Find the assigned NodePort:

kubectl get svc nginx-test

The service maps port 80 to a random high port on the node:

NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-test   NodePort   10.43.85.192   <none>        80:30476/TCP   15s

Test it with curl using the node’s IP and the NodePort:

curl -s http://10.0.1.50:30476 | head -5

You should see the nginx welcome page HTML, confirming end-to-end connectivity from the node through the service to the pod.
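Since RKE2 bundles ingress-nginx, the same service could instead be reached through an Ingress. A hypothetical sketch, assuming nginx.example.com resolves to the node (if you apply it, delete it along with the other test resources):

```yaml
# Routes nginx.example.com through RKE2's bundled ingress-nginx
# to the nginx-test service created above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-test
                port:
                  number: 80
```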

Clean up the test resources when you’re done:

kubectl delete deployment nginx-test
kubectl delete svc nginx-test

RKE2 vs K3s: Which One to Use

Both come from Rancher, but they target different use cases. This table captures the practical differences that matter when choosing between them.

Feature                  | RKE2                              | K3s
-------------------------|-----------------------------------|---------------------------------
Target environment       | Production, government, regulated | Edge, IoT, dev, homelab
CIS Benchmark compliance | Out of the box                    | Requires manual hardening
Default CNI              | Canal (Calico + Flannel)          | Flannel
Default ingress          | Ingress NGINX                     | Traefik
Default storage          | None (bring your own)             | local-path-provisioner
Datastore                | Embedded etcd                     | SQLite (single), etcd (HA)
FIPS 140-2 support       | Yes                               | No
Idle memory usage        | ~1.7 GB                           | ~500 MB
SELinux support          | Native (rke2-selinux package)     | Native (k3s-selinux package)

Pick K3s when resources are tight or you want the fastest path to a running cluster. Pick RKE2 when compliance, network policies, or production hardening matter more than minimal footprint. For a deeper look at K3s, see our K3s quickstart guide.

Troubleshooting

The most common failure on Rocky Linux 10 is network-related, caused by the missing kernel module discussed earlier. Here’s what it looks like and how to fix it.

Canal CNI fails: “Unable to create token for CNI kubeconfig”

If you started RKE2 without rebooting after the install, the Canal pods will fail with timeout errors trying to reach the kube-apiserver. The rke2-canal pods show CrashLoopBackOff, and the journal logs include messages about failing to create a CNI kubeconfig token.

The root cause is the br_netfilter kernel module not being loaded. Kubernetes networking requires this module for iptables to process bridged traffic between pods. Check whether it’s loaded:

lsmod | grep br_netfilter

If the output is empty, the module is not available in your running kernel. The fix is straightforward: reboot into the newer kernel that RKE2 installed, then restart the service.

sudo reboot

After the reboot, RKE2 starts automatically (since you enabled the service). Give it 2 to 3 minutes and check the pods again. All Canal pods should transition to Running.

Node shows NotReady after initial start

The node stays in NotReady state until the CNI plugin initializes successfully. If you check kubectl get nodes within the first 2 minutes, NotReady is expected. Wait for the Canal pods to finish starting. Once rke2-canal shows 2/2 Running, the node transitions to Ready.

If the node stays NotReady for more than 5 minutes, check the kubelet logs:

sudo journalctl -u rke2-server --no-pager | grep -iE "not ready|cni|network"

Common causes include the br_netfilter issue above, firewall rules blocking pod CIDR traffic, or SELinux denials on new container paths. Check for SELinux denials with:

sudo ausearch -m avc -ts recent

The rke2-selinux package handles most policy requirements automatically, but custom workloads mounting host paths may trigger denials. Address those with targeted semanage fcontext rules rather than disabling SELinux.
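A hypothetical example of such a rule, assuming a pod mounts the host path /data/appvol (the path is illustrative; the commands only run where the SELinux tooling is installed):

```shell
# Label a host directory so containers may read and write it,
# instead of disabling SELinux cluster-wide.
path="/data/appvol"
if command -v semanage >/dev/null 2>&1; then
  sudo semanage fcontext -a -t container_file_t "${path}(/.*)?"
  sudo restorecon -Rv "$path"
fi
```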

No default StorageClass

Unlike K3s, RKE2 does not ship a default storage provisioner. If you deploy a workload with a PersistentVolumeClaim, it will stay in Pending because no StorageClass exists to fulfill it. For single-node setups, the local-path-provisioner from Rancher works well. For production, consider Longhorn, Rook-Ceph, or your cloud provider’s CSI driver.
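A sketch of installing local-path-provisioner and marking it the default StorageClass; pin a release tag rather than master for anything long-lived. The commands are guarded so they only run when a cluster is actually reachable:

```shell
# Install Rancher's local-path-provisioner and make it the default
# StorageClass so PVCs bind without an explicit storageClassName.
manifest="https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml"
if kubectl get --raw /readyz >/dev/null 2>&1; then
  kubectl apply -f "$manifest"
  kubectl patch storageclass local-path \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
fi
```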

For reference on kubectl commands and cluster management, check our kubectl cheat sheet. The RKE2 GitHub repository tracks issues and releases if you run into edge cases not covered here.
