
Kubernetes Storage Basics with K3s and RKE2

Containers are ephemeral by design. When a pod dies, everything inside it vanishes. That works fine for stateless web frontends, but the moment you run a database, a message queue, or anything that needs to remember state across restarts, you need persistent storage. Kubernetes solves this with a layered abstraction: PersistentVolumes, PersistentVolumeClaims, and StorageClasses. The concepts are simple once you see them in practice.

Original content from computingforgeeks.com - post 165153

This guide walks through Kubernetes storage from scratch using K3s and RKE2, the two lightweight Kubernetes distributions from Rancher. We cover the core concepts, show how K3s ships with storage ready to go while RKE2 requires manual setup, create a PVC, mount it in a pod, build a StatefulSet, and tackle a real SELinux issue on Rocky Linux that blocks storage provisioning.

Verified working: March 2026 on Ubuntu 24.04 (K3s v1.35.3) and Rocky Linux 10.1 (RKE2 v1.35.3, SELinux enforcing)

Storage Concepts in Kubernetes

Before touching any YAML, it helps to understand the three objects that make Kubernetes storage work.

PersistentVolume (PV) is the actual storage resource in the cluster. Think of it as a disk that exists independently of any pod. It can be a local directory on a node, an NFS share, a cloud block device, or a Ceph volume. PVs have a lifecycle separate from pods, which is the whole point.

PersistentVolumeClaim (PVC) is a request for storage. A pod doesn’t reference a PV directly. Instead, it creates a PVC asking for a certain size and access mode, and Kubernetes matches it to an available PV. This decouples the “I need 10Gi of storage” request from the “here’s a specific disk” implementation.

StorageClass automates PV creation. Without a StorageClass, an admin must manually create PVs before anyone can claim them. With a StorageClass, Kubernetes dynamically provisions a PV whenever a PVC is created. This is what makes storage practical in real clusters.

Two other settings matter in practice:

  • Access Modes define how many nodes can mount the volume simultaneously. ReadWriteOnce (RWO) allows one node at a time, which is the most common for local storage. ReadWriteMany (RWX) allows multiple nodes, but requires a shared filesystem like NFS or Longhorn
  • Reclaim Policy controls what happens to the PV when its PVC is deleted. Delete removes the data along with the PV. Retain keeps the data for manual cleanup. For local development storage, Delete is standard
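
Both settings live on the StorageClass itself. As a sketch, a hypothetical class that keeps data after PVC deletion could look like this (the name local-path-retain is invented for illustration; only the provisioner string is the real one K3s uses):

```yaml
# Hypothetical StorageClass that retains volumes after their PVC is deleted.
# The class name is invented for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain
provisioner: rancher.io/local-path
reclaimPolicy: Retain                    # keep the PV and its data for manual cleanup
volumeBindingMode: WaitForFirstConsumer  # provision only when a pod actually needs it
```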

K3s Ships with Storage Ready

One of the reasons K3s is popular for learning and edge deployments is that it includes the local-path-provisioner out of the box. No extra installation, no configuration. The moment K3s is running, you can create PVCs and they just work.

Confirm the StorageClass exists on your K3s cluster:

kubectl get storageclass

You should see local-path listed as the default:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  4d

The key details here: the provisioner is rancher.io/local-path, the reclaim policy is Delete, and the volume binding mode is WaitForFirstConsumer. That last setting means the PV won’t be created until a pod actually needs it, which ensures the volume lands on the same node as the pod.

Install Storage on RKE2

RKE2 takes a different approach. It ships as a more production-oriented distribution and does not include a default StorageClass. If you try to create a PVC on a fresh RKE2 cluster, it will sit in Pending state indefinitely because nothing knows how to provision the volume.

Check for yourself:

kubectl get storageclass

On a fresh RKE2 installation, the output is empty:

No resources found

Install the same local-path-provisioner that K3s uses. Version 0.0.30 is the latest stable release as of March 2026:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml

The provisioner deploys into the local-path-storage namespace. Verify it’s running:

kubectl -n local-path-storage get pod

You should see the provisioner pod in Running state:

NAME                                      READY   STATUS    RESTARTS   AGE
local-path-provisioner-6c5764b6d4-k7m2w   1/1     Running   0          35s

Now set local-path as the default StorageClass so PVCs don’t need to explicitly name it:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Confirm the StorageClass now shows as default:

kubectl get storageclass

The output should show (default) next to local-path:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m
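
By default the provisioner writes volume directories under /opt/local-path-provisioner on each node. That path is not hard-coded; it comes from the provisioner's ConfigMap. The upstream deploy manifest ships a config.json roughly like the excerpt below (verify against the version you installed before relying on the exact shape):

```yaml
# Excerpt of the local-path-config ConfigMap from the upstream deploy manifest
# (structure may differ slightly between provisioner versions)
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"]
        }
      ]
    }
```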

Create a PersistentVolumeClaim

With a StorageClass in place (whether from K3s default or manual installation on RKE2), you can create a PVC. This requests 1Gi of storage with ReadWriteOnce access.

Create the PVC manifest:

vi test-pvc.yaml

Add the following content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi

Apply it:

kubectl apply -f test-pvc.yaml

Check the PVC status:

kubectl get pvc test-pvc

Because the binding mode is WaitForFirstConsumer, the PVC stays in Pending until a pod references it:

NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Pending                                      local-path     10s

This is normal. The PV gets created only when a pod is scheduled that needs this claim.
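
When a pod does get scheduled (next section), the provisioner creates the PV automatically. Its exact shape varies by provisioner version, but it is essentially a hostPath volume pinned to one node via node affinity. An illustrative sketch, not literal cluster output:

```yaml
# Illustrative shape of a dynamically provisioned local-path PV (not literal output)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-a1b2c3d4-5678-90ab-cdef-1234567890ab   # generated name; matches the PVC's VOLUME column
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-a1b2c3d4-5678-90ab-cdef-1234567890ab_default_test-pvc
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1   # hypothetical node name: the node where the pod landed
```

The nodeAffinity block is why local-path volumes are ReadWriteOnce in practice: the data only exists on one node's filesystem.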

Mount Storage in a Pod

Now create a pod that mounts the PVC, writes some data, and proves the storage works.

vi test-pod.yaml

Add this pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: busybox
      image: busybox:latest
      command: ["sh", "-c", "echo 'Hello from persistent storage' > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: test-storage
          mountPath: /data
  volumes:
    - name: test-storage
      persistentVolumeClaim:
        claimName: test-pvc

Apply the pod:

kubectl apply -f test-pod.yaml

Wait a few seconds, then check both the pod and PVC:

kubectl get pod test-pod
kubectl get pvc test-pvc

The PVC should now show Bound and the pod should be Running:

NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-a1b2c3d4-5678-90ab-cdef-1234567890ab   1Gi        RWO            local-path     2m

NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          45s

Verify the data was written:

kubectl exec test-pod -- cat /data/hello.txt

The output confirms persistent storage is working:

Hello from persistent storage

To prove persistence, delete the pod and recreate it. The data survives because it lives on the PV, not inside the container:

kubectl delete pod test-pod
kubectl apply -f test-pod.yaml

After the pod starts again, read the file:

kubectl exec test-pod -- cat /data/hello.txt

Still there. That’s the entire point of persistent storage.

StatefulSet with Volume Claim Templates

Pods are disposable. For workloads that need stable storage tied to a specific identity (databases, Kafka brokers, Elasticsearch nodes), Kubernetes provides StatefulSets. Each replica in a StatefulSet gets its own PVC that follows it across rescheduling.

Here’s a minimal StatefulSet that creates three replicas, each with its own 2Gi volume:

vi statefulset-demo.yaml

Add this manifest:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          volumeMounts:
            - name: www-data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-path
        resources:
          requests:
            storage: 2Gi
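
One detail the manifest above leaves out: serviceName: "web" refers to a headless Service that gives each replica a stable DNS name (web-0.web, web-1.web, and so on). The StatefulSet does not create it for you, so apply a minimal one alongside. A sketch, with the port matching stock nginx:

```yaml
# Minimal headless Service (clusterIP: None) backing the StatefulSet's per-pod DNS
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None       # headless: no load-balanced VIP, just per-pod DNS records
  selector:
    app: web
  ports:
    - name: http
      port: 80
```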

Apply it and watch the pods come up in order:

kubectl apply -f statefulset-demo.yaml
kubectl get pods -w -l app=web

StatefulSet pods are created sequentially (web-0, then web-1, then web-2), and each gets a dedicated PVC:

kubectl get pvc -l app=web

Three separate PVCs, one per replica:

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-data-web-0   Bound    pvc-11111111-2222-3333-4444-555555555555   2Gi        RWO            local-path     60s
www-data-web-1   Bound    pvc-66666666-7777-8888-9999-aaaaaaaaaaaa   2Gi        RWO            local-path     45s
www-data-web-2   Bound    pvc-bbbbbbbb-cccc-dddd-eeee-ffffffffffff   2Gi        RWO            local-path     30s

If web-1 is deleted and rescheduled, it reattaches to the same PVC (www-data-web-1) and picks up right where it left off. This is what makes StatefulSets the right choice for anything stateful.
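
The PVC names are predictable, which is handy when scripting cleanup or backups: each claim is named <template name>-<StatefulSet name>-<ordinal>. A trivial shell sketch reproducing the three names above:

```shell
# PVCs created from volumeClaimTemplates are named <template>-<statefulset>-<ordinal>
sts=web
tmpl=www-data
for ordinal in 0 1 2; do
  echo "${tmpl}-${sts}-${ordinal}"
done
# prints www-data-web-0, www-data-web-1, www-data-web-2
```

Note that deleting the StatefulSet does not delete these PVCs; they must be removed explicitly if you no longer want the data.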

SELinux and Storage on Rocky Linux

One issue catches most people off guard when running RKE2 on Rocky Linux with SELinux enforcing. The local-path-provisioner stores volumes under /opt/local-path-provisioner by default, and when the helper pod tries to create that directory, SELinux blocks the write because the directory lacks a container-accessible context.

The error shows up in the provisioner logs:

kubectl -n local-path-storage logs deployment/local-path-provisioner

You’ll see something like:

time="2026-03-28T14:22:01Z" level=error msg="Failed to create volume" err="helper pod failed: mkdir /opt/local-path-provisioner: permission denied"

Confirm SELinux is the culprit by checking the audit log on the RKE2 node:

sudo ausearch -m avc -ts recent | grep local-path

You’ll find an AVC denial for the container process trying to write to /opt/local-path-provisioner.

The fix is to create the directory manually and apply the correct SELinux context so container processes can read and write to it:

sudo mkdir -p /opt/local-path-provisioner

Set the SELinux file context for the provisioner directory:

sudo semanage fcontext -a -t container_file_t "/opt/local-path-provisioner(/.*)?"
sudo restorecon -Rv /opt/local-path-provisioner

Verify the context was applied:

ls -Zd /opt/local-path-provisioner

The output should show container_file_t:

unconfined_u:object_r:container_file_t:s0 /opt/local-path-provisioner

After applying the context, retry the PVC creation. The provisioner should now be able to create subdirectories and bind volumes without SELinux interference. Never disable SELinux to work around this. The container_file_t context is the correct, targeted fix that keeps your system secure.

Beyond Local Storage

Local-path storage has one fundamental limitation: the data lives on a single node. If that node goes down, pods using local volumes can’t be rescheduled to another node without losing access to their data. For single-node clusters and development environments, that’s perfectly fine. For production workloads that need high availability, you need distributed storage.

Longhorn is the natural next step for K3s and RKE2 clusters. Built by the same Rancher team, it replicates volumes across multiple nodes, supports snapshots and backups, and integrates cleanly with both distributions. It provides ReadWriteMany access mode, automatic replica rebuilding when a node fails, and backup to S3-compatible storage.

Other options in the Kubernetes storage ecosystem include Rook-Ceph for large-scale deployments, OpenEBS for container-attached storage, and cloud provider CSI drivers (EBS, Persistent Disk, Azure Disk) if you’re running in a public cloud. Each trades complexity for capability.

For a practical reference on kubectl commands for managing these resources, including inspecting PVs, describing PVCs, and debugging pod volume mounts, keep the kubectl cheat sheet bookmarked. Storage troubleshooting in Kubernetes almost always starts with kubectl describe pvc and kubectl describe pod to see where the binding or mounting failed.

What port does the local-path-provisioner use?

The local-path-provisioner doesn’t expose any network ports. It runs as a Kubernetes controller that watches for PVC events and creates helper pods to provision local directories on the node’s filesystem. There is no service endpoint to configure or firewall port to open.

Can I use local-path storage for databases in production?

Only on single-node clusters where you accept the risk of node failure causing downtime. For multi-node production databases, use Longhorn or a cloud CSI driver that replicates data across nodes. The database itself may handle replication (PostgreSQL streaming replication, MySQL Group Replication), but the storage layer should still be resilient.

Clean up the test resources when you’re done experimenting:

kubectl delete -f test-pod.yaml
kubectl delete -f test-pvc.yaml
kubectl delete -f statefulset-demo.yaml

With storage sorted out, you may want to explore Kubernetes networking with Services and Ingress to expose your workloads, or set up a production RKE2 HA cluster if you’re still running a single node.
