How To Expand PVC in OpenShift with ODF Storage

OpenShift Data Foundation (ODF) – formerly OpenShift Container Storage (OCS) – provides Ceph-backed persistent storage for OpenShift clusters. As workloads grow, the persistent volume claims (PVCs) backing your databases, registries, and stateful applications need more space. The good news is that ODF supports online volume expansion, so you can resize PVCs without downtime or pod restarts in most cases.


This guide walks through expanding PVCs on OpenShift with ODF storage. We cover checking StorageClass settings, expanding via oc patch and YAML edit, handling StatefulSet PVCs, and troubleshooting common expansion failures. All steps work on OpenShift 4.12+ with OpenShift Data Foundation 4.12 or later.

Prerequisites

  • OpenShift 4.12+ cluster with ODF (OpenShift Data Foundation) deployed and healthy
  • Cluster admin or namespace admin access with oc CLI configured
  • PVCs provisioned by ODF StorageClasses (ocs-storagecluster-ceph-rbd or ocs-storagecluster-cephfs)
  • Sufficient free capacity in the Ceph cluster to accommodate the expansion

Step 1: Check if StorageClass Allows Volume Expansion

Before expanding any PVC, confirm that the StorageClass has allowVolumeExpansion set to true. Without this field, the API server rejects resize requests. List all StorageClasses and check the ALLOWVOLUMEEXPANSION column.

oc get sc

The output shows each StorageClass with its provisioner and expansion support status:

NAME                                    PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd             openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   90d
ocs-storagecluster-cephfs (default)     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   90d
openshift-storage.noobaa.io             openshift-storage.noobaa.io/obc         Delete          Immediate              false                  90d

If ALLOWVOLUMEEXPANSION shows false for your StorageClass, enable it. Most StorageClass fields are immutable in Kubernetes, but allowVolumeExpansion can be updated in place with a patch:

oc patch sc ocs-storagecluster-ceph-rbd -p '{"allowVolumeExpansion": true}'

If you prefer to manage the StorageClass declaratively, you can instead export, delete, and recreate it with the field added. Export the current StorageClass definition first.

oc get sc ocs-storagecluster-ceph-rbd -o yaml > ocs-storagecluster-ceph-rbd.yaml

Edit the file and add allowVolumeExpansion: true at the top level (same level as apiVersion). Then delete and recreate the StorageClass:

oc delete sc ocs-storagecluster-ceph-rbd
oc apply -f ocs-storagecluster-ceph-rbd.yaml

Verify the StorageClass now allows expansion:

oc get sc ocs-storagecluster-ceph-rbd -o jsonpath='{.allowVolumeExpansion}'

The output should return true. Existing PVCs using this StorageClass can now be expanded.
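To spot every StorageClass that still blocks expansion at a glance, the table output can be filtered with a small awk helper. This is a sketch that assumes the column layout shown above; the function name is illustrative:

```shell
# Print StorageClasses that do NOT allow volume expansion, by parsing
# the table output of `oc get sc`. ALLOWVOLUMEEXPANSION is the
# second-to-last column, so this also handles the extra "(default)"
# token that appears in the NAME column of the default StorageClass.
non_expandable_storageclasses() {
  awk 'NR > 1 && $(NF-1) == "false" { print $1 }'
}

# Usage: oc get sc | non_expandable_storageclasses
```

Any name this prints needs the allowVolumeExpansion fix above before its PVCs can be resized.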

Step 2: Check Current PVC Size

List PVCs in your namespace to identify the one that needs expansion and its current capacity.

oc get pvc -n my-project

The output shows PVC name, status, bound volume, current capacity, and access mode:

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
data-postgres-0     Bound    pvc-a3b1c2d4-5e6f-7890-abcd-ef1234567890   10Gi       RWO            ocs-storagecluster-ceph-rbd   30d
app-uploads         Bound    pvc-f1e2d3c4-b5a6-7890-1234-567890abcdef   20Gi       RWX            ocs-storagecluster-cephfs     45d

For detailed information about a specific PVC, including its events, conditions, and the pods using it, describe it:

oc describe pvc data-postgres-0 -n my-project

Look for the Capacity field in the output. This is what the expansion will change.
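When a namespace holds many claims, it helps to narrow the listing down to those on one StorageClass. The helper below is a sketch that parses the `oc get pvc` column layout shown above (STORAGECLASS is the second-to-last column); the function name is illustrative:

```shell
# Print name and capacity of PVCs bound to a given StorageClass,
# parsing `oc get pvc` table output (STORAGECLASS is second-to-last).
pvcs_on_storageclass() {
  local sc=$1
  awk -v sc="$sc" 'NR > 1 && $(NF-1) == sc { print $1, $4 }'
}

# Usage: oc get pvc -n my-project | pvcs_on_storageclass ocs-storagecluster-ceph-rbd
```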

Step 3: Expand PVC with oc patch

The fastest way to expand a PVC is with oc patch. This method works well for scripted and automated workflows. The following command increases the PVC data-postgres-0 from its current size to 50Gi.

oc patch pvc data-postgres-0 -n my-project -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'

The command returns confirmation that the PVC was patched:

persistentvolumeclaim/data-postgres-0 patched

You can only increase PVC size – Kubernetes does not support shrinking volumes. Attempting to set a smaller size than the current capacity results in an error. The new size must always be larger than the existing spec.resources.requests.storage value.
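Because shrink requests are rejected, a script that drives oc patch can guard against them up front. A minimal sketch (the helper names are illustrative) that converts binary-suffix Kubernetes quantities to bytes and only proceeds when the new size is strictly larger:

```shell
# Convert a binary-suffix Kubernetes quantity (Ki/Mi/Gi/Ti) to bytes.
to_bytes() {
  case $1 in
    *Ki) echo $(( ${1%Ki} * 1024 )) ;;
    *Mi) echo $(( ${1%Mi} * 1024 * 1024 )) ;;
    *Gi) echo $(( ${1%Gi} * 1024 * 1024 * 1024 )) ;;
    *Ti) echo $(( ${1%Ti} * 1024 * 1024 * 1024 * 1024 )) ;;
    *)   echo "$1" ;;   # already plain bytes
  esac
}

# Succeeds only if $2 is strictly larger than $1 (current < requested).
can_expand() {
  [ "$(to_bytes "$1")" -lt "$(to_bytes "$2")" ]
}

# Example guard before patching (sizes are illustrative):
can_expand 10Gi 50Gi && echo "ok: 10Gi -> 50Gi is a valid expansion"
```

Running the oc patch only when can_expand succeeds avoids the forbidden-shrink error entirely.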

Step 4: Expand PVC via YAML Edit

For a more interactive approach, edit the PVC directly. This is useful when you want to review the full PVC spec before making changes.

oc edit pvc data-postgres-0 -n my-project

In the editor, locate the spec.resources.requests.storage field and change it to the new size:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: ocs-storagecluster-ceph-rbd
  volumeName: pvc-a3b1c2d4-5e6f-7890-abcd-ef1234567890

Save and exit the editor. OpenShift immediately begins the volume expansion process. You can also expand PVCs from the OpenShift web console by navigating to Storage > PersistentVolumeClaims, selecting the PVC, and clicking Actions > Expand PVC.

Set the desired capacity in the dialog and confirm.

Step 5: Verify Filesystem Expansion

After patching the PVC, check that the expansion completed successfully. ODF handles the underlying Ceph volume resize and filesystem expansion automatically for both CephFS and RBD volumes.

oc get pvc data-postgres-0 -n my-project

The CAPACITY column should reflect the new size:

NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
data-postgres-0   Bound    pvc-a3b1c2d4-5e6f-7890-abcd-ef1234567890   50Gi       RWO            ocs-storagecluster-ceph-rbd   30d

Check the PVC conditions to confirm the resize completed without errors:

oc get pvc data-postgres-0 -n my-project -o jsonpath='{.status.conditions[*].type}'

If the expansion is still in progress, you will see a FileSystemResizePending condition. Once complete, the conditions field will be empty or show no resize-related entries. To verify the filesystem inside the pod reflects the new size, exec into the pod and run df.

oc exec -it deploy/postgres -n my-project -- df -h /var/lib/postgresql/data

The output confirms the filesystem sees the expanded volume:

Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        50G  8.2G   42G  17% /var/lib/postgresql/data
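In automation it is useful to block until the reported capacity matches the requested size. The sketch below takes the capacity-reading command as arguments, so the loop itself has no hard oc dependency; the oc invocation in the usage comment assumes the PVC and namespace from this example:

```shell
# Poll a capacity-reading command until it reports the wanted size.
# Arguments: wanted size, number of attempts, delay in seconds, then
# the command that prints the current capacity.
wait_for_capacity() {
  want=$1; tries=$2; delay=$3; shift 3
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$("$@")" = "$want" ] && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Usage against a live cluster (PVC and namespace from the example above):
# wait_for_capacity 50Gi 30 10 \
#   oc get pvc data-postgres-0 -n my-project -o jsonpath='{.status.capacity.storage}'
```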

Step 6: Expand PVCs for StatefulSets

StatefulSets create one PVC per replica from a volumeClaimTemplate. You cannot resize them by editing the StatefulSet spec: the volumeClaimTemplates field is immutable, and template changes are never propagated to existing PVCs anyway. You must patch each PVC individually. If you have a Redis StatefulSet cluster with three replicas, each has its own PVC.

First, list the PVCs belonging to the StatefulSet:

oc get pvc -n my-project -l app=redis

The output shows one PVC per StatefulSet replica:

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
data-redis-0        Bound    pvc-11111111-2222-3333-4444-555555555555   5Gi        RWO            ocs-storagecluster-ceph-rbd   60d
data-redis-1        Bound    pvc-66666666-7777-8888-9999-aaaaaaaaaaaa   5Gi        RWO            ocs-storagecluster-ceph-rbd   60d
data-redis-2        Bound    pvc-bbbbbbbb-cccc-dddd-eeee-ffffffffffff   5Gi        RWO            ocs-storagecluster-ceph-rbd   60d

Patch all PVCs in a loop to expand them to the same size:

for i in 0 1 2; do
  oc patch pvc data-redis-$i -n my-project -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
done

Each PVC is patched independently. Verify all three expanded successfully:

oc get pvc -n my-project -l app=redis -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage

All PVCs should show the new 20Gi capacity. To make future replicas start at the new size, also update the volumeClaimTemplate in the StatefulSet spec. Because volumeClaimTemplates is immutable, this requires deleting the StatefulSet with --cascade=orphan (pods and PVCs keep running) and recreating it from an updated manifest. This change only affects newly created PVCs – existing ones are already resized by the patch commands above.
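The template update can be scripted. This sketch assumes a StatefulSet named redis in my-project; the function name is illustrative and the sed substitution is deliberately narrow, so adjust the sizes to match your manifest:

```shell
# Recreate a StatefulSet with a larger volumeClaimTemplate size.
# volumeClaimTemplates is immutable, so the object must be replaced;
# --cascade=orphan keeps the pods and PVCs running during the swap.
bump_sts_template() {
  sts=$1; ns=$2; old=$3; new=$4
  oc get statefulset "$sts" -n "$ns" -o yaml > "$sts.yaml"
  sed -i "s/storage: $old/storage: $new/" "$sts.yaml"
  oc delete statefulset "$sts" -n "$ns" --cascade=orphan
  oc apply -f "$sts.yaml"
}

# Usage: bump_sts_template redis my-project 5Gi 20Gi
```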

Step 7: CephFS vs Ceph RBD Expansion Differences

ODF provides two main storage backends, and they handle PVC expansion differently. Understanding these differences helps you plan for maintenance windows if needed. Both backends are managed by Ceph under ODF, but the expansion behavior varies based on how the volume is attached to pods. If you are new to Ceph RBD persistent storage or CephFS storage for Kubernetes, review those guides first.

Feature            | CephFS (ocs-storagecluster-cephfs) | Ceph RBD (ocs-storagecluster-ceph-rbd)
Access modes       | RWO, RWX                           | RWO, RWX (block)
Online expansion   | Yes – immediate, no pod restart    | Yes – filesystem resize happens on next mount or online
Filesystem resize  | Automatic (CephFS quota update)    | Automatic (ext4/xfs online resize)
Pod restart needed | No                                 | Usually no (CSI handles online resize)
Best for           | Shared storage, RWX workloads      | Databases, single-writer workloads

CephFS expansion is essentially instant because it updates a Ceph quota – no block device or filesystem resize is needed. RBD expansion involves resizing the Ceph block device and then expanding the filesystem (ext4 or xfs) on top of it. The CSI driver handles the filesystem resize automatically when the volume is mounted. In rare cases where the filesystem resize does not trigger automatically, deleting the pod forces a remount which completes the expansion.

Step 8: Troubleshoot PVC Expansion Issues

PVC expansion can fail for several reasons. Here are the most common issues and their fixes.

StorageClass does not allow expansion

If you see this error when patching a PVC, the StorageClass lacks allowVolumeExpansion: true:

error: persistentvolumeclaims "data-postgres-0" could not be patched: persistentvolumeclaims "data-postgres-0" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize

Fix this by following Step 1 to enable volume expansion on the StorageClass.

FileSystemResizePending condition stuck

For RBD volumes, the Ceph block device may be resized but the filesystem expansion waits for the next pod mount. Check the PVC conditions:

oc describe pvc data-postgres-0 -n my-project

If you see FileSystemResizePending in the conditions, the filesystem resize has not happened yet. Delete the pod to trigger a remount:

oc delete pod postgres-0 -n my-project

The StatefulSet or Deployment controller recreates the pod, and the CSI driver resizes the filesystem during the mount phase.

Insufficient Ceph capacity

If the Ceph cluster is running low on space, the volume expansion fails silently or stays pending. Check Ceph health from the ODF toolbox pod. You can monitor your Ceph cluster with Prometheus and Grafana to catch capacity issues early.

oc exec -n openshift-storage $(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name) -- ceph status

Look for HEALTH_WARN or HEALTH_ERR related to capacity. The usage section shows total and available space. If the cluster is above 80% usage, add more OSDs or clean up unused volumes before attempting expansion.
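For scripting, the raw usage figure can be pulled straight out of the toolbox output. This sketch assumes the RAW STORAGE table layout of recent Ceph releases, where %RAW USED is the last column of the TOTAL line; the function names and 80% threshold (from the guidance above) are illustrative:

```shell
# Extract the cluster-wide %RAW USED value from `ceph df` output.
raw_used_pct() {
  awk '$1 == "TOTAL" { print $NF; exit }'
}

# Warn when raw usage crosses the 80% threshold mentioned above.
check_capacity() {
  pct=$(raw_used_pct)
  # Compare as integers; ceph prints values like 20.00.
  if [ "${pct%.*}" -ge 80 ]; then
    echo "WARNING: Ceph raw usage at ${pct}% - expand cluster before resizing PVCs"
    return 1
  fi
  echo "Ceph raw usage at ${pct}% - OK to expand"
}

# Usage:
# oc exec -n openshift-storage $(oc get pod -n openshift-storage \
#   -l app=rook-ceph-tools -o name) -- ceph df | check_capacity
```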

PVC expansion events and logs

Check events on the PVC for detailed expansion status and error messages:

oc get events -n my-project --field-selector involvedObject.name=data-postgres-0 --sort-by='.lastTimestamp'

For CSI driver-level issues, check the logs of the Ceph CSI provisioner and resizer pods in the openshift-storage namespace:

oc logs -n openshift-storage -l app=csi-rbdplugin-provisioner --tail=50

These logs show whether the Ceph resize RPC succeeded and whether the filesystem expansion was triggered. If you manage container images in your cluster with Harbor registry on OpenShift, the registry PVC is a common candidate for expansion as image layers accumulate.

Conclusion

Expanding PVCs on OpenShift with ODF storage is straightforward once the StorageClass has allowVolumeExpansion enabled. Both CephFS and Ceph RBD support online expansion through the CSI driver, with CephFS being slightly simpler since it uses quota updates rather than block device resizing. For production clusters, always verify the Ceph cluster has sufficient capacity before expanding volumes, and check that the filesystem inside the pod reflects the new size after expansion.

For large-scale clusters, consider automating PVC expansion with a custom operator or scheduled job, and monitoring PVC usage with Prometheus alerts so expansions happen before applications run out of space.
