LVM on Linux – Create, Extend, Snapshot, and Manage Logical Volumes

Most Linux servers eventually run out of space on a partition that was sized too small at install time. With traditional partitions, fixing this means downtime, backup dances, and repartitioning. LVM (Logical Volume Manager) eliminates that problem entirely. It lets you resize volumes while the filesystem is mounted, add new disks to an existing pool without reformatting, and take instant snapshots before risky changes.

Original content from computingforgeeks.com - post 115472

This guide covers every practical LVM operation: creating volumes from scratch, extending them online, shrinking ext4 volumes, adding new disks to an existing volume group, snapshotting and restoring, and tearing everything down cleanly. Every command was tested on a live system with real output captured. If you manage databases on these volumes, the PostgreSQL and MariaDB backup with PITR guide covers the data protection side.

Verified working: April 2026 on Rocky Linux 9.5 (LVM 2.03.32). Commands are identical on Rocky Linux 10, AlmaLinux, RHEL 9/10, Ubuntu 24.04, and Debian 13.

How LVM Works

LVM sits between your physical disks and the filesystems you mount. It adds a layer of abstraction that makes storage flexible:

  • Physical Volumes (PV) are raw disks or partitions initialized for LVM. Think of them as the building blocks.
  • Volume Groups (VG) pool one or more PVs into a single storage bucket. A VG can span multiple physical disks.
  • Logical Volumes (LV) are carved out of a VG. Each LV behaves like a partition: you format it with a filesystem, mount it, and use it. The difference is that you can resize it on the fly.

The data flow looks like this: Physical Disks → PV → VG → LV → Filesystem → Mount Point. You can add more PVs to a VG at any time, then extend existing LVs into the new space without unmounting anything.

Prerequisites

  • Rocky Linux 10/9, AlmaLinux, RHEL 10/9, Ubuntu 24.04, or Debian 13
  • Root or sudo access
  • One or more unused disks, partitions, or loopback devices for testing
  • The lvm2 package installed

Install the LVM tools if not already present:

sudo dnf install -y lvm2

On Ubuntu/Debian:

sudo apt install -y lvm2

Create Physical Volumes

Physical volumes are the raw storage that LVM manages. You can use whole disks (/dev/sdb), partitions (/dev/sdb1), or loopback devices for testing. In this guide we use /dev/sdb and /dev/sdc as the target disks. Replace these with your actual device names.

First, identify your available disks:

lsblk

Look for disks that have no mount point and no partitions. Never initialize a disk that contains data you need.

Initialize the disks as physical volumes:

sudo pvcreate /dev/sdb /dev/sdc

Confirm the PVs were created:

sudo pvs

The output shows both disks ready for LVM:

  PV         VG Fmt  Attr PSize   PFree
  /dev/sdb      lvm2 ---  500.00m 500.00m
  /dev/sdc      lvm2 ---  500.00m 500.00m

For detailed information on a specific PV, use pvdisplay:

sudo pvdisplay /dev/sdb

Create a Volume Group

A volume group pools multiple physical volumes into one storage space. Choose a descriptive name. In production you might use names like vg_data, vg_postgres, or vg_docker.

sudo vgcreate data_vg /dev/sdb /dev/sdc

The combined capacity is now available:

sudo vgs

Output confirms both PVs are pooled:

  VG      #PV #LV #SN Attr   VSize   VFree
  data_vg   2   0   0 wz--n- 992.00m 992.00m

The VG has ~992 MB usable (a small amount is reserved for LVM metadata on each PV). For full details, use vgdisplay data_vg.
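The 992 MB figure can be sanity-checked with shell arithmetic: each PV gives up roughly one extent's worth of space (4 MiB here) to LVM metadata before extents are carved out. The overhead figure below is an approximation for this example, not an exact LVM constant:

```shell
# Back-of-the-envelope check of usable VG size: two 500 MiB PVs, each
# losing ~4 MiB to LVM metadata (default extent size is 4 MiB).
pv_size_mib=500
metadata_mib=4
pv_count=2
usable=$(( pv_count * (pv_size_mib - metadata_mib) ))
echo "${usable} MiB usable"   # prints "992 MiB usable"
```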

Create Logical Volumes

Logical volumes are what you actually format and mount. The -n flag sets the name, -L sets the size. You do not need to allocate all the VG space at once. Leaving free space in the VG makes it easy to extend volumes later or create snapshots.

sudo lvcreate -n app_lv -L 400M data_vg
sudo lvcreate -n logs_lv -L 200M data_vg

Verify both volumes were created:

sudo lvs

Both LVs appear with their sizes:

  LV      VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  app_lv  data_vg -wi-a----- 400.00m
  logs_lv data_vg -wi-a----- 200.00m

The LV device paths follow the pattern /dev/VG_NAME/LV_NAME. In this case: /dev/data_vg/app_lv and /dev/data_vg/logs_lv. You can also use the mapper path: /dev/mapper/data_vg-app_lv.
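The mapper name is derived mechanically from the VG and LV names: device-mapper joins them with a single hyphen and escapes any hyphen inside a name as a double hyphen. A small helper sketching that translation (the function name is ours, not an LVM tool):

```shell
# Build the /dev/mapper path for a VG/LV pair. device-mapper separates the
# two names with a single "-", so a "-" inside either name is doubled.
mapper_path() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_path data_vg app_lv   # prints /dev/mapper/data_vg-app_lv
mapper_path vg-data my-lv    # prints /dev/mapper/vg--data-my--lv
```

This is why a VG named vg-data shows up as vg--data in df output: the doubled hyphen is the escape, not a typo.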

Format and Mount

Create filesystems on the logical volumes. Use XFS for general-purpose workloads (default on RHEL/Rocky) or ext4 if you need the ability to shrink the volume later (XFS cannot be reduced).

sudo mkfs.xfs /dev/data_vg/app_lv
sudo mkfs.ext4 /dev/data_vg/logs_lv

Create mount points and mount:

sudo mkdir -p /mnt/app /mnt/logs
sudo mount /dev/data_vg/app_lv /mnt/app
sudo mount /dev/data_vg/logs_lv /mnt/logs

Confirm both are mounted with the correct filesystem type:

df -hT /mnt/app /mnt/logs

Output shows both volumes active:

Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/data_vg-app_lv  xfs   336M   23M  314M   7% /mnt/app
/dev/mapper/data_vg-logs_lv ext4  182M   14K  168M   1% /mnt/logs

Make Mounts Persistent (fstab)

Without an fstab entry, the volumes will not mount after a reboot. Get the UUIDs first:

sudo blkid /dev/data_vg/app_lv /dev/data_vg/logs_lv

The output shows the UUID and filesystem type for each volume:

/dev/data_vg/app_lv: UUID="d71200c0-f3a7-4734-824d-973c9f76a2cd" TYPE="xfs"
/dev/data_vg/logs_lv: UUID="94756390-6f13-471b-97e0-3f0d4c005fb0" TYPE="ext4"

Add entries to /etc/fstab. You can use either the UUID or the LVM device path. The device path is simpler for LVM volumes since it does not change:

sudo vi /etc/fstab

Add these lines:

/dev/data_vg/app_lv   /mnt/app   xfs    defaults   0 0
/dev/data_vg/logs_lv  /mnt/logs  ext4   defaults   0 0

Test the fstab entries without rebooting:

sudo umount /mnt/app /mnt/logs
sudo mount -a
df -hT /mnt/app /mnt/logs

If both mount successfully, the fstab entries are correct.

Extend a Logical Volume (Online)

This is where LVM earns its keep. Both XFS and ext4 support online extension, meaning you grow the volume while it is mounted and in use. No downtime required.

Extend XFS Volume

Two steps: extend the LV, then grow the filesystem to fill the new space.

sudo lvextend -L +200M /dev/data_vg/app_lv

The LV is now 600 MB:

  Size of logical volume data_vg/app_lv changed from 400.00 MiB (100 extents) to 600.00 MiB (150 extents).
  Logical volume data_vg/app_lv successfully resized.

Grow the XFS filesystem to fill the extended LV:

sudo xfs_growfs /mnt/app

Confirm the filesystem now uses the full 600 MB:

df -hT /mnt/app

The available space has grown from 314M to 512M:

Filesystem                 Type  Size  Used Avail Use% Mounted on
/dev/mapper/data_vg-app_lv xfs   536M   25M  512M   5% /mnt/app

Extend ext4 Volume

For ext4, the process is similar but uses resize2fs instead of xfs_growfs:

sudo lvextend -L +100M /dev/data_vg/logs_lv
sudo resize2fs /dev/data_vg/logs_lv

Both commands work while the filesystem is mounted. The output confirms the online resize:

resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/data_vg/logs_lv is mounted on /mnt/logs; on-line resizing required
The filesystem on /dev/data_vg/logs_lv is now 307200 (1k) blocks long.

Shortcut: Extend LV and Filesystem in One Command

The -r flag tells lvextend to resize the filesystem automatically after extending the LV. This works for both XFS and ext4:

sudo lvextend -r -L +50M /dev/data_vg/logs_lv

LVM detects the filesystem type and calls the right resize tool:

  Size of logical volume data_vg/logs_lv changed from 200.00 MiB (50 extents) to 252.00 MiB (63 extents).
  Extending file system ext4 to 252.00 MiB on data_vg/logs_lv...
  Extended file system ext4 on data_vg/logs_lv.
  Logical volume data_vg/logs_lv successfully resized.

The -r flag is the recommended approach for day-to-day operations since it reduces the chance of forgetting to resize the filesystem after extending the LV.

Add a New Disk to an Existing Volume Group

When a server runs low on space, you can add a new disk to the VG without touching the existing volumes. This is the most common LVM operation in production.

Initialize the new disk as a physical volume:

sudo pvcreate /dev/sdd

Add it to the existing volume group:

sudo vgextend data_vg /dev/sdd

The VG now spans three disks:

sudo vgs

Output shows the additional capacity:

  VG      #PV #LV #SN Attr   VSize  VFree
  data_vg   3   2   0 wz--n- <1.26g 388.00m

Check which PVs belong to the VG and how much free space each has:

sudo pvs

The new disk is fully available:

  PV         VG      Fmt  Attr PSize   PFree
  /dev/sdb   data_vg lvm2 a--  496.00m      0
  /dev/sdc   data_vg lvm2 a--  496.00m  92.00m
  /dev/sdd   data_vg lvm2 a--  296.00m 296.00m

You can now extend any existing LV into the new space with lvextend -r.
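To total the free space across PVs, the PFree column can be summed with awk. The sketch below is run against the sample output above so it is self-contained; on a live system you would pipe in `sudo pvs --noheadings --units m` instead, and the pipeline assumes all values are reported in MiB:

```shell
# Sum the PFree column (field 6) of pvs output. awk's numeric coercion
# strips the trailing "m" unit automatically.
printf '%s\n' \
  '/dev/sdb   data_vg lvm2 a--  496.00m      0' \
  '/dev/sdc   data_vg lvm2 a--  496.00m  92.00m' \
  '/dev/sdd   data_vg lvm2 a--  296.00m 296.00m' |
awk '{ sum += $6 } END { printf "%.0f MiB free\n", sum }'
# prints "388 MiB free" -- matching the VFree reported by vgs
```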

LVM Snapshots

Snapshots capture the state of a logical volume at a specific moment. They are copy-on-write, so creating a snapshot is instant regardless of volume size. Snapshots are useful for backups, testing upgrades, or any situation where you want a rollback point.

Create a Snapshot

Write some test data first, then snapshot the volume:

echo "production data - do not delete" | sudo tee /mnt/app/important.txt

Create a snapshot of app_lv. The -s flag indicates a snapshot, and -L sets the snapshot size. The snapshot only needs enough space to hold the changes made to the original volume while the snapshot exists:

sudo lvcreate -s -n app_snap -L 100M /dev/data_vg/app_lv

The snapshot appears alongside regular LVs:

sudo lvs

Notice the snapshot attributes (s) and its origin volume:

  LV       VG      Attr       LSize   Pool Origin Data%  Meta%
  app_lv   data_vg owi-aos--- 600.00m
  app_snap data_vg swi-a-s--- 100.00m      app_lv 0.00
  logs_lv  data_vg -wi-ao---- 300.00m

The Data% column shows how full the snapshot is. When it reaches 100%, the snapshot becomes invalid. Size your snapshots based on the expected change rate during the snapshot’s lifetime.
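There is no single correct snapshot size, but a rough heuristic is expected write rate times the snapshot's lifetime, padded with a safety factor. All the numbers below are illustrative placeholders, not measurements:

```shell
# Rough snapshot sizing: how much data changes while the snapshot exists,
# with headroom so Data% never reaches 100 (which invalidates the snapshot).
change_mb_per_hour=20    # illustrative: measured write rate on the origin LV
lifetime_hours=2         # illustrative: how long the snapshot will live
safety_factor=3          # headroom multiplier
snap_mb=$(( change_mb_per_hour * lifetime_hours * safety_factor ))
echo "lvcreate -s -n app_snap -L ${snap_mb}M /dev/data_vg/app_lv"
```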

Restore from a Snapshot

Simulate a disaster by corrupting the data:

echo "CORRUPTED" | sudo tee /mnt/app/important.txt
cat /mnt/app/important.txt

The file now contains CORRUPTED. To restore, merge the snapshot back into the original volume:

sudo umount /mnt/app
sudo lvconvert --merge /dev/data_vg/app_snap

The merge starts immediately:

  Merging of volume data_vg/app_snap started.
  data_vg/app_lv: Merged: 100.00%

Deactivate and reactivate the LV to complete the merge, then remount:

sudo lvchange -an /dev/data_vg/app_lv
sudo lvchange -ay /dev/data_vg/app_lv
sudo mount /dev/data_vg/app_lv /mnt/app

Verify the data is restored:

cat /mnt/app/important.txt

The original content is back:

production data - do not delete

The snapshot is consumed during the merge. It no longer exists after the restore.

Reduce a Logical Volume (ext4 Only)

XFS does not support shrinking. If you need to reduce a volume, it must use ext4 (or ext3). Reducing is a three-step process: unmount, shrink the filesystem, then shrink the LV. Order matters. Shrinking the LV first would corrupt the filesystem.

sudo umount /mnt/logs

Check the filesystem for errors (required before resize):

sudo e2fsck -fy /dev/data_vg/logs_lv

The filesystem check passes:

e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/data_vg/logs_lv: 11/77824 files (0.0% non-contiguous), 25715/307200 blocks

Shrink the filesystem to the target size first:

sudo resize2fs /dev/data_vg/logs_lv 150M

Then shrink the LV to match:

sudo lvreduce -L 150M /dev/data_vg/logs_lv -y

LVM sizes volumes in whole extents (4 MiB by default), and lvreduce rounds the requested size up to the next extent boundary so the LV never ends up smaller than the filesystem inside it:

  Rounding size to boundary between physical extents: 152.00 MiB.
  Size of logical volume data_vg/logs_lv changed from 300.00 MiB (75 extents) to 152.00 MiB (38 extents).
  Logical volume data_vg/logs_lv successfully resized.
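The rounding is easy to reproduce with ceiling division, assuming the default 4 MiB extent size:

```shell
# Reproduce lvreduce's extent rounding for the 150 MB request above:
# sizes land on whole-extent boundaries, rounded up when reducing.
requested_mib=150
extent_mib=4
extents=$(( (requested_mib + extent_mib - 1) / extent_mib ))  # ceiling division
rounded=$(( extents * extent_mib ))
echo "${rounded} MiB (${extents} extents)"   # prints "152 MiB (38 extents)"
```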

Remount and verify:

sudo mount /dev/data_vg/logs_lv /mnt/logs
df -hT /mnt/logs

The volume is now smaller with the freed space returned to the VG:

Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/data_vg-logs_lv ext4  135M   14K  125M   1% /mnt/logs

Remove LVM Components

Removal follows the reverse order of creation: unmount, remove LV, remove VG, remove PV.

Remove a Single Logical Volume

sudo umount /mnt/logs
sudo lvremove -f /dev/data_vg/logs_lv

Confirm it is gone:

sudo lvs

Only app_lv remains:

  LV     VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  app_lv data_vg -wi-ao---- 600.00m

Remove a Disk from a Volume Group

If the disk holds data from an active LV, move the data off it first with pvmove. Then remove the PV from the VG:

sudo pvmove /dev/sdd
sudo vgreduce data_vg /dev/sdd
sudo pvremove /dev/sdd

pvmove migrates all extents off the disk to other PVs in the VG. If the remaining PVs do not have enough free space, the move fails (and your data stays safe).

Remove Everything

To tear down the entire LVM stack, remove in order:

sudo umount /mnt/app
sudo lvremove -f /dev/data_vg/app_lv
sudo vgremove data_vg
sudo pvremove /dev/sdb /dev/sdc

All LVM metadata is wiped from the disks. Remember to remove the corresponding fstab entries to avoid boot failures.
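One way to clear those entries is a sed delete on any line referencing the removed VG. The sketch below runs against a scratch file with made-up entries so nothing system-wide is touched; once the pattern looks right, run the same expression with `sudo sed -i` on /etc/fstab:

```shell
# Build a scratch fstab with illustrative entries, then delete every line
# that mounts a volume from the removed VG. The \|...| form uses | as the
# regex delimiter so the slashes in the path need no escaping.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd1234  /          xfs   defaults  0 0
/dev/data_vg/app_lv   /mnt/app   xfs   defaults  0 0
/dev/data_vg/logs_lv  /mnt/logs  ext4  defaults  0 0
EOF

sed '\|^/dev/data_vg/|d' /tmp/fstab.demo   # prints only the UUID root line
```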

LVM Quick Reference

  Operation                Command
  Create PV                pvcreate /dev/sdb
  Create VG                vgcreate data_vg /dev/sdb /dev/sdc
  Create LV                lvcreate -n app_lv -L 10G data_vg
  Extend LV + filesystem   lvextend -r -L +5G /dev/data_vg/app_lv
  Add disk to VG           pvcreate /dev/sdd && vgextend data_vg /dev/sdd
  Create snapshot          lvcreate -s -n snap -L 1G /dev/data_vg/app_lv
  Restore snapshot         lvconvert --merge /dev/data_vg/snap
  Reduce LV (ext4)         resize2fs /dev/VG/LV SIZE && lvreduce -L SIZE /dev/VG/LV
  Remove LV                lvremove -f /dev/data_vg/app_lv
  Remove disk from VG      pvmove /dev/sdd && vgreduce VG /dev/sdd
  Remove VG                vgremove data_vg
  Remove PV                pvremove /dev/sdb
  List PVs                 pvs (summary) or pvdisplay (detail)
  List VGs                 vgs (summary) or vgdisplay (detail)
  List LVs                 lvs (summary) or lvdisplay (detail)

When to Use LVM

LVM is worth the small setup overhead on nearly every server. The scenarios where it pays off most:

  • Database servers where data directories grow unpredictably. Extend the volume without stopping PostgreSQL or MariaDB
  • Docker hosts where /var/lib/docker fills up faster than expected
  • Multi-disk servers where you want one large filesystem spanning several drives
  • Pre-upgrade snapshots so you can roll back a kernel or application update instantly
  • Any server where you cannot predict storage needs at install time (which is most of them)

About the only cases where LVM adds unnecessary complexity are single-disk desktops with fixed storage needs and ephemeral cloud instances that get destroyed and rebuilt regularly. For filesystem-level backup strategies, see the Restic backup to S3 guide or the rsync with systemd timers article.

