BtrFS is a copy-on-write filesystem with native subvolumes, snapshots, transparent compression, data checksumming, and online resize. It’s the default root filesystem on Fedora 42 and openSUSE, and it’s a solid choice for any Ubuntu or Debian server where you want easy rollback points and pooled storage without the complexity of ZFS. This guide walks through creating a BtrFS filesystem on a second disk, mounting it, creating subvolumes, taking a snapshot, and checking usage.
The commands below are run against a 4 GB virtual disk attached as /dev/sda to an Ubuntu 24.04 LTS VM. The same sequence works on Debian 13, Fedora 42, and any distro shipping btrfs-progs.
Tested April 2026 on Ubuntu 24.04.4 LTS with kernel 6.8.0-101-generic and btrfs-progs v6.6.3
Step 1: Install btrfs-progs
The kernel has built-in BtrFS support on Ubuntu 24.04, Debian 13, and Fedora 42. (Red Hat removed BtrFS from RHEL back in RHEL 8, so Rocky Linux and AlmaLinux kernels don't ship it without a third-party kernel such as ELRepo's.) On supported distros, only the userspace tools need to be installed:
sudo apt update
sudo apt install -y btrfs-progs
On Fedora the package has the same name:
sudo dnf install -y btrfs-progs
Check the version after the install:
btrfs --version
A modern Ubuntu, Debian, or Fedora ships a reasonably current btrfs-progs release:
btrfs-progs v6.6.3
Step 2: Identify the target disk
Use lsblk to locate the disk you want to format. A disk with no partitions and an empty MOUNTPOINTS column is a candidate:
lsblk
In the test VM the new disk shows up as /dev/sda with no partitions and no mountpoint:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 4G 0 disk
vda 253:0 0 20G 0 disk
├─vda1 253:1 0 19G 0 part /
├─vda14 253:14 0 4M 0 part
├─vda15 253:15 0 106M 0 part /boot/efi
└─vda16 259:0 0 913M 0 part /boot
Double-check you have the right device before formatting: mkfs.btrfs will happily overwrite anything you point it at without asking for confirmation. If the disk carries an old filesystem signature, mkfs.btrfs refuses to proceed unless you pass -f, so only add that flag once you're sure you have the right disk.
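The "which disk is safe to format" check can be scripted. This sketch runs the filter against a captured lsblk sample so the logic works anywhere; on a live system, feed it `lsblk -rno NAME,TYPE,MOUNTPOINTS` instead. It skips any disk whose partitions are mounted, not just disks with an empty mountpoint field:

```shell
# Captured sample of `lsblk -rno NAME,TYPE,MOUNTPOINTS` (raw, no header).
sample='sda disk
vda disk
vda1 part /
vda15 part /boot/efi'

# Print disks that have no mounted descendants. Partition names start with
# the parent disk name in raw output, so a prefix match finds busy disks.
candidates=$(printf '%s\n' "$sample" | awk '
  $3 != "" { mounted[$1] = 1 }
  $2 == "disk" { disks[++n] = $1 }
  END {
    for (i = 1; i <= n; i++) {
      d = disks[i]; busy = 0
      for (dev in mounted) if (index(dev, d) == 1) busy = 1
      if (!busy) print d
    }
  }')
echo "$candidates"   # sda
```

In the sample, vda is excluded even though its own MOUNTPOINTS field is empty, because vda1 and vda15 are mounted. Treat the output as a shortlist, not a green light.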
Step 3: Create the filesystem
Format the disk with a label for easier identification later. The -f flag forces the format past any previous filesystem signature the tool detects:
sudo mkfs.btrfs -f -L data /dev/sda
The tool prints a summary of what it created:
Label: data
UUID: 54e05318-0152-40c2-a93a-db2a044796a9
Number of devices: 1
Devices:
ID SIZE PATH
1 4.00GiB /dev/sda
Step 4: Mount the filesystem
Create a mount point and mount the new filesystem. The default mount options are fine for a single-disk layout:
sudo mkdir -p /mnt/btrfs
sudo mount /dev/sda /mnt/btrfs
df -hT /mnt/btrfs
The Size column will be noticeably smaller than the raw disk size because BtrFS reserves some space for metadata. That’s normal:
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda btrfs 4.0G 5.8M 3.5G 1% /mnt/btrfs
To mount it automatically on boot, grab the UUID and add a line to /etc/fstab:
UUID=$(sudo blkid -s UUID -o value /dev/sda)
echo "UUID=$UUID /mnt/btrfs btrfs defaults,compress=zstd:3 0 0" | sudo tee -a /etc/fstab
The compress=zstd:3 option turns on transparent compression at a moderate level. Higher numbers (up to 15) compress harder at the cost of more CPU. For a server with fast NVMe and plenty of CPU, bumping it to zstd:6 typically pays off.
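A malformed fstab line can leave a system unbootable, so it's worth assembling the entry in a variable and sanity-checking it before appending. A minimal sketch (the UUID below is the one from the mkfs output above; capture yours with blkid as shown):

```shell
# Build the fstab entry in a variable so a typo never lands in /etc/fstab.
# Placeholder UUID; on a real system: uuid=$(sudo blkid -s UUID -o value /dev/sda)
uuid="54e05318-0152-40c2-a93a-db2a044796a9"
entry="UUID=$uuid /mnt/btrfs btrfs defaults,compress=zstd:3 0 0"

# Sanity check: a valid fstab line has exactly six whitespace-separated fields.
fields=$(printf '%s\n' "$entry" | awk '{ print NF }')
echo "$fields"   # 6

# Once it looks right:  echo "$entry" | sudo tee -a /etc/fstab
```

After editing fstab, `sudo mount -a` (or `sudo findmnt --verify`) confirms the entry parses before you trust it across a reboot.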
Step 5: Create subvolumes
Subvolumes are BtrFS’s equivalent of cheap filesystem-within-a-filesystem containers. They share the underlying storage pool but snapshot independently, can be mounted separately, and can carry their own compression or nocow attributes. A common layout is one for home data, one for the actual dataset, and one for snapshots:
sudo btrfs subvolume create /mnt/btrfs/@home
sudo btrfs subvolume create /mnt/btrfs/@data
sudo btrfs subvolume create /mnt/btrfs/@snapshots
Each successful create prints the path it just made. List them with btrfs subvolume list:
sudo btrfs subvolume list /mnt/btrfs
All three show up with their own IDs, generation numbers, top-level parent ID, and path:
ID 256 gen 7 top level 5 path @home
ID 257 gen 7 top level 5 path @data
ID 258 gen 7 top level 5 path @snapshots
The @ prefix is a convention that makes subvolumes easy to spot in directory listings. It has no technical meaning to BtrFS.
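Creating the layout one subvolume at a time gets tedious as layouts grow. A loop handles it; shown here as a dry run that prints the commands (drop the echo wrapper to execute for real, which needs root and the mounted filesystem):

```shell
# Dry run: print the subvolume-creation commands for the layout above.
mnt=/mnt/btrfs
cmds=$(for sv in @home @data @snapshots; do
  printf 'sudo btrfs subvolume create %s/%s\n' "$mnt" "$sv"
done)
echo "$cmds"
```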
Step 6: Take a read-only snapshot
BtrFS snapshots are instant because they’re just new references to the existing copy-on-write data. Create one from the @data subvolume into the @snapshots subvolume:
sudo sh -c 'echo hello-btrfs > /mnt/btrfs/@data/hello.txt'
sudo btrfs subvolume snapshot /mnt/btrfs/@data /mnt/btrfs/@snapshots/@data-1
The snapshot is immediately browsable and contains exactly what @data held the moment you created it:
ls -la /mnt/btrfs/@snapshots/@data-1/
The snapshot directory contains the same file you wrote to the source subvolume, now frozen in time:
total 4
drwxr-xr-x 1 root root 18 Apr 11 06:23 .
drwxr-xr-x 1 root root 14 Apr 11 06:23 ..
-rw-r--r-- 1 root root 12 Apr 11 06:23 hello.txt
For a read-only snapshot (safer for backup workflows because nothing can modify it), add -r:
sudo btrfs subvolume snapshot -r /mnt/btrfs/@data /mnt/btrfs/@snapshots/@data-ro
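For recurring snapshots, a timestamped naming scheme keeps them sorted chronologically and makes pruning scripts trivial. A sketch (the btrfs command itself needs root, so it's echoed as a dry run here):

```shell
# Timestamped read-only snapshot name: @data-YYYYMMDD-HHMMSS sorts by age.
src=/mnt/btrfs/@data
dst="/mnt/btrfs/@snapshots/@data-$(date +%Y%m%d-%H%M%S)"
echo sudo btrfs subvolume snapshot -r "$src" "$dst"
```

Dropped into a systemd timer or cron job, this gives you hourly or daily rollback points for almost no storage cost until the data diverges.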
Step 7: Check actual usage
df reports BtrFS capacity in a misleading way because of the DUP metadata profile and the way space is allocated in chunks. Use the BtrFS-specific filesystem usage command for an accurate picture:
sudo btrfs filesystem usage /mnt/btrfs
The output breaks down total, allocated, used, and free space in block-group detail:
Overall:
Device size: 4.00GiB
Device allocated: 536.00MiB
Device unallocated: 3.48GiB
Used: 416.00KiB
Free (estimated): 3.48GiB (min: 1.75GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Data,single: Size:8.00MiB, Used:0.00B (0.00%)
Metadata,DUP: Size:256.00MiB, Used:192.00KiB (0.07%)
Two things to notice: Data ratio 1.00 means regular (non-duplicated) data blocks, and Metadata ratio 2.00 means metadata is duplicated for safety (the DUP profile). On a multi-disk BtrFS pool, you could run RAID1 across devices for the data profile too, but a single-disk setup uses single-profile data and DUP metadata by default.
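For monitoring, you usually want just the estimated-free figure out of that output. This sketch parses the captured output above so it runs anywhere; on a live system, pipe `sudo btrfs filesystem usage /mnt/btrfs` in instead:

```shell
# Captured fragment of `btrfs filesystem usage` output.
usage='Overall:
    Device size:                   4.00GiB
    Device allocated:            536.00MiB
    Device unallocated:            3.48GiB
    Used:                        416.00KiB
    Free (estimated):              3.48GiB      (min: 1.75GiB)'

# Split on the first colon, then take the first word of the value field.
free_est=$(printf '%s\n' "$usage" |
  awk -F': *' '/Free \(estimated\)/ { split($2, a, " "); print a[1] }')
echo "$free_est"   # 3.48GiB
```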
Step 8: Deleting subvolumes and rollback
Deleting a subvolume is btrfs subvolume delete:
sudo btrfs subvolume delete /mnt/btrfs/@snapshots/@data-1
Rolling back to a snapshot is essentially a name swap: unmount the current subvolume if it's mounted separately, move it aside, snapshot the saved copy back into the target path, and remount. Because snapshots are cheap, many automation stacks (Snapper, Timeshift, rollback hooks in pacman or apt) take one before every update and roll back by running exactly this swap.
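The name swap is easier to see in miniature. This sketch simulates it with plain directories and cp in a temp dir so it runs without root or a BtrFS mount; on a real pool, mkdir/cp become btrfs subvolume create/snapshot, and the mv steps run against unmounted subvolume paths:

```shell
# Simulated pool: plain directories stand in for subvolumes.
pool=$(mktemp -d)
mkdir -p "$pool/@data" "$pool/@snapshots"
echo v1 > "$pool/@data/state.txt"
cp -a "$pool/@data" "$pool/@snapshots/@data-1"   # stands in for: btrfs subvolume snapshot

echo v2-broken > "$pool/@data/state.txt"          # a bad update lands

# The rollback swap: move the live copy aside, promote the snapshot.
mv "$pool/@data" "$pool/@data-broken"
cp -a "$pool/@snapshots/@data-1" "$pool/@data"    # stands in for a writable snapshot

cat "$pool/@data/state.txt"   # v1
```

On real BtrFS the "promote" step is another instant snapshot rather than a copy, so the whole rollback takes milliseconds regardless of data size, and @data-broken stays around for post-mortem until you delete it.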
Where to go next
Once you have BtrFS up, the natural follow-ups are setting up automated snapshot schedules (via snapper on openSUSE, or a systemd timer calling btrfs subvolume snapshot on Ubuntu), configuring rsync backups via systemd timers from snapshot directories, and wiring the pool into your server’s post-install checklist. For multi-disk layouts, BtrFS supports native RAID 0, 1, and 10 via mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb. See our NFS server guide if you plan to export a BtrFS subvolume to other machines on the LAN, or combine it with sshfs for quick cross-host mounts.