Clonezilla copies disks. That is the entire pitch. It reads only the used blocks on a partition, compresses them, and writes the result to an image file or straight to another disk. It handles 15+ filesystems (ext4, xfs, btrfs, NTFS, FAT32, HFS+, and more), works with both MBR and GPT, supports UEFI, and does all of this without costing a cent. The interface looks like it was designed in 2005 because it was. None of that matters when it clones a 20 GB disk in under two minutes.
This guide covers disk imaging, restoration, partition operations, direct disk-to-disk cloning, and network deployment with Clonezilla SE. Every command was tested on real hardware in a Proxmox VE lab with actual output captured. You will also find a full command reference table and a compression comparison so you can pick the right algorithm for your workload. If you need a broader backup strategy, see immutable backups with Restic and Borg for protecting images against ransomware.
Tested April 2026 with Clonezilla Live 3.3.1-35 on Rocky Linux 10.1 in a Proxmox VE lab. Partclone 0.3.45, kernel 6.18.9.
What Clonezilla Supports
Clonezilla uses Partclone under the hood, which reads filesystem metadata to skip unused blocks. This is why imaging a 20 GB disk with only 2 GB of actual data produces a tiny image. Here is what Clonezilla 3.3.x supports as of April 2026:
| Feature | Details |
|---|---|
| Supported filesystems | ext2/3/4, xfs, btrfs, FAT12/16/32, exFAT, NTFS, HFS+, UFS, VMFS3/5, minix, nilfs2, f2fs, JFS, ReiserFS |
| Partition table types | MBR (msdos) and GPT |
| Boot modes | Legacy BIOS and UEFI |
| Compression algorithms | gzip, bzip2, xz, lz4, lzo, zstd (default in 3.3.x) |
| Encryption | AES-256 via ecryptfs (-enc flag) |
| Image targets | Local disk, USB, NFS, SSH/SFTP, CIFS/SMB, AWS S3, WebDAV |
| Network cloning | Clonezilla SE via DRBL with PXE boot and multicast |
| Unsupported filesystem handling | Falls back to dd (sector-by-sector copy) |
For filesystems Clonezilla does not recognize, it copies the entire partition with dd, so nothing is lost. The image just won’t benefit from intelligent block skipping.
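You can see the gap Partclone exploits, apparent size versus blocks actually allocated, with a quick file-backed experiment. This is a sketch, not a Clonezilla command; it assumes mkfs.ext4 is installed and uses an illustrative filename (sparse.img):

```shell
# Create a 1 GB sparse file and put an empty ext4 filesystem on it.
truncate -s 1G sparse.img
mkfs.ext4 -q -F sparse.img

# Apparent size vs. blocks actually allocated on disk:
ls -lh sparse.img   # reports the full 1 GB apparent size
du -h sparse.img    # reports only the few MB that mkfs actually wrote
```

A dd fallback has to read all 1 GB; a filesystem-aware copier touches only the allocated blocks, which is why the 20 GB lab disk below produced such a small image.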
Prerequisites
- Clonezilla Live ISO (stable or alternative). Download from clonezilla.org/downloads
- Tested on: Clonezilla Live 3.3.1-35 (Debian 13 based), Rocky Linux 10.1 source VM
- Source system to image (physical or virtual)
- Storage destination: a second local disk, USB drive, NFS share, or SSH server. The image must be stored on a separate device from the source disk
- For USB boot: a 1 GB+ USB drive and a tool like dd or Rufus to write the ISO
- For VM testing: attach the Clonezilla ISO as a CD-ROM and add a second virtual disk for image storage
Boot Clonezilla Live
Grab the ISO from the Clonezilla download page. For physical machines, write it to a USB drive:
sudo dd if=clonezilla-live-3.3.1-35-amd64.iso of=/dev/sdX bs=4M status=progress
Replace /dev/sdX with your USB device. For VMs in Proxmox or similar hypervisors, attach the ISO to the virtual CD drive and set the boot order to boot from it first.
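Before running dd, double-check which device node the USB stick received; writing the ISO to the wrong disk is unrecoverable. One quick way to list candidates (lsblk ships with util-linux on virtually every distribution):

```shell
# List whole disks only (-d), with size, model, and transport bus.
# A USB stick normally shows TRAN=usb and a size matching the drive.
lsblk -d -o NAME,SIZE,MODEL,TRAN
```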
When Clonezilla boots, you will see a GRUB menu. Select Clonezilla live (VGA with large font & framebuffer) for the best console experience in VMs. The next screens ask for language and keyboard layout. Pick your language (English is the default), then choose Keep default keyboard layout unless you need something specific. Finally, select Start_Clonezilla to enter the main menu.
Create a Disk Image
The source disk in this lab is a 20 GB Rocky Linux 10.1 system disk with four partitions. This is the layout before imaging:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 2M 0 part
├─vda2 252:2 0 200M 0 part /boot/efi
├─vda3 252:3 0 1000M 0 part /boot
└─vda4 252:4 0 18.8G 0 part /
vdb 252:16 0 40G 0 disk
The vdb disk is a 40 GB secondary disk attached to the VM for storing the image. Clonezilla needs the image repository mounted at /home/partimag.
Prepare the Storage Disk
If your storage disk is blank, format it and create a filesystem. From the Clonezilla shell (press Ctrl+C at the main menu, or choose the shell option):
sudo parted /dev/vdb mklabel gpt
sudo parted /dev/vdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/vdb1
Mount it at the Clonezilla image repository path:
sudo mount /dev/vdb1 /home/partimag
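Before starting a long imaging run, it is worth confirming the repository really is mounted where Clonezilla expects it; if the mount failed silently, the image lands on the live environment's RAM disk and fills it quickly. A small check using findmnt (part of util-linux):

```shell
# Show the device and filesystem backing /home/partimag,
# or print a warning if nothing is mounted there.
findmnt /home/partimag || echo "WARNING: /home/partimag is not mounted"
```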
If you are using the Clonezilla wizard instead of the command line, choose device_image at the main menu, then local_dev as the repository location. Clonezilla will detect vdb1 and let you select it as the image destination.
Run the Image Creation
With the repository mounted, clone the entire disk to an image. The -q2 flag tells Clonezilla to use Partclone (the intelligent block copier), -z9 selects zstd compression, and -i 4096 splits the image into 4 GB chunks for compatibility with FAT32 storage:
sudo ocs-sr -q2 -c -j2 -z9 -i 4096 -batch -p true savedisk rocky10-image vda
Clonezilla processes each partition sequentially. The largest partition, vda4, completed in 42 seconds at 14.82 GB/min:
Partclone v0.3.45 http://partclone.org
Starting to clone device (/dev/vda4) to image (-)
Reading Super Block
Elapsed: 00:00:42, Remaining: 00:00:00, Completed: 100.00%
Total Time: 00:00:42, Ave. Rate: 14.82GB/min,
The entire 20 GB disk compressed down to 542 MB thanks to zstd and intelligent block copying. Only used blocks were read.
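That works out to roughly a 37x reduction, which you can sanity-check with shell arithmetic. Keep in mind most of the gain comes from skipping unused blocks, not from zstd alone:

```shell
# 20 GB disk -> 542 MB image, as an integer ratio
echo "$(( 20 * 1024 / 542 ))x"   # prints 37x
```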
Image Directory Structure
Clonezilla stores the image as a directory containing partition images plus metadata files. Here is what the rocky10-image directory contains:
ls -lh /home/partimag/rocky10-image/
The output shows individual files for each partition, compressed with zstd:
total 542M
-rw-r--r-- 1 root root 445K vda1.dd-ptcl-img.zst.aa
-rw-r--r-- 1 root root 5.0M vda2.vfat-ptcl-img.zst.aa
-rw-r--r-- 1 root root 193M vda3.xfs-ptcl-img.zst.aa
-rw-r--r-- 1 root root 344M vda4.xfs-ptcl-img.zst.aa
-rw-r--r-- 1 root root 1.2K blkdev.json
-rw-r--r-- 1 root root 512 blkid.list
-rw-r--r-- 1 root root 256 dev-fs.list
-rw-r--r-- 1 root root 2.1K fdisk.list
-rw-r--r-- 1 root root 3.4K vda-gpt.gdisk
-rw-r--r-- 1 root root 1.8K vda-gpt.sgdisk
-rw-r--r-- 1 root root 512 vda-mbr
-rw-r--r-- 1 root root 4.0K vda-pt.sf
-rw-r--r-- 1 root root 890 Info-saved-by-cmd.txt
-rw-r--r-- 1 root root 1.5K Info-img-id.txt
-rw-r--r-- 1 root root 768 parts
Notice the naming convention. vda1.dd-ptcl-img.zst.aa means partition 1 was copied with dd (BIOS boot has no recognized filesystem), while vda2.vfat-ptcl-img.zst.aa used the VFAT Partclone handler. The .zst suffix confirms zstd compression, and .aa is the first (and only) chunk. The metadata files store the partition table (vda-gpt.gdisk, vda-gpt.sgdisk), the MBR (vda-mbr), block device info, and filesystem types. This metadata is what allows Clonezilla to reconstruct the exact partition layout during restoration.
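The vda-pt.sf file is an sfdisk dump, and you can reproduce the save-and-replay mechanism on ordinary files, since sfdisk treats a regular file like a block device. This sketch uses illustrative filenames (disk-a.img, disk-b.img, layout.sf) and needs no root privileges:

```shell
# Two empty 100 MB "disks" as sparse files.
truncate -s 100M disk-a.img disk-b.img

# Give disk-a a GPT label with one partition spanning the disk.
printf 'label: gpt\n,,\n' | sfdisk -q disk-a.img

# Save the layout in the same dump format Clonezilla stores as vda-pt.sf.
sfdisk -d disk-a.img > layout.sf

# Replay the dump onto disk-b, recreating an identical layout.
sfdisk -q disk-b.img < layout.sf
```

Comparing `sfdisk -d` output for both files shows matching start sectors and sizes; only the device names differ. Restoration uses the same replay step against the real target disk.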
Restore a Disk Image
Restoration writes the image back to a disk, recreating the partition table and all partitions. Boot Clonezilla Live on the target machine, mount the storage disk at /home/partimag the same way as before, then run:
sudo mount /dev/vdb1 /home/partimag
Verify the image is visible:
ls /home/partimag/rocky10-image/
You should see the partition image files listed. Now restore the image to the target disk:
sudo ocs-sr --batch -g auto -e1 auto -e2 -c -r -j2 -p true restoredisk rocky10-image vda
The flags break down as follows:
- -g auto reinstalls GRUB automatically after restoration, which is critical for bootable disks
- -e1 auto adjusts the partition table to fit the target disk geometry
- -e2 runs sfdisk to restore the partition table from the saved layout
- -c confirms the image is not corrupt before writing (checksum verification)
- -r resizes the last partition to fill the target disk if it is larger than the source
- -j2 clones the hidden data between the MBR and the first partition
- -p true leaves you at the command prompt after completion (use -p reboot or -p poweroff instead to restart or shut down)
Clonezilla overwrites the target disk completely. The GPT partition table is written first, then each partition is restored from its compressed image. After completion, remove the Clonezilla media and reboot into the restored system.
Restoring to a Larger Disk
The -r flag handles this automatically. If the source was a 20 GB disk and the target is 50 GB, Clonezilla restores the original partition layout and then expands the last partition to use the remaining space. For more granular control over partition sizes during restore, use the -k1 flag instead, which creates the partition layout proportionally.
Partition Operations
Sometimes you only need one partition, not the whole disk. Clonezilla handles this with saveparts and restoreparts.
Save a single partition (the root filesystem on vda4):
sudo ocs-sr -q2 -c -j2 -z9 -i 4096 -batch -p true saveparts root-backup vda4
Restore that partition to the same or different location:
sudo ocs-sr --batch -c -r -j2 -p true restoreparts root-backup vda4
You can also save multiple partitions in one image by listing them:
sudo ocs-sr -q2 -c -j2 -z9 -batch -p true saveparts boot-and-root vda3 vda4
Partition-level imaging is useful when you want to back up only the OS partition before a risky upgrade while leaving the data partition untouched. For a complementary approach using LVM snapshots for point-in-time captures, see the guide on LVM snapshots for consistent server backups.
Disk-to-Disk Cloning
If you don’t need an intermediate image and just want to copy one disk directly to another, Clonezilla can do that with ocs-onthefly. Both disks must be attached to the same machine.
sudo ocs-onthefly -g auto -e1 auto -e2 -r -j2 -sfsck -scs -batch -f vda -d vdb
This reads from vda and writes directly to vdb with no image file in between. The -f flag specifies the source disk and -d the destination. The -sfsck flag skips filesystem checks on the source (speeds things up when you know the source is clean), and -scs skips the checksum step. Both disks can be different sizes because -r resizes the last partition to fill the target.
Direct cloning is faster than image-and-restore because it eliminates the compression and decompression step. The trade-off is no stored backup. You have the clone, but if something goes wrong mid-clone, you have neither the original nor a usable copy.
Command-Line Reference
The ocs-sr command accepts dozens of flags. These are the ones you will use most often:
| Flag | Purpose | Example |
|---|---|---|
| -q2 | Use Partclone for imaging (intelligent block copy) | -q2 |
| -z1 | Compress with gzip (compatible, moderate speed) | -z1 |
| -z9 | Compress with zstd (fast, excellent ratio) | -z9 |
| -z9p | Compress with zstd, parallel threads | -z9p |
| -z5 | Compress with lz4 (fastest, larger images) | -z5 |
| -z6 | Compress with lzo | -z6 |
| -z7 | Compress with xz (smallest images, slowest) | -z7 |
| -i SIZE | Split image into chunks (MB); 4096 = 4 GB chunks | -i 4096 |
| -batch | Non-interactive mode, skip confirmations | -batch |
| -p true | Action after completion: true (stay at prompt), reboot, poweroff | -p reboot |
| -g auto | Reinstall GRUB after restore (auto-detect config) | -g auto |
| -r | Resize last partition to fill target disk | -r |
| -k1 | Create partition layout proportionally on target | -k1 |
| -e1 auto | Adjust partition geometry for target disk | -e1 auto |
| -e2 | Use sfdisk to restore partition table | -e2 |
| -c | Verify image integrity before restoring | -c |
| -j2 | Clone hidden data between MBR and first partition | -j2 |
| -enc | Encrypt image with ecryptfs (AES-256) | -enc |
| -cm | Check image for corruption after creation | -cm |
| -gm | Generate MD5 checksums for all image files | -gm |
| -sfsck | Skip filesystem check on source | -sfsck |
The most common combination for creating an image is -q2 -c -j2 -z9 -i 4096 -batch. For restoring, use --batch -g auto -e1 auto -e2 -c -r -j2. Adjust the compression flag based on whether you prioritize speed (-z5 for lz4) or image size (-z7 for xz).
Clonezilla SE: Network Cloning
Clonezilla SE (Server Edition) is the network deployment version. It runs on top of DRBL (Diskless Remote Boot in Linux) and lets you clone an image to dozens or hundreds of machines simultaneously over the network using PXE boot and multicast.
The architecture works like this: one server holds the disk image and runs a DHCP/PXE/TFTP stack. Client machines boot from the network, receive the Clonezilla environment via PXE, and pull the image using multicast. Because multicast sends data once to all clients simultaneously, cloning 50 machines takes roughly the same time as cloning one.
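The multicast advantage is easy to quantify with a back-of-envelope estimate. These figures are assumptions for illustration, not measurements: the 542 MB lab image, 50 clients, and roughly 112 MB/s of usable gigabit throughput:

```shell
IMAGE_MB=542 CLIENTS=50 LINK_MBS=112   # assumed figures, not measurements
# Unicast sends the image once per client; multicast sends it once total.
echo "unicast:   $(( IMAGE_MB * CLIENTS / LINK_MBS ))s"
echo "multicast: $(( IMAGE_MB / LINK_MBS ))s"
```

Under these assumptions, serving 50 clients one at a time takes about four minutes of pure transfer time, while multicast streams the same image to all of them in a few seconds.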
Setting up Clonezilla SE requires installing DRBL on a server (Ubuntu or Debian works best; add the DRBL package repository from drbl.org first, since the package is not in the stock distribution repos):
sudo apt install drbl
Then configure DRBL with the setup script:
sudo /opt/drbl/sbin/drblsrv -i
This downloads the necessary boot images and configures the PXE environment. Next, set up the network for client machines:
sudo /opt/drbl/sbin/drblpush -i
The drblpush command walks you through configuring the DHCP range, network interface, and client settings. Once DRBL is running, start the Clonezilla SE session:
sudo /opt/drbl/sbin/dcs
The dcs (DRBL Client Service) menu lets you choose between multicast-restore (send one image to all clients at once), broadcast-restore, or select-in-client (each client picks their own image). For large deployments, multicast is the right choice. The server waits for all clients to PXE boot, then streams the image once. With gigabit networking, expect throughput similar to local disk speeds.
Clonezilla SE is the tool to reach for when you need to deploy a standard image to an entire lab, classroom, or data center rack. For environments where disk cloning is part of a larger backup strategy, consider combining Clonezilla images with immutable backup storage to protect against image corruption or deletion.
Compression Comparison
Choosing the right compression algorithm depends on your priority. The table below compares the options available in Clonezilla 3.3.x, ranked by typical imaging speed. These are general characteristics based on the algorithm behavior; actual numbers vary by disk content and CPU.
| Algorithm | ocs-sr flag | Speed | Compression ratio | Best for |
|---|---|---|---|---|
| lz4 | -z5 | Fastest | Low (larger images) | Fast cloning when storage is cheap |
| lzo | -z6 | Very fast | Low to moderate | Slight improvement over lz4 with minimal speed penalty |
| zstd | -z9 | Fast | High | Default for most workloads. Best speed-to-ratio balance |
| zstd (parallel) | -z9p | Fast (multi-core) | High | Multi-core systems where zstd is already the choice |
| gzip | -z1 | Moderate | Moderate | Maximum compatibility with older tools |
| bzip2 | -z2 | Slow | Good | Rarely needed. gzip or zstd are better in every scenario |
| xz | -z7 | Slowest | Highest (smallest images) | Archival storage where image size matters more than time |
In the lab test, zstd (-z9) compressed a 20 GB Rocky Linux 10.1 disk to 542 MB at 14.82 GB/min. That is an excellent balance. If you are imaging hundreds of machines with Clonezilla SE over the network, lz4 may be worth the larger image size because multicast throughput is the bottleneck, not storage. For archival backups that will sit on a shelf for months, xz squeezes out every byte at the cost of significantly longer imaging time.
One practical tip: zstd with the -z9p flag enables parallel compression across multiple CPU cores. On a 4-core VM, this can cut imaging time nearly in half compared to single-threaded zstd. If your hardware has cores to spare, use it.
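You can reproduce the single-threaded versus multi-threaded difference with the standalone zstd tool, whose -T option is the same mechanism -z9p relies on (-T0 means "use all cores"). This sketch assumes zstd is installed and uses illustrative filenames:

```shell
# Build a ~64 MB compressible sample file.
yes "sample data for compression benchmarking" | head -c 64M > sample.bin

# Single-threaded compression, then all-cores compression.
time zstd -q -f -o single.zst sample.bin
time zstd -q -f -T0 -o multi.zst sample.bin
```

On a multi-core machine the second run finishes noticeably faster, and both outputs decompress to identical data.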
Encrypting Images
Clonezilla supports AES-256 encryption via ecryptfs. Add the -enc flag when creating an image:
sudo ocs-sr -q2 -c -j2 -z9 -i 4096 -enc -batch -p true savedisk rocky10-encrypted vda
Clonezilla prompts for a passphrase during image creation. During restoration, you must supply the same passphrase or the image cannot be read. Keep this passphrase somewhere safe. A lost passphrase means a lost image with no recovery path.
Encryption adds minimal overhead to the imaging process because it runs after compression. The data is already small by the time it gets encrypted.
Verifying Image Integrity
Before trusting a backup, verify it. The -cm flag checks an image after creation, and -gm generates MD5 checksums for every file in the image directory. To check an existing image without restoring it:
sudo ocs-sr -cm -batch chkimg rocky10-image
This reads every partition image, decompresses it, and verifies that Partclone can parse the data correctly. If any file is corrupt, Clonezilla reports the specific partition that failed. Run this check periodically on stored images, especially before relying on them for disaster recovery. A backup you have never tested is not a backup.
Quick Reference: Common Workflows
These are the commands from this guide collected in one place for quick copy-paste access.
Create a full disk image with zstd compression:
sudo ocs-sr -q2 -c -j2 -z9 -i 4096 -batch -p true savedisk IMAGE_NAME DISK
Restore a disk image with GRUB reinstall and partition resize:
sudo ocs-sr --batch -g auto -e1 auto -e2 -c -r -j2 -p true restoredisk IMAGE_NAME DISK
Save a single partition:
sudo ocs-sr -q2 -c -j2 -z9 -batch -p true saveparts IMAGE_NAME PARTITION
Restore a single partition:
sudo ocs-sr --batch -c -r -j2 -p true restoreparts IMAGE_NAME PARTITION
Direct disk-to-disk clone:
sudo ocs-onthefly -g auto -e1 auto -e2 -r -j2 -sfsck -scs -batch -f SOURCE_DISK -d TARGET_DISK
Verify an existing image:
sudo ocs-sr -cm -batch chkimg IMAGE_NAME
Replace IMAGE_NAME, DISK, PARTITION, SOURCE_DISK, and TARGET_DISK with your actual device and image names. The image repository must be mounted at /home/partimag before running any of these commands.