Configure Software RAID on Rocky Linux 10 / AlmaLinux 10 using mdadm

mdadm is the standard Linux utility for managing software RAID arrays. It creates, monitors, and manages MD (multiple device) arrays without a dedicated hardware RAID controller, making it a cost-effective way to add disk redundancy and performance to any server.

This guide covers installing mdadm, creating RAID 1, RAID 5, and RAID 10 arrays, formatting and mounting, persisting the configuration, setting up email monitoring alerts, and replacing failed disks on Rocky Linux 10 and AlmaLinux 10. The same steps apply to RHEL 10.

Prerequisites

  • A server running Rocky Linux 10, AlmaLinux 10, or RHEL 10
  • Root or sudo access
  • At least 2 unused disks for RAID 1, 3 for RAID 5, or 4 for RAID 10
  • A working mail relay if you want email alerts (postfix, mailx, or similar)

RAID Level Quick Reference

RAID Level        Method                                    Min Disks
RAID 1 (Mirror)   Mirroring: identical copy on each disk    2
RAID 5 (Parity)   Striping with distributed parity          3
RAID 10 (1+0)     Mirrored pairs that are then striped      4
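As a quick sanity check before choosing a level, the usable capacity can be computed from the disk count and size. A minimal shell sketch; `usable_gb` is an illustrative helper, not part of mdadm:

```shell
# Illustrative helper: usable capacity for N identical disks of SIZE (GB).
usable_gb() {
  local level=$1 n=$2 size=$3
  case $level in
    1)  echo "$size" ;;               # RAID 1: one disk's worth, regardless of N
    5)  echo $(( (n - 1) * size )) ;; # RAID 5: one disk's worth lost to parity
    10) echo $(( n / 2 * size )) ;;   # RAID 10: half the total
  esac
}

usable_gb 5 3 10    # three 10 GB disks in RAID 5  -> 20
usable_gb 10 4 10   # four 10 GB disks in RAID 10 -> 20
```

These figures match the arrays built later in this guide: four 10 GB disks in RAID 10 yield roughly 20 GB of usable space.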

Step 1: Install mdadm on Rocky Linux 10 / AlmaLinux 10

Install the mdadm package and confirm it is available.

sudo dnf install mdadm -y

Verify the installation.

mdadm --version

Step 2: Identify Available Disks

List all block devices to find the disks you will use for the RAID array.

lsblk

Sample output showing four 10 GB disks ready for RAID.

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   40G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   39G  0 part
  ├─rl-root 253:0    0   35G  0 lvm  /
  └─rl-swap 253:1    0    4G  0 lvm  [SWAP]
sdb           8:16   0   10G  0 disk
sdc           8:32   0   10G  0 disk
sdd           8:48   0   10G  0 disk
sde           8:64   0   10G  0 disk

In this example we have /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. Replace these with your actual device names throughout the guide.
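If the server has many devices, the candidate disks can be filtered from lsblk's columns rather than read by eye. A small sketch of the filter, fed sample input here; on a live system you would pipe `lsblk --noheadings -o NAME,TYPE` into it:

```shell
# Keep only whole-disk rows from "NAME TYPE" output (drops partitions, LVM, etc.).
disks_only() {
  awk '$2 == "disk" { print "/dev/" $1 }'
}

# Sample lsblk-style input; live usage: lsblk --noheadings -o NAME,TYPE | disks_only
printf 'sda disk\nsda1 part\nsdb disk\nsdc disk\n' | disks_only
```

This lists /dev/sda as well, so always cross-check against the mount points before touching a disk.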

Step 3: Prepare Disks with Partition Tables

Create GPT partition tables and a single partition on each disk. Install parted if it is not already present.

sudo dnf install parted -y

Switch to root to run partition commands without repeating sudo.

sudo -i

Create a GPT label, a full-disk partition, and set the RAID flag on each disk.

for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  parted --script "$disk" "mklabel gpt"
  parted --script "$disk" "mkpart primary 0% 100%"
  parted --script "$disk" "set 1 raid on"
done

Verify the partition layout on one of the disks.

parted --script /dev/sdb "print"

Expected output.

$ parted --script /dev/sdb "print"
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  10.7GB  10.7GB               primary  raid

Step 4: Create a Software RAID Array with mdadm

Choose one of the RAID levels below depending on your requirements. Each example creates an array at /dev/md0.

Create RAID 1 (Mirror – 2 disks)

RAID 1 mirrors data across two disks. If one disk fails, the other has a full copy. Usable capacity is half the total disk space.

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Create RAID 5 (Striping with Parity – 3 disks)

RAID 5 stripes data with distributed parity across three or more disks. One disk can fail without data loss. Usable capacity is (N-1) disks.

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

Create RAID 10 (Mirror + Stripe – 4 disks)

RAID 10 combines mirroring and striping. It needs at least four disks and can tolerate one disk failure per mirrored pair. Usable capacity is half the total. This is the recommended level for database servers and high-I/O workloads.

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

When prompted with “Continue creating array?”, type y and press Enter. You will see output confirming the array started.

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Step 5: Verify the RAID Array

Check the array status with mdadm --detail. This is the primary command for inspecting any MD array, and if you later layer LVM logical volumes on top of the RAID, the same output confirms the health of the underlying devices.

mdadm --detail /dev/md0

Sample output for a RAID 10 array.

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 21 10:15:32 2026
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Mar 21 10:17:00 2026
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : server01:0  (local to host server01)
              UUID : 089d8e15:403e261e:dbbee899:36a357b2
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

You can also check the kernel’s view of all MD arrays.

cat /proc/mdstat

Examine individual member disks with the --examine flag.

mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
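For scripting health checks, the State line can be pulled out of the detail output. A small sketch, assuming the standard `mdadm --detail` layout shown above; the function name is illustrative:

```shell
# Extract the "State :" value from `mdadm --detail` output on stdin.
array_state() {
  awk -F' : ' '/^ *State :/ { print $2; exit }'
}

# Live usage (requires root):  mdadm --detail /dev/md0 | array_state
# Against a line like the sample output above:
printf '             State : clean\n' | array_state   # -> clean
```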

Step 6: Create a Filesystem and Mount the RAID

Format the array with ext4 (or xfs if you prefer the default Rocky Linux filesystem).

mkfs.ext4 /dev/md0

Create a mount point and mount the array.

mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

Verify the mount.

$ df -hT /mnt/raid
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4   20G   45M   19G   1% /mnt/raid

Step 7: Save mdadm Configuration

Write the array definition to /etc/mdadm.conf so the system reassembles the array automatically on boot.

mdadm --detail --scan >> /etc/mdadm.conf

Verify the content was written.

cat /etc/mdadm.conf

You should see a line like this.

ARRAY /dev/md0 metadata=1.2 name=server01:0 UUID=089d8e15:403e261e:dbbee899:36a357b2

Step 8: Add Persistent Mount in /etc/fstab

Add the RAID device to /etc/fstab for automatic mounting at boot. Using the UUID is more reliable than the device name.

blkid /dev/md0

Copy the UUID from the output, then add a line to fstab. Open the file.

sudo vi /etc/fstab

Add the following line at the end (replace the UUID with your actual value).

UUID=e2570369-67d8-444c-9125-6afdb68dc7d5  /mnt/raid  ext4  defaults  0  0

Test the fstab entry without rebooting.

umount /mnt/raid
mount -a
df -hT /mnt/raid
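The fstab line can also be generated from blkid output instead of copied by hand, which avoids transcription mistakes. A sketch; `fstab_entry` is a made-up helper name for illustration:

```shell
# Build an fstab entry for a given filesystem UUID, mount point, and fs type.
fstab_entry() {
  printf 'UUID=%s  %s  %s  defaults  0  0\n' "$1" "$2" "$3"
}

# Live usage (appends to fstab; requires the array and filesystem to exist):
#   fstab_entry "$(blkid -s UUID -o value /dev/md0)" /mnt/raid ext4 | sudo tee -a /etc/fstab
fstab_entry e2570369-67d8-444c-9125-6afdb68dc7d5 /mnt/raid ext4
```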

Step 9: Monitor RAID with Email Alerts

mdadm includes a built-in monitoring daemon that watches for disk failures, spare activations, and rebuild events. When something goes wrong, it sends an email alert so you can respond before data is at risk.

Install mailx for sending alerts

The monitoring daemon needs a mail command to send notifications. Install mailx and make sure your system can relay mail (through Postfix or a similar MTA).

sudo dnf install mailx -y

Configure the mdadm monitor daemon

Open the mdadm configuration file.

sudo vi /etc/mdadm.conf

Add a MAILADDR line with your email address at the top of the file.

MAILADDR [email protected]

Enable and start the mdadm monitoring service.

sudo systemctl enable --now mdmonitor

Verify the service is running.

sudo systemctl status mdmonitor

You can test the monitoring manually by sending a test alert.

mdadm --monitor --scan --test --oneshot

Check your inbox for the test email. If it does not arrive, verify that your MTA is configured and that port 25 (TCP) is open on the server’s firewall.
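Email alerts can be supplemented with a simple cron-driven check: a degraded array shows an underscore in the [UUUU] status string in /proc/mdstat. A minimal sketch; the function name and any cron wiring are illustrative, not part of mdadm:

```shell
# Return success (0) if any md array has a missing member ("_" in its [UUUU] string).
mdstat_degraded() {
  grep -qE '\[[U_]*_[U_]*\]'
}

# Live usage:  mdstat_degraded < /proc/mdstat && echo "RAID degraded!"
# Against sample healthy and degraded status lines:
echo 'blocks super 1.2 [4/4] [UUUU]' | mdstat_degraded && echo degraded || echo healthy
echo 'blocks super 1.2 [4/3] [UU_U]' | mdstat_degraded && echo degraded || echo healthy
```

A one-line cron job calling this check gives you a second alerting path if mail delivery from mdmonitor ever breaks.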

Step 10: Simulate and Replace a Failed Disk

Knowing how to replace a failed disk is critical for production RAID arrays. Here is the full procedure.

Mark a disk as failed (simulate failure)

For testing purposes, you can mark a disk as faulty.

mdadm --manage /dev/md0 --fail /dev/sdd1

Check the array status to confirm the failure.

$ mdadm --detail /dev/md0
             State : clean, degraded

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       -       0        0        2      removed
       3       8       65        3      active sync set-B   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

Remove the failed disk

mdadm --manage /dev/md0 --remove /dev/sdd1

Add a replacement disk

After physically replacing the failed drive (or using the same device name for testing), partition the new disk the same way as in Step 3, then add it to the array.

parted --script /dev/sdd "mklabel gpt"
parted --script /dev/sdd "mkpart primary 0% 100%"
parted --script /dev/sdd "set 1 raid on"

Add the new partition to the array.

mdadm --manage /dev/md0 --add /dev/sdd1

The array will begin rebuilding automatically. Monitor the rebuild progress.

$ cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdd1[4] sde1[3] sdc1[1] sdb1[0]
      20953088 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
      [=====>...............]  recovery = 28.3% (2965504/10476544) finish=3.2min speed=38945K/sec

Wait for the rebuild to complete, then verify the array is clean again.

mdadm --detail /dev/md0

The State should show clean with all devices listed as active sync.
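If you want to script the wait rather than watching by hand, the rebuild percentage can be parsed out of /proc/mdstat. A sketch against the sample line above; `rebuild_pct` is an illustrative name:

```shell
# Extract the recovery/resync percentage from /proc/mdstat-style text on stdin.
rebuild_pct() {
  grep -oE '(recovery|resync) = [0-9.]+%' | grep -oE '[0-9.]+'
}

# Live usage:  while [ -n "$(rebuild_pct < /proc/mdstat)" ]; do sleep 30; done
echo '[=====>...............]  recovery = 28.3% (2965504/10476544)' | rebuild_pct   # -> 28.3
```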

Step 11: Add a Hot Spare Disk

A hot spare sits idle in the array until a disk fails, then automatically takes over the failed disk’s role and starts rebuilding. This reduces the window of vulnerability.

Prepare the spare disk. This example assumes an additional unused disk, /dev/sdf; substitute your own device name.

parted --script /dev/sdf "mklabel gpt"
parted --script /dev/sdf "mkpart primary 0% 100%"
parted --script /dev/sdf "set 1 raid on"

Add it as a spare.

mdadm --manage /dev/md0 --add /dev/sdf1

Verify the spare is recognized. It will show as spare in the detail output.

$ mdadm --detail /dev/md0 | tail -8
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       4       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

       5       8       81        -      spare   /dev/sdf1

Useful mdadm Commands Reference

Command                                       Purpose
mdadm --detail /dev/md0                       Full array status and disk list
cat /proc/mdstat                              Kernel-level array status and rebuild progress
mdadm --examine /dev/sdb1                     Show superblock info on a member disk
mdadm --manage /dev/md0 --fail /dev/sdb1      Mark a disk as failed
mdadm --manage /dev/md0 --remove /dev/sdb1    Remove a failed disk from the array
mdadm --manage /dev/md0 --add /dev/sdg1       Add a new or spare disk to the array
mdadm --stop /dev/md0                         Stop (deactivate) an array
mdadm --assemble --scan                       Reassemble all arrays from mdadm.conf

Conclusion

You now have a working software RAID array on Rocky Linux 10 / AlmaLinux 10 managed by mdadm, with persistent mounting, email monitoring alerts, and a clear disk replacement procedure. You can also layer LVM on top of RAID for flexible volume management.

For production servers, consider adding a hot spare to every array, regularly checking /proc/mdstat for degraded arrays, testing your email alerts quarterly, and keeping a documented runbook for disk replacement procedures. Refer to the Linux kernel md documentation for advanced configuration options.
