How to Create RAID with mdadm and Replace a Failed Disk
Proxmox VE provides powerful tools for building reliable, scalable storage for virtual machines, and RAID is one of the key technologies for protecting your data. In this step-by-step guide you will learn how to create a RAID array with mdadm on Proxmox, detect a failed disk, and safely replace it without affecting your virtualized workloads. Whether you are building a homelab or maintaining a production environment, each command is explained so that beginners and system administrators alike can follow along. You will also see how to verify disk health, rebuild the RAID array, and confirm that everything is working properly, making your Proxmox environment more resilient and disaster-ready.
1.1. Lab
The Proxmox host server has 3 or more disks:
Disk 1: where the Proxmox OS is installed
Disk 2, Disk 3, …: where VM disks, backup files, ISO files, etc. are stored
1.2. Preparation
Install Proxmox in a VM with 1 disk. After installing the OS, add 2 more disks to run RAID 1 (add n hard drives to run RAID 5, RAID 10, etc.). Then edit the VM configuration to give the OS disk a serial number:
nano /etc/pve/qemu-server/101.conf
serial=VM105DISK01
1.3. Set serial for DISK
Real hard drives have different serial numbers, so to make the simulation realistic we assign a distinct serial to each virtual disk of the VM.
nano /etc/pve/qemu-server/101.conf
Add a serial option to each of the two new data disks:
serial=VM105DISK02
serial=VM105DISK03
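For reference, the serial is added as an option on the disk line in the VM config. The exact bus (scsi, virtio, sata), storage name, volume names, and size below are only illustrative and will differ in your setup:
scsi1: local-lvm:vm-101-disk-1,serial=VM105DISK02,size=32G
scsi2: local-lvm:vm-101-disk-2,serial=VM105DISK03,size=32G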
1.4. Start setup
# Install mdadm if not already there
apt update
apt install mdadm -y
# Wipe the 2 data drives if needed (CAUTION: this destroys existing data and filesystem signatures)
# Check the device names with lsblk first and make sure you only wipe the new disks
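To see which device name belongs to which disk before wiping anything, the serials set earlier make the mapping unambiguous:
lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT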
wipefs -a /dev/sdb
wipefs -a /dev/sdc
# Create RAID 1 named md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
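If you attached more than two extra disks, the same command builds other RAID levels; for example, a RAID 5 array over three disks (assuming the third disk shows up as /dev/sdd):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd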
# Test RAID operation
cat /proc/mdstat
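The array is usable immediately, but the initial sync runs in the background. If you prefer to wait until it finishes, mdadm can block until the resync is done:
mdadm --wait /dev/md0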
# Create file system (ext4)
mkfs.ext4 /dev/md0
# Create mount directory
mkdir /mnt/raid_data
# Test mount
mount /dev/md0 /mnt/raid_data
# Set auto mount after reboot:
echo '/dev/md0 /mnt/raid_data ext4 defaults,nofail 0 2' >> /etc/fstab
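Optionally, instead of the line above you can mount by filesystem UUID, which keeps working even if the md device is ever renumbered (e.g. to /dev/md127):
UUID=$(blkid -s UUID -o value /dev/md0)
echo "UUID=$UUID /mnt/raid_data ext4 defaults,nofail 0 2" >> /etc/fstab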
# Create RAID configuration file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
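So that the array is assembled under the same name early in the boot process, also refresh the initramfs after updating mdadm.conf:
update-initramfs -u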
# Add the mounted directory as a Proxmox storage
# Move the VM disks to the new RAID storage and start the VM
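A minimal sketch of these two steps from the command line, assuming a storage ID of raid1-data and a VM disk named scsi0 (both names are examples; the same can be done in the GUI via Datacenter → Storage and the VM's Hardware tab):
pvesm add dir raid1-data --path /mnt/raid_data --content images,iso,backup
qm move_disk 101 scsi0 raid1-data
On recent Proxmox releases the move command is also available as qm disk move.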
1.5. Simulate a Disk Failure
Simulate a failure of disk /dev/sdb.
Step 1: Identify the faulty hard drive
mdadm --detail /dev/md0
cat /proc/mdstat
Check the type and serial number of the faulty hard drive
ls -l /dev/disk/by-id/
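On virtual SCSI/SATA disks the serial usually appears at the end of the by-id name, so you can filter for it (VM105DISK is the serial prefix used in this lab):
ls -l /dev/disk/by-id/ | grep VM105DISK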
Step 2: Remove the faulty drive from the RAID
mdadm --manage /dev/md0 --remove /dev/sdb
If mdadm still lists the disk as active, the remove command will be refused; in that case mark the disk as failed first, as shown below, and then re-run the remove.
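If needed, mark the disk as failed before removing it:
mdadm --manage /dev/md0 --fail /dev/sdb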
Step 3: Attach a new drive to replace it
nano /etc/pve/qemu-server/101.conf
serial=VM105DISK05
Suppose the newly attached drive is recognized as /dev/sdb.
Check with:
lsblk
Step 4: Add new drive to RAID
mdadm --add /dev/md0 /dev/sdb
Check rebuild progress:
watch cat /proc/mdstat
Step 5: Check again when rebuild is complete
mdadm --detail /dev/md0
→ The output should show 2 active drives ([UU]) and the array will return to the "clean" state.
Update the RAID configuration. Remove or comment out the old ARRAY line in /etc/mdadm/mdadm.conf first (appending again with >> would otherwise leave a duplicate entry), then save the new configuration so the array is assembled automatically after a reboot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
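After the next reboot, a quick check confirms that the array was assembled and mounted as expected:
cat /proc/mdstat
findmnt /mnt/raid_data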