First, we must create partitions of the SAME size (in blocks) on both disks:
fdisk /dev/sdc > n (new partition) > p (primary type of partition) > 1 (partition number) > 2048 (first sector: default) > 1953525167 (last sector: default) > t (change partition type) - selected partition nb. 1 > fd (set it to Linux raid autodetect) > w (write and exit)
fdisk /dev/sdd > n (new partition) > p (primary type of partition) > 1 (partition number) > 2048 (first sector: default) > 1953525167 (last sector: default) > t (change partition type) - selected partition nb. 1 > fd (set it to Linux raid autodetect) > w (write and exit)
fdisk -l /dev/sdc

Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect

root@cl3-amd-node2:~# fdisk -l /dev/sdd

Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdd1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect
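Instead of repeating the interactive fdisk dialogue on the second disk, the partition table can also be cloned from the first one with sfdisk; a minimal sketch, assuming the same /dev/sdc and /dev/sdd as above:

sfdisk -d /dev/sdc | sfdisk /dev/sdd    # dump sdc's partition table and write it to sdd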
Now we can create the raid using mdadm. The parameter --level=1 defines raid1.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
We can watch the progress of building the raid:
cat /proc/mdstat

md1 : active raid1 sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  1.8% (17759616/976630464) finish=110.0min speed=145255K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk
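To follow the resync continuously instead of re-running cat, the status can be refreshed with watch; a small sketch (the 5-second interval is just an example):

watch -n 5 cat /proc/mdstat    # redraw the resync status every 5 seconds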
Now we can add a spare disk:
fdisk /dev/sde > n (new partition) > p (primary type of partition) > 1 (partition number) > 2048 (first sector: default) > 1953525167 (last sector: default) > t (change partition type) - selected partition nb. 1 > fd (set it to Linux raid autodetect) > w (write and exit)
mdadm --add-spare /dev/md1 /dev/sde1
And now we can see the details of the raid:
mdadm --detail /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar 14 11:56:28 2017
     Raid Level : raid1
     Array Size : 976630464 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Mar 14 12:00:49 2017
          State : clean, resyncing
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

  Resync Status : 3% complete

           Name : cl3-amd-node2:1  (local to host cl3-amd-node2)
           UUID : 919632d4:74908819:4f43bba3:33b89328
         Events : 52

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1

       2       8       65        -      spare   /dev/sde1
And we can see it here too:
cat /proc/mdstat

md1 : active raid1 sde1[2](S) sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  7.5% (73929920/976630464) finish=103.3min speed=145533K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>
After a reboot, we may not see our md1 device anymore, like this:
root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

unused devices: <none>
In that case we can reassemble it with this command, without triggering a resync:
mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1
mdadm: /dev/md1 has been started with 2 drives.

root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[0] sdd1[1]
      976630464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

unused devices: <none>
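If we do not remember which partitions belong to the array, mdadm can also scan the superblocks and assemble what it finds on its own; a sketch (the exact behaviour depends on what is already listed in mdadm.conf):

mdadm --assemble --scan    # assemble arrays found via mdadm.conf / superblock scan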
If we want this raid to start automatically at boot, we must add the array to mdadm.conf. First, we scan for our arrays, then we append the new one to /etc/mdadm/mdadm.conf.
root@cl3-amd-node2:/etc/drbd.d# mdadm --examine --scan
...
ARRAY /dev/md/1  metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0 spares=1
cat /etc/mdadm/mdadm.conf
...
# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0 spares=1
echo "ARRAY /dev/md/1 metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1" >> /etc/mdadm/mdadm.conf
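As a cross-check, the configuration of the running arrays can be printed and compared with what now stands in mdadm.conf; a small sketch:

mdadm --detail --scan    # prints an ARRAY line for every currently running array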
The last step is to update the initramfs so that the mdadm.conf inside it is updated as well:
update-initramfs -u
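To verify that the new mdadm.conf really ended up inside the initramfs, the image can be inspected; a sketch for Debian-based systems (the initrd path for the running kernel is an assumption):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf    # the file should be listed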
If a bad or missing disk needs to be replaced, we must create a partition on the new disk with the same size.
fdisk -l /dev/sdb

Disk /dev/sdb: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 488397167 488395120 232.9G fd Linux raid autodetect
The degraded array (a raid5 in this example):
mdadm --detail /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Fri May 27 09:08:25 2016
     Raid Level : raid5
     Array Size : 488132608 (465.52 GiB 499.85 GB)
  Used Dev Size : 244066304 (232.76 GiB 249.92 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Apr 20 11:33:11 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : cl2-sm-node3:1  (local to host cl2-sm-node3)
           UUID : 827b1c8a:5a1a1e7c:1bb5624f:9aa491b1
         Events : 692

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1
       3       8       49        2      active sync   /dev/sdd1
Now we can add the new disk to this array:
mdadm --manage /dev/md1 --add /dev/sdb1
mdadm: added /dev/sdb1
And it is done; the array starts rebuilding:
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[4] sde1[1] sdd1[3]
      488132608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  0.3% (869184/244066304) finish=197.5min speed=20515K/sec
      bitmap: 0/2 pages [0KB], 65536KB chunk
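Instead of polling /proc/mdstat until the rebuild ends, mdadm can block until the recovery is done; a small sketch:

mdadm --wait /dev/md1    # returns once the resync/recovery on md1 has finished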
If we have a problem with a disk, we can remove it while the array is running. First, we must mark it as failed. So let us look at a good, working raid1:
mdadm --detail /dev/md0

/dev/md0:
     Raid Level : raid1
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

      active sync   /dev/sda1
      active sync   /dev/sdb1
      spare         /dev/sde1
Now mark the partition /dev/sda1 as faulty:
mdadm /dev/md0 -f /dev/sda1
mdadm --detail /dev/md0

/dev/md0:
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1

 Rebuild Status : 0% complete

      spare rebuilding   /dev/sde1
      active sync        /dev/sdb1
      faulty             /dev/sda1
cat /proc/mdstat

md0 : active raid1 sda1[0](F) sde1[2] sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (2292928/976629760) finish=169.9min speed=95538K/sec
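Once the rebuild onto the spare has finished, the faulty partition can also be removed from the array before the drive is physically pulled; a sketch, assuming /dev/sda1 is still marked as faulty:

mdadm --manage /dev/md0 --remove /dev/sda1    # remove the faulty member from md0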
I waited until this operation finished. Then I halted the server, removed the failed drive and inserted a new one. After power-on, we create a partition table on /dev/sda exactly like the old one (i.e. matching the currently active disks, for example by cloning it with sfdisk). Then we re-add the partition to the raid as a spare:
mdadm /dev/md0 -a /dev/sda1
mdadm --detail /dev/md0

/dev/md0:
     Raid Level : raid1
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

      active sync   /dev/sde1
      active sync   /dev/sdb1
      spare         /dev/sda1
cat /proc/mdstat

md0 : active raid1 sda1[3](S) sde1[2] sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk