Encrypted LVM partition on software raid-1 with mdadm

In another post, https://www.gonscak.sk/?p=201, I described how to create a RAID-1 software array with mdadm on Linux. Now I will add an encrypted filesystem on top of it.

First, check that we have a working software RAID:

sudo mdadm --misc --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Aug 22 09:34:23 2018
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Thu Aug 23 14:18:50 2018
             State : active 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : gw36:0  (local to host gw36)
              UUID : ded4f30e:1cfb20cb:c10b843e:df19a8ff
            Events : 3481
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Now the drives are synced and clean, so it is time to encrypt. If the dm-crypt module is not loaded yet, load it:

modprobe dm-crypt
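
We can quickly verify that the module is loaded (an optional check):

lsmod | grep dm_crypt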

Now create the LUKS volume with a passphrase:

sudo cryptsetup --cipher=aes-xts-plain --verify-passphrase --key-size=512 luksFormat /dev/md0

And we can open it:

sudo cryptsetup  luksOpen /dev/md0 cryptdisk
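
Optionally, we can inspect the state of the opened mapping, and it is a good habit to back up the LUKS header somewhere safe; the backup path below is only an example:

sudo cryptsetup status cryptdisk
sudo cryptsetup luksHeaderBackup /dev/md0 --header-backup-file /root/md0-luks-header.img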

Now we can create a physical volume, a volume group and as many logical volumes as we need.

sudo pvcreate /dev/mapper/cryptdisk
sudo vgcreate raid1 /dev/mapper/cryptdisk
sudo lvcreate --size 500G --name lv-home raid1

sudo pvs
  PV                     VG        Fmt  Attr PSize    PFree
  /dev/mapper/cryptdisk  raid1     lvm2 a--    <1,82t 1,33t
sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  raid1       1   1   0 wz--n-   <1,82t 1,33t
sudo lvs
  LV      VG        Attr       LSize
  lv-home raid1     -wi-ao---- 500,00g            

Next, we create a filesystem on this logical volume:

sudo mkfs.ext4 /dev/mapper/raid1-lv--home

And we can mount it:

sudo mount /dev/mapper/raid1-lv--home crypt-home/

Now we have an encrypted partition (disk) for our home directory.
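
If the encrypted volume should be opened and mounted automatically at boot, a minimal sketch of the /etc/crypttab and /etc/fstab entries could look like this (the /home mount point and the options are assumptions, adjust them to your setup; the passphrase will be asked for during boot):

# /etc/crypttab: <name> <source device> <key file> <options>
cryptdisk    /dev/md0    none    luks

# /etc/fstab
/dev/mapper/raid1-lv--home    /home    ext4    defaults    0    2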


How to create software RAID 1 with mdadm with a spare

First, we must create partitions of the SAME size (in blocks) on the disks:

fdisk /dev/sdc
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition no. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
fdisk /dev/sdd
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition no. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
fdisk -l /dev/sdc
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect
root@cl3-amd-node2:~# fdisk -l /dev/sdd
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdd1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect

Now we can create the RAID using mdadm. The parameter --level=1 defines RAID-1.

 mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

We can watch the progress of building the raid:

cat /proc/mdstat
md1 : active raid1 sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  1.8% (17759616/976630464) finish=110.0min speed=145255K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk
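
To follow the resync continuously, something like this is handy (purely a convenience):

watch -n 5 cat /proc/mdstat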

Now we can add a spare disk:

fdisk /dev/sde
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition no. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
mdadm --add-spare /dev/md1 /dev/sde1

And now we can see the details of the RAID:

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar 14 11:56:28 2017
     Raid Level : raid1
     Array Size : 976630464 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Tue Mar 14 12:00:49 2017
          State : clean, resyncing
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
  Resync Status : 3% complete
           Name : cl3-amd-node2:1  (local to host cl3-amd-node2)
           UUID : 919632d4:74908819:4f43bba3:33b89328
         Events : 52
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        -      spare   /dev/sde1

And we can see it here too:

cat /proc/mdstat
md1 : active raid1 sde1[2](S) sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  7.5% (73929920/976630464) finish=103.3min speed=145533K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>

After a reboot, our md1 device may not come up, as in this case:

root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>

We can assemble it again with this command, without triggering a resync:

mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1
mdadm: /dev/md1 has been started with 2 drives.
root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[0] sdd1[1]
      976630464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>

If we want this RAID to start automatically at boot, we must add the array to mdadm.conf. First, we scan for our arrays and then append the missing one to /etc/mdadm/mdadm.conf.

root@cl3-amd-node2:/etc/drbd.d# mdadm --examine --scan
...
ARRAY /dev/md/1  metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0
   spares=1
cat /etc/mdadm/mdadm.conf
...
# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0
   spares=1
echo "ARRAY /dev/md/1  metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1" >> /etc/mdadm/mdadm.conf

The last step is to update the initramfs so that it contains the new mdadm.conf:

update-initramfs -u
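
On Debian/Ubuntu we can check that the new mdadm.conf really ended up inside the initramfs (an optional check; the initrd file name depends on the running kernel):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf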

If we need to replace a bad or missing disk, we must create a partition of the same size on the new disk.

fdisk -l /dev/sdb
Disk /dev/sdb: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 488397167 488395120 232.9G fd Linux raid autodetect
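
Instead of recreating the partition by hand in fdisk, the partition table can also be copied from a surviving member of the array, for example (device names as in this example; assuming MBR labels and disks of the same size):

sfdisk -d /dev/sdd | sfdisk /dev/sdb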

Degraded array:

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri May 27 09:08:25 2016
     Raid Level : raid5
     Array Size : 488132608 (465.52 GiB 499.85 GB)
  Used Dev Size : 244066304 (232.76 GiB 249.92 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Thu Apr 20 11:33:11 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : cl2-sm-node3:1  (local to host cl2-sm-node3)
           UUID : 827b1c8a:5a1a1e7c:1bb5624f:9aa491b1
         Events : 692
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1
       3       8       49        2      active sync   /dev/sdd1

Now we can add the new disk to this array:

mdadm --manage /dev/md1 --add /dev/sdb1
   mdadm: added /dev/sdb1

And it's done; the rebuild starts automatically:

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[4] sde1[1] sdd1[3]
      488132608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  0.3% (869184/244066304) finish=197.5min speed=20515K/sec
      bitmap: 0/2 pages [0KB], 65536KB chunk
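
If the rebuild is too slow, or conversely slows the server down too much, the kernel's RAID resync speed limits can be tuned; the values below are only examples:

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
sysctl -w dev.raid.speed_limit_max=200000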

If a disk starts to have problems, we can replace it while the array is running. First, we must mark it as failed. Let's look at a healthy, working RAID-1 with a spare:

mdadm --detail /dev/md0
/dev/md0:
 Raid Level : raid1
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 State : clean
 Active Devices : 2
 Working Devices : 3
 Failed Devices : 0
 Spare Devices : 1
active sync /dev/sda1
active sync /dev/sdb1
spare /dev/sde1

Now mark disk sda1 as faulty:

mdadm /dev/md0 -f /dev/sda1
mdadm --detail /dev/md0
/dev/md0:
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 Persistence : Superblock is persistent
 State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
 Spare Devices : 1
Rebuild Status : 0% complete
spare rebuilding /dev/sde1
active sync /dev/sdb1
faulty /dev/sda1
cat /proc/mdstat
md0 : active raid1 sda1[0](F) sde1[2] sdb1[1]
 976629760 blocks super 1.2 [2/1] [_U]
 [>....................] recovery = 0.2% (2292928/976629760) finish=169.9min speed=95538K/sec
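
Once the rebuild onto the spare has finished, the failed member can also be removed from the array before the drive is physically pulled (an optional step; below I simply powered the server off instead):

mdadm /dev/md0 -r /dev/sda1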

I waited until this operation finished. Then I halted the server, removed the failed drive and inserted a new one. After powering on, we create a partition table on /dev/sda exactly like the old one (or like the currently active disks). Then we re-add it as a spare to the RAID:

 mdadm /dev/md0 -a /dev/sda1
mdadm --detail /dev/md0
/dev/md0:
 Raid Level : raid1
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
 Spare Devices : 1
active sync /dev/sde1
active sync /dev/sdb1
spare /dev/sda1
cat /proc/mdstat
md0 : active raid1 sda1[3](S) sde1[2] sdb1[1]
 976629760 blocks super 1.2 [2/2] [UU]
 bitmap: 1/8 pages [4KB], 65536KB chunk