Well. I changed the disk, restarted the NAS, and a few minutes later I received an email:
This is an automatically generated mail message from mdadm
running on nas
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active (auto-read-only) raid5 sdd[3] sda[0] sdc[2]
5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
bitmap: 7/15 pages [28KB], 65536KB chunk
unused devices: <none>
Great, this is exactly what I was expecting: [4/3] [U_UU] means only three of the four array members are active, the missing second slot being the replaced disk.
Now, run a conveyance test on the new disk:
root@nas:~# smartctl -t conveyance /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.19.0-0.bpo.4-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Conveyance self-test routine immediately in off-line mode".
Drive command "Execute SMART Conveyance self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 5 minutes for test to complete.
Test will complete after Tue Jun 4 21:14:49 2019
Use smartctl -X to abort test.
Wait 5 minutes and check the results:
root@nas:~# smartctl -a /dev/sdb
blah blah blah
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Conveyance offline Completed without error 00% 0 -
Great, the self-test did not detect any problem caused by transport.
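As a side note, instead of scrolling through the full -a output, smartctl can print just the self-test log:

```shell
# Show only the SMART self-test log for the new disk
smartctl -l selftest /dev/sdb
```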
Now create the partition table:
root@nas:~# sfdisk -d /dev/sda | sfdisk /dev/sdb
sfdisk: /dev/sda: does not contain a recognized partition table
OK, let's take a closer look:
root@nas:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sdb 1,8T disk
sdc 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sdd 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sde 58,6G iso9660 disk
└─sde1 8,3G ext4 part /
Since the RAID members are whole disks, there is no partition table for sfdisk to copy, and I guess the RAID signature will be written to the new disk by rebuilding the array. So go ahead!
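The add command itself is not shown above; for a whole-disk member, the usual way to put the replacement disk into the degraded array (device names as in the lsblk output) would be:

```shell
# Add the replacement disk to the degraded array; mdadm writes the
# RAID superblock to /dev/sdb and starts the rebuild automatically.
mdadm /dev/md0 --add /dev/sdb
```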
And now, we just have to wait!
root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb[4] sdd[3] sda[0] sdc[2]
5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
[>....................] recovery = 0.1% (2964856/1953383424) finish=241.2min speed=134766K/sec
bitmap: 7/15 pages [28KB], 65536KB chunk
unused devices: <none>
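mdadm's own estimate can be double-checked from the numbers in that recovery line: the counters in parentheses are 1K blocks done/total, and speed is in K/sec, so the remaining time is (total - done) / speed. A quick sketch using the values shown above:

```shell
# Values copied from the mdstat recovery line above
line='recovery = 0.1% (2964856/1953383424) finish=241.2min speed=134766K/sec'

# Extract done/total block counts and the rebuild speed
done_k=$(echo "$line"  | sed -n 's/.*(\([0-9]*\)\/.*/\1/p')
total_k=$(echo "$line" | sed -n 's/.*\/\([0-9]*\)).*/\1/p')
speed=$(echo "$line"   | sed -n 's/.*speed=\([0-9]*\)K\/sec.*/\1/p')

# Remaining 1K blocks divided by K/sec gives seconds; convert to minutes
mins=$(( (total_k - done_k) / speed / 60 ))
echo "$mins minutes remaining"   # matches mdadm's finish=241.2min
```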
Just verifying that I was right about the RAID signature (and assuming the rebuild begins at the first sector):
root@nas:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sdb 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sdc 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sdd 1,8T linux_raid_member disk
└─md0 5,5T crypto_LUKS raid5
sde 58,6G iso9660 disk
└─sde1 8,3G ext4 part /