Hello everyone,
I am running OMV 3.0.99 on my NAS. I have a RAID5 array (mdadm) consisting of three 5 TB WD Red drives, which I set up through the OMV GUI back in 2016.
Recently one of the disks failed. When I got home I noticed that it made strange noises on spin-up and was no longer recognized by Linux at all. The drive seems to be physically dead, so I have started an RMA since the drives are still under warranty. Now I am waiting for the replacement and asking myself what I should do once the new drive is here to recover my RAID.
Some notes on my drives:
- sda is my system drive
- the RAID consisted of disks sdb, sdc and sdd; sdd failed and has been physically removed from the NAS case
- sdd is now my backup disk (it was sde before the failed RAID disk was removed)
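Because the device letters shifted after removing the dead drive, I have been double-checking which physical disk is which by serial number instead of trusting the sdX names. For reference, these are the generic commands I use for that (not output from my box):

ls -l /dev/disk/by-id/ | grep -v part   # map drive serials/WWNs to their current sdX names
lsblk -o NAME,SIZE,SERIAL,TYPE          # the same mapping in table form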
Here are some important outputs of my system:
"uname -a" output
Linux homenas 4.9.0-0.bpo.6-amd64 #1 SMP Debian 4.9.88-1+deb9u1~bpo8+1 (2018-05-13) x86_64 GNU/Linux
root@homenas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdb[0] sdc[1]
9767278592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>
root@homenas:~# blkid
/dev/sda1: UUID="911053a9-f06c-4479-becb-cb8faa2a5783" TYPE="ext4" PARTUUID="2c92f843-01"
/dev/sda5: UUID="28ae7474-1d14-48a6-9e8e-2ed31e060803" TYPE="swap" PARTUUID="2c92f843-05"
/dev/sdb: UUID="bb8b3798-d160-71b4-cc60-bc8fdc8e0761" UUID_SUB="e52bb12c-23e1-7c8f-a7f7-d52d4b2b46a9" LABEL="HomeNAS:NAS" TYPE="linux_raid_member"
/dev/sdc: UUID="bb8b3798-d160-71b4-cc60-bc8fdc8e0761" UUID_SUB="d9eac207-7167-d19e-c1de-8c7525b77d48" LABEL="HomeNAS:NAS" TYPE="linux_raid_member"
/dev/sdd1: UUID="523cffe7-115d-49b4-95e0-7549aecdf064" TYPE="ext4" PARTUUID="fba4a7ee-026a-497f-9b3d-bbdec92cb0d6"
/dev/md127: UUID="bd5ef96f-5587-4211-95c0-10219985ff6d" TYPE="ext4"
root@homenas:~# fdisk -l | grep "Disk "
Disk /dev/sda: 29,8 GiB, 32017047552 bytes, 62533296 sectors
Disk identifier: 0x2c92f843
Disk /dev/sdb: 4,6 TiB, 5000981078016 bytes, 9767541168 sectors
Disk /dev/sdc: 4,6 TiB, 5000981078016 bytes, 9767541168 sectors
Disk /dev/sdd: 1,8 TiB, 2000394706432 bytes, 3907020911 sectors
Disk identifier: C0401C51-A74A-4675-935E-AF9BF6706166
Disk /dev/md127: 9,1 TiB, 10001693278208 bytes, 19534557184 sectors
root@homenas:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md/NAS metadata=1.2 name=HomeNAS:NAS UUID=bb8b3798:d16071b4:cc60bc8f:dc8e0761
# instruct the monitoring daemon where to send mail alerts
MAILADDR <<<<REMOVED FOR PRIVACY REASONS>>>>
root@homenas:~# mdadm --detail --scan --verbose
ARRAY /dev/md127 level=raid5 num-devices=3 metadata=1.2 name=HomeNAS:NAS UUID=bb8b3798:d16071b4:cc60bc8f:dc8e0761
devices=/dev/sdb,/dev/sdc
root@homenas:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sat Mar 12 17:22:49 2016
Raid Level : raid5
Array Size : 9767278592 (9314.80 GiB 10001.69 GB)
Used Dev Size : 4883639296 (4657.40 GiB 5000.85 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 27 13:11:42 2019
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : HomeNAS:NAS
UUID : bb8b3798:d16071b4:cc60bc8f:dc8e0761
Events : 305
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 0 0 4 removed
I have searched the internet and found various guides, but I don't know which steps are necessary in my situation (I have sketched the full command sequence right after this list):
- mark the disk as failed
- remove the disk from the array
- copy the partition table from one of the remaining array disks to the new replacement drive
- re-add the drive to the array (the rebuild should then start automatically)
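For reference, here is the full sequence I pieced together from those guides. This is only my untested sketch: the new drive's name (/dev/sde) is an assumption I would verify after installing it, and since blkid shows the surviving members as whole disks (/dev/sdb and /dev/sdc, no partitions), I assume the partition-table copy step does not apply here and the new disk can be added raw:

lsblk                              # confirm what the new drive is actually named
mdadm /dev/md127 --add /dev/sde    # add the whole disk to the degraded array (assumed name)
cat /proc/mdstat                   # the rebuild should start automatically; watch progress here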
Since the disk failed completely and was no longer present in Linux, I could not mark it as failed or remove it from the array. I have found the following command for removing a disk from the array that is no longer present:
mdadm /dev/md127 -r detached
Is it recommended to run this command before I install the new drive, or is it not necessary to remove the old drive from the array in my case?
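My understanding from the mdadm(8) man page (an assumption on my part, please correct me) is that -r detached removes any member whose device node has disappeared, so the check before adding the new disk would be:

mdadm /dev/md127 -r detached    # drop members whose device node is gone
mdadm --detail /dev/md127       # the failed slot should show as "removed"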
I would really appreciate your guidance!
Thanks in advance