Hello everyone,
I've been using openmediavault for nearly two years with a RAID 5 array (3x WD Red 4 TB).
Recently my NAS stopped working, and I traced the fault to the power supply (an external brick feeding a PicoPSU). During troubleshooting I also removed the mainboard and the SATA cables. I think I reconnected the SATA cables in the same order as before, but I'm not 100 % sure.
After replacing the power supply, the system boots again and OMV shows all three drives as physical disks, but no RAID array.
I've read many threads about problems like this, but I'm not sure what I can do without destroying anything (my last backup doesn't contain all the data).
I retrieved the following information:
1. cat /proc/mdstat
2. blkid
/dev/sda: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="f082f604-46c3-9868-87d2-3a812ba76a6d" LABEL="openmediavault:Meins" TYPE="linux_raid_member"
/dev/sdc: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="21a38f7b-9b5b-d2e1-758a-68d1d8d823c1" LABEL="openmediavault:Meins" TYPE="linux_raid_member"
/dev/sdb1: UUID="8fcb8f05-0087-4943-849b-29809705ae97" TYPE="ext4" PARTUUID="db38b868-01"
/dev/sdb3: UUID="06b87404-2eba-4acf-8079-24812f838995" TYPE="ext4" PARTUUID="db38b868-03"
/dev/sdb5: UUID="2fa71961-def1-415d-83ab-c13c2def7e37" TYPE="swap" PARTUUID="db38b868-05"
/dev/sdd: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="81d2fe2f-8f12-8980-6e83-c79a8d0e57ee" LABEL="openmediavault:Meins" TYPE="linux_raid_member"
3. fdisk -l | grep "Disk "
Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdc: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdb: 28 GiB, 30016659456 bytes, 58626288 sectors
Disk identifier: 0xdb38b868
Disk /dev/sdd: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
4. cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:Meins UUID=7a3a0eca:76153cc2:f086e85a:5cba2017
# instruct the monitoring daemon where to send mail alerts
MAILADDR [my mail address]
MAILFROM root
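As a sanity check, I verified that the UUID blkid reports on the three member disks and the UUID in the mdadm.conf ARRAY line are really the same value, just written with different separators (blkid uses dashes, mdadm.conf uses colons):

```shell
# Same UUID in two notations, copied from my output above.
blkid_uuid="7a3a0eca-7615-3cc2-f086-e85a5cba2017"
mdadm_uuid="7a3a0eca:76153cc2:f086e85a:5cba2017"
# Strip the separators from both and compare the raw hex digits.
if [ "$(printf '%s' "$blkid_uuid" | tr -d '-')" = "$(printf '%s' "$mdadm_uuid" | tr -d ':')" ]; then
    echo "UUIDs match"
fi
# → UUIDs match
```

So the superblocks on the disks still belong to the array defined in mdadm.conf, which I take as a good sign.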
5. mdadm --detail --scan --verbose
This command produces no output at all; it just returns to an empty prompt.
As I don't want to destroy anything, it would be great if someone can tell me if there's a chance to restore the raid array and what I can try to do so. Thank you!
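From the threads I've read so far, the usual non-destructive first steps seem to be stopping the failed array and examining the member superblocks with `mdadm --examine` (read-only), before attempting any reassembly; `--assemble --force` apparently exists as a last resort but I'd rather not touch it without advice. I have NOT run any of this yet; the snippet below only prints my planned commands so someone can correct the order or device names first:

```shell
# Print-only plan: nothing here is executed against the disks.
# Device names sda/sdc/sdd are taken from my blkid output above;
# please tell me if the order or the commands themselves are wrong.
plan='mdadm --stop /dev/md0
mdadm --examine /dev/sda /dev/sdc /dev/sdd
mdadm --assemble --verbose /dev/md0 /dev/sda /dev/sdc /dev/sdd'
printf '%s\n' "$plan"
```

Does that look like a sane order, or should I examine the superblocks before stopping the array?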
- Edit: additionally, OMV sent me the following mails just before it stopped working (one per drive):
This is an automatically generated mail message from mdadm
running on openmediavault
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdd.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0](F) sdd[2](F) sdc[1](F)
7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/0] [___]
bitmap: 3/30 pages [12KB], 65536KB chunk
unused devices: <none>
This is an automatically generated mail message from mdadm
running on openmediavault
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sda.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0](F) sdd[2] sdc[1](F)
7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [__U]
bitmap: 1/30 pages [4KB], 65536KB chunk
This is an automatically generated mail message from mdadm
running on openmediavault
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdc.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0] sdd[2] sdc[1](F)
7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
bitmap: 1/30 pages [4KB], 65536KB chunk
unused devices: <none>