Hi there,
sorry for my English, as it isn't my first language.
I have a small self-built OMV NAS with an SSD as the boot drive and three WD Reds (two 5 TB and one 6 TB, per the fdisk output below) in a software RAID5 for storage.
A dying power supply caused the system to crash.
With a new power supply OMV boots up again, but the RAID5 array is gone.
Some googling led me to the following:
Code
root@omv:~# blkid
/dev/sdc: UUID="19db90ae-32e5-20b1-a262-0b98da017f4d" UUID_SUB="fe2cdb75-0c5b-fc2a-4749-4e74320dfd8a" LABEL="OMV:Volume1" TYPE="linux_raid_member"
/dev/sda: UUID="19db90ae-32e5-20b1-a262-0b98da017f4d" UUID_SUB="5fe1c647-5813-b767-0356-18d5ceb7b81c" LABEL="OMV:Volume1" TYPE="linux_raid_member"
/dev/sdd1: UUID="4A68-E9C2" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="313ed214-e145-4e28-a27b-38f8150acfb7"
/dev/sdd2: UUID="c2d9f6b4-77dc-4079-ac45-7cea74cb7a23" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="38b478ba-8e2b-44d0-851f-0197777f82dd"
/dev/sdd3: UUID="ccd538da-ef5b-4669-b6ef-d56e30ef5c72" TYPE="swap" PARTUUID="29ddab1a-1186-47b5-8397-00c5f7e7743d"
/dev/sdb: UUID="19db90ae-32e5-20b1-a262-0b98da017f4d" UUID_SUB="24436374-6c74-bcc1-eb4b-a4230e7ccd98" LABEL="OMV:Volume1" TYPE="linux_raid_member"
root@omv:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb[1](S) sdc[3](S) sda[0](S)
15627670536 blocks super 1.2
root@omv:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Working Devices : 3
Name : OMV:Volume1
UUID : 19db90ae:32e520b1:a2620b98:da017f4d
Events : 46629
Number Major Minor RaidDevice
- 8 32 - /dev/sdc
- 8 0 - /dev/sda
- 8 16 - /dev/sdb
root@omv:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR xxxxxxxxxxxxx
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md/Volume1 metadata=1.2 name=OMV:Volume1 UUID=19db90ae:32e520b1:a2620b98:da017f4d
root@omv:~# fdisk -l | grep "Disk "
Disk /dev/sdc: 5,46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EFZX-68B
Disk /dev/sda: 4,55 TiB, 5000981078016 bytes, 9767541168 sectors
Disk model: WDC WD50EFRX-68L
Disk /dev/sdd: 59,63 GiB, 64023257088 bytes, 125045424 sectors
Disk model: TS64GSSD370
Disk identifier: CD27F2B6-9489-478A-B73E-1A5A98DE183D
Disk /dev/sdb: 4,55 TiB, 5000981078016 bytes, 9767541168 sectors
Disk model: WDC WD50EFRX-68L
root@omv:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=OMV:Volume1 UUID=19db90ae:32e520b1:a2620b98:da017f4d
devices=/dev/sda,/dev/sdb,/dev/sdc
sdd is the system drive; the RAID members are sda, sdb, and sdc.
I noticed mdadm is showing the wrong RAID level (raid0 instead of raid5).
In the OMV web console I can see the disks and the filesystem (which is marked as missing), but the Software RAID section is empty.
Can anyone give me a hint how to fix this, or is the array actually destroyed?
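From my googling, an inactive md array apparently often reports the wrong level, and the usual suggestion seems to be: examine the member superblocks (read-only) to compare event counts, then stop the inactive array and try a plain reassembly. This is what I was thinking of trying, but I haven't run it yet and I'm not sure it's safe in my situation:

```shell
# Read-only: dump each member's superblock to compare event counts and device roles
mdadm --examine /dev/sda /dev/sdb /dev/sdc

# Stop the inactive array, then try a normal reassembly (no --force for now)
mdadm --stop /dev/md127
mdadm --assemble --scan --verbose
```

From what I read, if the event counts of all three members match, a plain reassembly should bring the array back as raid5, and --force would only be a last resort. Is that correct?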
Thanks in advance to everyone