Hello everyone,
cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdg[1](S) sdc[3](S) sdb[4](S) sdh[2](S) sdd[5](S) sda[0](S) sde[6](S)
6835316280 blocks super 1.2
unused devices: <none>
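All seven members show up, but each one is flagged as a spare (S) and the array stays inactive. If it helps, I can also post the superblock info from each disk; as far as I understand, that would be something like:

mdadm --examine /dev/sd[a-e] /dev/sdg /dev/sdh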
blkid:
/dev/sdc: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="4d1e1b6b-bab1-107e-0789-af8c5f7bf547" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sdb: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="423a1aad-fcea-7158-436a-115ea156385c" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sdd: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="f518e6a4-3403-24ae-a75a-88e2b0722a89" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sda: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="1bed1572-c467-cace-222b-9b0305a4557f" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sde: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="5c0d7d2a-a569-37d9-1c43-2117fc1246e4" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sdg: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="291e58ea-9112-4d4b-1563-af9342dc8243" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sdh: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="bbbce84b-1ef4-ba8d-1f98-63d17884e7ae" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"
/dev/sdf1: UUID="CDBE-D745" TYPE="vfat" PARTUUID="299b948a-937b-4097-90b6-59ed7ea06813"
/dev/sdf2: UUID="475d099c-df49-4b02-b831-92bff08eaec4" TYPE="ext4" PARTUUID="3fef7530-5dad-42c4-bc82-ba7d60381dce"
/dev/sdf3: UUID="63ecac96-3b50-4975-861d-0ff07f2ed681" TYPE="swap" PARTUUID="f136f054-23a1-46e4-8827-a2a8b4eddf48"
fdisk -l | grep "Disk ":
Disk /dev/sdc: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM048-2E71
Disk /dev/sdb: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10SPZX-00Z
Disk /dev/sdd: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EADS-98M
Disk /dev/sda: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: SAMSUNG HD103SI
Disk /dev/sde: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM003-9YN1
Disk /dev/sdg: 931 GiB, 999643152384 bytes, 1952428032 sectors
Disk model: HDD/2
Disk /dev/sdh: 931 GiB, 999643152384 bytes, 1952428032 sectors
Disk model: HDD/1
Disk /dev/sdf: 90 GiB, 96626278400 bytes, 188723200 sectors
Disk model: Sys/Mirror
Disk identifier: 111D9AB6-2E07-4633-9EAB-E02AFC72AA27
cat /etc/mdadm/mdadm.conf:
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR RalfRichter@Richter-Audio.de
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=HULK.local:HulkBuster UUID=8541d67d:e2204d81:769010c7:6e45facf
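One thing I noticed: the only ARRAY line here is for a different array (name HULK.local:HulkBuster) and not for the HulkRaid array that my seven disks actually belong to, and the UUIDs don't match either. Since the file is auto-generated by openmediavault I haven't edited it by hand, but if the missing definition is the problem, I assume the correct entry could be obtained with:

mdadm --detail --scan

and then added through OMV rather than directly into the file. Is that the right direction?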
mdadm --detail --scan --verbose:
INACTIVE-ARRAY /dev/md127 num-devices=7 metadata=1.2 name=HULK.local:HulkRaid UUID=70b2de20:3e368789:e2a1866e:cb067917
devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdg,/dev/sdh
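Before I try anything: from similar threads it sounds like the usual first step is to stop the inactive array and attempt to re-assemble it, roughly like this (device names taken from the output above; please correct me if this is wrong or risky):

mdadm --stop /dev/md127
mdadm --assemble --verbose /dev/md127 /dev/sd[a-e] /dev/sdg /dev/sdh

Is that safe to try on a RAID 6 in this state, or should I post more diagnostics first?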
The setup:
2 devices in a (hardware) RAID 1 for the OMV system
7 devices in a (software) RAID 6 as a data pool
Problem:
The RAID 6 array disappeared after a reboot. I was using the system as usual, and after rebooting I could no longer access my data.
All HDDs seem to be recognized; "only" the data RAID is missing.
I must admit this is the first time I've set up my own NAS, reusing older hardware instead of buying a completely new solution.
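If it helps, I can also pull the md-related kernel messages from the last boot, e.g.:

journalctl -k | grep -i -E 'md|raid'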
Thanks in advance for any help!