Hi all,
After a drive failure, my RAID array is gone and my filesystem has the status 'Missing'. I've come across this thread of a user experiencing something similar. However, I would like some advice on how to proceed.
Some additional information as requested here: Degraded or missing raid array questions
cat /proc/mdstat
Code
root@blackhole:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[1](S) sde[2](S) sda[0](S)
2929891464 blocks super 1.2
unused devices: <none>
root@blackhole:~#
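All three remaining members show up as (S) and the array is inactive. If I understand the linked thread correctly, the first step would be to compare the superblocks of the members, in particular the event counters and the array state they last recorded. I assume that would look something like this (not run yet):
Code
# dump each remaining member's superblock and pick out the
# event counter and the array state it last recorded
mdadm --examine /dev/sda /dev/sdb /dev/sde | grep -E 'Events|Array State'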
blkid
Code
root@blackhole:~# blkid
/dev/sda: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="9dde3400-8912-4b48-4cae-fbc5615c0639" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
/dev/sde: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="ede58e22-3571-eff1-cea6-bf408092cc7f" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
/dev/sdb: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="ac59d1b4-66c2-fb9e-bd77-a292bf33e996" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
/dev/sdc1: UUID="a5c3c40c-49ec-485a-902e-daed29e0c8ca" TYPE="ext4" PARTUUID="283b6343-01"
/dev/sdc5: UUID="d8d2ee6f-9711-4b1b-ab64-aa899c710392" TYPE="swap" PARTUUID="283b6343-05"
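All three members report the same array UUID that also appears in the mdadm.conf further down, so the superblocks themselves still seem readable. If it helps, I can also post what the superblocks advertise as an array definition, which I believe comes from:
Code
# print the ARRAY line as reconstructed from the on-disk superblocks
mdadm --examine --scan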
fdisk -l | grep "Disk "
Code
root@blackhole:~# fdisk -l | grep "Disk "
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD1002FBYS-0
Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EFRX-68P
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-22B
Disk /dev/sdc: 29.8 GiB, 32017047552 bytes, 62533296 sectors
Disk model: TS32GSSD340K
Disk identifier: 0x283b6343
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP1
cat /etc/mdadm/mdadm.conf
Code
root@blackhole:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 spares=1 name=blackhole:blackdatafive UUID=f3d673d4:6ec122df:28177bcf:babef850
mdadm --detail --scan --verbose
Code
root@blackhole:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 name=blackhole:blackdatafive UUID=f3d673d4:6ec122df:28177bcf:babef850 devices=/dev/sda,/dev/sdb,/dev/sde
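Before doing anything with the replacement drive I would like to confirm it carries no old metadata. I assume that can be checked with:
Code
# verify the replacement drive has no leftover RAID superblock
mdadm --examine /dev/sdd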
I am running OMV 5.6.26-1 (Usul)
To replace the failed drive (which is still connected), I have added /dev/sdd.
The array failed while the system was running; there was no loss of power as far as I know.
How can I recover this RAID configuration and filesystem?
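From similar threads, my understanding is that the usual approach is to stop the inactive array, force-assemble it from the three surviving members, and only then add the replacement drive, roughly like this (I have not run any of it yet, and I may well have the steps wrong):
Code
# stop the inactive array so it can be reassembled
mdadm --stop /dev/md0
# force assembly from the three surviving members
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sde
# once the array is up (degraded), add the replacement drive
# so the rebuild can start
mdadm --add /dev/md0 /dev/sdd
# watch the rebuild progress
cat /proc/mdstat
Is this the right approach, or is there something I should check first?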