Hi all,
my RAID5 array is missing. It happened yesterday evening while OMV was operating as a file share, without any updates or config changes having been made in the hours before.
Suddenly the file shares mounted on the clients were gone, and on the OMV dashboard there is now a red NFS warning; in the file system view, where /dev/md0 should be, there are only dashes "-", now flagged red as "Missing".
Here is the required output that I should provide with a missing-RAID question. (And yes, I have a full backup from last week on a 6 TB USB drive.)
1. # cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[2](S)
1953382488 blocks super 1.2
#####################################################################################
2. # blkid
/dev/nvme0n1p3: UUID="1b2ab52e-35e1-4656-80be-8d0d89f40d5d" TYPE="swap" PARTUUID="af84516d-3006-4469-9027-6afc159548d0"
/dev/nvme0n1p1: UUID="0108-F247" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0932f569-f97b-41c8-a057-01014d98739a"
/dev/nvme0n1p4: UUID="abaa68f2-c099-48b7-afe2-ffbbd0a9fd4a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="81b95980-7ffe-48f5-9783-460c31ddd3d3"
/dev/nvme0n1p2: UUID="d81e789f-673f-4d5e-bdf2-a8dfe179fb3f" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c0e0999d-5b0b-4536-b69d-6d7cf626089b"
/dev/sdd: UUID="e11088a3-db91-43ca-6aa7-b37657c7bd25" UUID_SUB="5508ef1b-0adc-d04a-ec92-65fa85c86412" LABEL="omv:0" TYPE="linux_raid_member"
/dev/sde1: UUID="6aa63a65-5ecb-483f-b9dd-e5f2f997cca5" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="USBBackup" PARTUUID="763a3310-20e9-4001-8065-66923ecc8b71"
/dev/sda: UUID="e11088a3-db91-43ca-6aa7-b37657c7bd25" UUID_SUB="130be59b-95d8-d371-98e6-63750cb31812" LABEL="omv:0" TYPE="linux_raid_member"
/dev/sdf: UUID="e11088a3-db91-43ca-6aa7-b37657c7bd25" UUID_SUB="939579c3-7603-2a90-7a93-3c791e0bd110" LABEL="omv:0" TYPE="linux_raid_member"
/dev/sdb: UUID="e11088a3-db91-43ca-6aa7-b37657c7bd25" UUID_SUB="92b0cb4b-1ff5-f295-0a8c-4820c8a7e2b0" LABEL="omv:0" TYPE="linux_raid_member"
#####################################################################################
3. # fdisk -l | grep "Disk "
Disk /dev/nvme0n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: Samsung SSD 970 EVO Plus 250GB
Disk identifier: 66F425A7-ED7C-4181-A162-25E27019B8F4
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sdd: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sde: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: HD Quattro 3.0
Disk identifier: 374F88A3-0BD8-4EE1-8802-A64683E6D8CE
Disk /dev/sdf: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
#####################################################################################
4. # cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR xxx.yyy@something
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=omv:MyRaid5 UUID=13bae22b:52bc7fc9:93271136:cc024480
#####################################################################################
5. # mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=omv:0 UUID=e11088a3:db9143ca:6aa7b376:57c7bd25
devices=/dev/sdb
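One thing that strikes me when comparing my own outputs above: the UUID in the ARRAY line of mdadm.conf (name=omv:MyRaid5) is not the same as the UUID that blkid and mdadm --detail --scan now report for the member disks (name=omv:0). Just to illustrate what I mean, here is a trivial comparison of the two values, copied by hand from my pasted logs (not re-read from the disks):

```shell
# UUID from the ARRAY line in /etc/mdadm/mdadm.conf (my log above)
conf_uuid="13bae22b:52bc7fc9:93271136:cc024480"
# UUID reported by mdadm --detail --scan for the inactive /dev/md0 (my log above)
scan_uuid="e11088a3:db9143ca:6aa7b376:57c7bd25"

if [ "$conf_uuid" = "$scan_uuid" ]; then
    echo "UUIDs match"
else
    echo "UUID MISMATCH between mdadm.conf and the live array"
fi
```

So whatever /dev/md0 is now, it does not carry the UUID that my mdadm.conf expects.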
I tried the following, but I do not really understand what it does... (I hope it didn't make things worse):
root@omv:~# mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=5 missing /dev/sdd /dev/sdb /dev/sda /dev/sdf
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdd appears to be part of a raid array:
level=raid5 devices=5 ctime=Wed Dec 18 23:12:28 2024
mdadm: /dev/sdb appears to be part of a raid array:
level=raid5 devices=5 ctime=Wed Dec 18 23:12:28 2024
mdadm: /dev/sda appears to be part of a raid array:
level=raid5 devices=5 ctime=Wed Dec 18 23:12:28 2024
mdadm: /dev/sdf appears to be part of a raid array:
level=raid5 devices=5 ctime=Wed Dec 18 23:12:28 2024
mdadm: size set to 1953382400K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array?
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
###################################################################################
root@omv:~# fsck -v /dev/md0
fsck from util-linux 2.38.1
e2fsck 1.47.0 (5-Feb-2023)
fsck.ext2: Invalid argument while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>