I have four drives in a RAID array, and recently one of the drives has gone missing in OMV. I no longer have access to the filesystem, and the RAID array no longer shows up in OMV either.
My setup is 4 HGST SAS drives running into an LSI SAS9201-8i controller. As a troubleshooting step to test for faulty cables or a faulty controller, I reversed the order of the cables plugged into the drives. I figured that if one of the cables was bad I'd see a different drive go missing, but all of the drives' serial numbers stayed the same. Their /dev/ paths did change, though, when I checked in the Storage -> Disks section of OMV. I'm not sure if that's relevant.
We had a power cut one night around the time I noticed this issue. I can't say for sure if it popped up then as I wasn't actively using my NAS until a few days after the power cut.
Doing a scan for drives in Storage -> Disks gives a "communication error" with no extra details.
Below is the output of several commands I've seen suggested to run and post in these types of cases. I'm fairly technical but know very little about Linux and OMV.
Looking for any guidance on this! Thanks!
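For what it's worth, I haven't found where OMV logs the details behind that "communication error". Assuming it's the kernel or the HBA driver complaining (I believe the 9201-8i uses the mpt2sas/mpt3sas driver, but I'm not certain), I could also grab and post something like the following — the grep patterns are just my guesses at relevant keywords:

```shell
# Recent kernel messages mentioning the LSI HBA driver or SCSI/SAS errors
dmesg | grep -iE 'mpt2sas|mpt3sas|scsi|sas' | tail -n 50

# Same search via the journal, in case older messages have rotated out of dmesg
journalctl -k -b | grep -iE 'mpt2sas|mpt3sas|scsi|sas' | tail -n 50
```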
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[0](S) sdc[1](S) sda[2](S)
11720659464 blocks super 1.2
unused devices: <none>
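Since md0 shows as inactive with only three members, all marked as spares (S) and the fourth drive absent, I can also post per-disk superblock details if that would help. My understanding is that these commands are read-only; a sketch of what I'd run (device names taken from the blkid output below):

```shell
# Print the md superblock from each remaining array member (read-only)
mdadm --examine /dev/sda /dev/sdb /dev/sdc

# Summarize every array mdadm can currently detect on this system
mdadm --examine --scan --verbose
```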
blkid
/dev/sdf1: UUID="809af825-9205-44fb-af15-8ee268f3eb28" TYPE="ext4" PARTUUID="6a551e78-01"
/dev/sdf5: UUID="367ede47-636a-41ee-a50c-91e1ed1e9b9c" TYPE="swap" PARTUUID="6a551e78-05"
/dev/sde1: LABEL="Public" UUID="9ad1b12e-e18c-4d51-9439-11cb5245fc81" TYPE="ext4" PARTUUID="079114d8-1da9-4eb0-a5eb-f81480cb9ec7"
/dev/sdc: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="dd1a4324-4be3-ee22-4c43-a98be7f16dd9" LABEL="NAS.local:data" TYPE="linux_raid_member"
/dev/sdb: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="d41c4964-c59c-b1bf-7798-2a4cccecf19d" LABEL="NAS.local:data" TYPE="linux_raid_member"
/dev/sda: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="adb24629-4d07-23a7-f581-686c9c7653b6" LABEL="NAS.local:data" TYPE="linux_raid_member"
fdisk -l | grep "Disk "
Disk /dev/sdf: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: ADATA SU650
Disk identifier: 0x6a551e78
Disk /dev/sde: 978.1 GiB, 1050214588416 bytes, 2051200368 sectors
Disk model: Crucial_CT1050MX
Disk identifier: BB2878DD-93AC-4955-9002-DA86E6235F98
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HUS724040ALS640
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HUS724040ALS640
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HUS724040ALS640
cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=NAS.local:data UUID=28c73d6f:daf32b3b:6c7b9ad0:f7e5954f