Hello,
I recently had a disk failure on my RAID5 while I was on holiday, with no spare time to deal with it. From that moment the array was logically in degraded mode, but it kept running fine on 3 disks until a second event occurred yesterday:
This is an automatically generated mail message from mdadm
running on OMV
A Fail event had been detected on md device /dev/md/XPENOLOGY:2.
It could be related to component device /dev/sdb5.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md2 : active raid5 sdb5[0](F) sdc5[1] sdd[3]
17566977984 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/2] [_U_U]
unused devices: <none>
I don't really understand why I received this message, as the device looks OK: the HP Gen8 RAID controller does not detect any error on it and the SMART tests are good. At that point I turned off the server, physically unplugged the first drive (the one that was faulty), and restarted it. But now my RAID5 is marked as inactive and is no longer visible in OMV, although the three remaining physical disks are present in the GUI.
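For what it's worth, this is roughly how I checked SMART on the members (assuming smartmontools is installed; /dev/sdb shown just as an example):

smartctl -H /dev/sdb      # overall health self-assessment
smartctl -a /dev/sdb      # full attributes and self-test log
smartctl -t long /dev/sdb # start an extended self-test in the background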
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdc5[1](S) sdd[3](S) sdb[4](S)
17576636912 blocks super 1.2
unused devices: <none>
blkid
/dev/sda1: UUID="98a223a7-01be-4910-871e-42449f27132e" TYPE="ext4" PARTUUID="0009b7e4-01"
/dev/sda5: UUID="0a6e6abe-bd60-4323-b742-c8ae0d8ef1c2" TYPE="swap" PARTUUID="0009b7e4-05"
/dev/sdb: UUID="394b8163-356b-6262-52f6-72d24c3bc33f" UUID_SUB="57584ed1-3398-6977-2e38-530ef2b968e2" LABEL="XPENOLOGY:2" TYPE="linux_raid_member"
/dev/sdc1: UUID="e2ec8542-ea1a-e93a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="3daedda2-ab78-4a06-b185-c980acdfe091"
/dev/sdc2: UUID="d5d5bcfe-89b5-e8c4-2cf3-e5bf6a1edd70" TYPE="linux_raid_member" PARTUUID="42e1c209-98ba-48ee-9afc-247152902ac1"
/dev/sdc5: UUID="394b8163-356b-6262-52f6-72d24c3bc33f" UUID_SUB="fe533fdf-afea-192c-5fd5-11c6102a15ca" LABEL="XPENOLOGY:2" TYPE="linux_raid_member" PARTUUID="dec4d9c2-d2ec-4513-b951-1bb8771fc52f"
/dev/sr0: UUID="2015-06-29-06-52-36-00" LABEL="OpenMediaVault" TYPE="iso9660" PTUUID="78a03a04" PTTYPE="dos"
/dev/sdd: UUID="394b8163-356b-6262-52f6-72d24c3bc33f" UUID_SUB="3db782f1-473c-081f-d8dc-6213c974b7db" LABEL="XPENOLOGY:2" TYPE="linux_raid_member"
fdisk -l | grep "Disk "
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Partition 2 does not start on physical sector boundary.
Partition 5 does not start on physical sector boundary.
Disk /dev/sda: 16 GiB, 17179869184 bytes, 33554432 sectors
Disk identifier: 0x0009b7e4
Disk /dev/sdb: 5,5 TiB, 6001141571584 bytes, 11720979632 sectors
Disk identifier: ABE4B6E0-89EC-489A-9BEA-6B1D38D42C8B
Partition 2 does not start on physical sector boundary.
Partition 5 does not start on physical sector boundary.
Disk /dev/sdc: 5,5 TiB, 6001141571584 bytes, 11720979632 sectors
Disk identifier: E26F72F4-EF94-48E8-9022-404A6317EB4D
Disk /dev/sdd: 5,5 TiB, 6001141571584 bytes, 11720979632 sectors
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md/XPENOLOGY:2 metadata=1.2 spares=0 name=XPENOLOGY:2 UUID=394b8163:356b6262:52f672d2:4c3bc33f
# instruct the monitoring daemon where to send mail alerts
MAILADDR someaddress@somedomain.com
MAILFROM root
mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0xa
Array UUID : 394b8163:356b6262:52f672d2:4c3bc33f
Name : XPENOLOGY:2
Creation Time : Sun Jun 5 13:29:00 2016
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 11720977584 (5589.00 GiB 6001.14 GB)
Array Size : 17566977984 (16753.18 GiB 17988.59 GB)
Used Dev Size : 11711318656 (5584.39 GiB 5996.20 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Recovery Offset : 163875928 sectors
Unused Space : before=1960 sectors, after=9658928 sectors
State : clean
Device UUID : 57584ed1:33986977:2e38530e:f2b968e2
Update Time : Wed Aug 7 23:20:21 2019
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : b1a144c8 - correct
Events : 26486
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdc5
/dev/sdc5:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 394b8163:356b6262:52f672d2:4c3bc33f
Name : XPENOLOGY:2
Creation Time : Sun Jun 5 13:29:00 2016
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 11711318656 (5584.39 GiB 5996.20 GB)
Array Size : 17566977984 (16753.18 GiB 17988.59 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=0 sectors
State : clean
Device UUID : fe533fdf:afea192c:5fd511c6:102a15ca
Update Time : Mon Aug 12 18:57:28 2019
Checksum : 9560b95 - correct
Events : 26567
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .A.A ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 394b8163:356b6262:52f672d2:4c3bc33f
Name : XPENOLOGY:2
Creation Time : Sun Jun 5 13:29:00 2016
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 11720977584 (5589.00 GiB 6001.14 GB)
Array Size : 17566977984 (16753.18 GiB 17988.59 GB)
Used Dev Size : 11711318656 (5584.39 GiB 5996.20 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1960 sectors, after=9658928 sectors
State : clean
Device UUID : 3db782f1:473c081f:d8dc6213:c974b7db
Update Time : Mon Aug 12 18:57:28 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 6e16750f - correct
Events : 26567
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : .A.A ('A' == active, '.' == missing, 'R' == replacing)
My configuration:
My RAID5 contains four 6 TB drives; one of them is faulty and has been physically removed (formerly /dev/sde).
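If I read the --examine output above correctly, the slot mapping of the remaining members is:

slot 0 : missing (presumably the physically removed disk)
slot 1 : /dev/sdc5 (Events 26567)
slot 2 : /dev/sdb (Events 26486, flagged as possibly out of date)
slot 3 : /dev/sdd (Events 26567)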
What I've tried so far:
mdadm --stop /dev/md2
mdadm: stopped /dev/md2
mdadm --assemble /dev/md2 /dev/sd[bd] /dev/sdc5 --verbose --force --run
mdadm: looking for devices for /dev/md2
mdadm: /dev/sdb is identified as a member of /dev/md2, slot 2.
mdadm: /dev/sdd is identified as a member of /dev/md2, slot 3.
mdadm: /dev/sdc5 is identified as a member of /dev/md2, slot 1.
mdadm: no uptodate device for slot 0 of /dev/md2
mdadm: added /dev/sdb to /dev/md2 as 2 (possibly out of date)
mdadm: added /dev/sdd to /dev/md2 as 3
mdadm: added /dev/sdc5 to /dev/md2 as 1
mdadm: failed to RUN_ARRAY /dev/md2: Input/output error
mdadm: Not enough devices to start the array.
I thought the force mode would do the trick, especially given the small gap between the event counters (26486 vs 26567), but no luck with that.
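For quick reference, the counters can be compared in one go with something like this (same information as the full --examine dumps above):

# print the event counter, role and last update time of each remaining member
for d in /dev/sdb /dev/sdc5 /dev/sdd; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Events|Device Role|Update Time'
done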
What's the next step, then? I've read about the --assume-clean switch, which might work, but I'm not sure about it. Would it be a good idea to run mdadm --zero-superblock /dev/sdb?
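Just so I'm clear about what the assume-clean route would mean: as far as I understand it, it is a last-resort recreate of the array over the existing members, something along these lines (parameters taken from the --examine output above, slot 0 left as missing; I have NOT run this, and I understand the device order, chunk size, layout and data offset all have to match the original exactly or the data is gone):

# last resort: rewrite the superblocks but leave the data in place
# slot order must match the original roles: 0=missing, 1=sdc5, 2=sdb, 3=sdd
mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=4 \
      --metadata=1.2 --chunk=64 --layout=left-symmetric \
      missing /dev/sdc5 /dev/sdb /dev/sdd
# note: the original data offset is 2048 sectors per --examine; a newer mdadm
# may pick a different default, so that would have to be checked first

Even then I suppose I would only mount read-only and check the filesystem before writing anything. Is that the right understanding, or is there a safer option?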
Any help greatly appreciated
Mazz