Hi all,
I'm on 6.0.28-3 (Shaitan). I extended my RAID-5 onto a new 4 TB HDD, with 3x 4 TB drives already in the system. During the extension, the system shut down (for some unknown reason, maybe overheating), but when I started it again, it continued the extension. When it finished (seemingly successfully), I was in a hurry and just quickly grew the file system, which worked.
Now having a closer look at the RAID, it tells me it's in the state "clean, degraded":
           Version : 1.2
     Creation Time : Sun Sep  2 04:05:15 2018
        Raid Level : raid5
        Array Size : 11720661504 (11177.69 GiB 12001.96 GB)
     Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jan 27 07:00:56 2023
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : openmediavault:RAIDAR (local to host openmediavault)
              UUID : e98b7abd:4f328c81:40a102c3:1824afcf
            Events : 79896

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       -       0       0         3      removed
Googling suggested the mdadm --add command to add the missing drive back to the array. However, I would have expected the "Recover" option in the GUI to do the same, but I cannot select any device there:
Does anyone have experience with this? Can I safely execute the mdadm --add command or do I need to do something else?
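For reference, this is the sequence I would try based on my searching (just a sketch, not something I've run yet; I'm assuming /dev/sde is the dropped disk, since blkid below still shows the array's UUID on its superblock):

```shell
# Assumption: /dev/sde is the disk that dropped out of md127 during the grow.
if [ -b /dev/sde ]; then
    # Inspect its superblock first to confirm it carries the array UUID:
    mdadm --examine /dev/sde

    # Add it back to the array; mdadm should then start a rebuild:
    mdadm /dev/md127 --add /dev/sde

    # Watch the recovery progress:
    cat /proc/mdstat
fi
```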
Here is some detailed information:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active raid5 sdc[1] sdb[0] sdd[2]
11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
bitmap: 20/30 pages [80KB], 65536KB chunk
blkid
/dev/sda1: UUID="64ae1488-3bd9-4236-8742-9ea44db6f56c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="76aa5ac0-01"
/dev/sda5: UUID="c2b0cb47-aeec-4b5a-8285-857b1c56da54" TYPE="swap" PARTUUID="76aa5ac0-05"
/dev/sdb: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="a36eadb0-2348-fb83-ec76-65c9fa5df48b" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
/dev/sdc: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="94cf7512-43e5-3957-7060-0e6cc0cdd526" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
/dev/sdd: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="f1ae8b96-55da-2541-bc00-7be870687109" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
/dev/md127: LABEL="Raidar" UUID="5d21dac9-d7ba-4831-9d29-e6d9d8de5b3b" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sde: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="e80184c3-5dc3-17b4-1f73-a6f95f5fb718" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
/dev/sdf1: UUID="b533ba9f-52ff-9d49-8092-a954a53881e4" BLOCK_SIZE="4096" TYPE="ext4" PTUUID="d433308c" PTTYPE="dos" PARTUUID="d433308c-01"
fdisk -l | grep "Disk "
Disk /dev/sda: 111,79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: 2115
Disk identifier: 0x76aa5ac0
Disk /dev/sdb: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Disk /dev/sdc: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Disk /dev/sdd: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Disk /dev/md127: 10,92 TiB, 12001957380096 bytes, 23441323008 sectors
Disk /dev/sde: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68W
Disk /dev/sdf: 114,61 GiB, 123060879360 bytes, 240353280 sectors
Disk model: SanDisk 3.2Gen1
Disk identifier: 0xd433308c
cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
[...]
# definitions of existing MD arrays
ARRAY /dev/md/openmediavault:RAIDAR metadata=1.2 name=openmediavault:RAIDAR UUID=e98b7abd:4f328c81:40a102c3:1824afcf
mdadm --detail --scan --verbose
ARRAY /dev/md/openmediavault:RAIDAR level=raid5 num-devices=4 metadata=1.2 name=openmediavault:RAIDAR UUID=e98b7abd:4f328c81:40a102c3:1824afcf
devices=/dev/sdb,/dev/sdc,/dev/sdd
Any help is appreciated, thank you.