So my 4-disk RAID 5 is degraded (state: clean, degraded), and I can see the 4th disk in OMV, but I just cannot add it back to the array to recover it.
I saw that I needed to post this info, so here goes.
- cat /proc/mdstat
gives:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid5 sda[1] sdd[3] sdb[0]
8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
bitmap: 5/22 pages [20KB], 65536KB chunk
unused devices: <none>
- blkid
gives:
/dev/sda: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="2b20e8a7-125f-091b-5dd7-16b528eebeb6" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
/dev/md127: UUID="1db1716f-6925-4055-b177-90ec77c59e66" TYPE="ext4"
/dev/sdb: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="aff6fa5a-2158-8b2b-7dfc-0e4ffbfbba79" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
/dev/sdc: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="60f1906c-95c7-70f0-30d0-e561ad3c27c8" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
/dev/sdd: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="b63fb25d-b5a1-f3b2-9938-8644fcfb22cd" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
/dev/sde1: UUID="b7f84e5e-0dd6-4916-9523-7efa28fda8db" TYPE="ext4" PARTUUID="9b344873-01"
/dev/sde5: UUID="e4455646-7655-432f-afc1-f660ec01d150" TYPE="swap" PARTUUID="9b344873-05"
- fdisk -l | grep "Disk"
gives (I grepped only for the problem disk):
fdisk -l | grep sdc
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
- cat /etc/mdadm/mdadm.conf
gives:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
- mdadm --detail --scan --verbose
gives:
ARRAY /dev/md/NAS-*NAME*:NAS level=raid5 num-devices=4 metadata=1.2 name=NAS-*NAME*:NAS UUID=a6af1c8f:c3642be9:925785ac:37b4f7d0
devices=/dev/sda,/dev/sdb,/dev/sdd
Disk sdc is the issue here: blkid still sees it as a linux_raid_member, but mdstat shows it missing from the array ([4/3] [UU_U]). I even did a complete fresh install and have the same issue. Hoping for your thoughts.
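For context, this is the recovery sequence I would expect to work from the mdadm man page, using the device names from the output above. I'm posting it as a sketch of what I think should happen, not something I've verified; I'm not sure which step is failing for me:

```shell
# Check whether sdc's RAID superblock still looks sane
mdadm --examine /dev/sdc

# mdstat shows the array assembled auto-read-only; switch it to
# read-write first, otherwise a rebuild cannot start
mdadm --readwrite /dev/md127

# Re-add the disk (mdadm falls back to a plain --add if a
# re-add is not possible)
mdadm --manage /dev/md127 --re-add /dev/sdc

# Watch the rebuild progress
cat /proc/mdstat
```

Note the `--readwrite` step: since md127 is `active (auto-read-only)`, my understanding is that a resync won't begin until the array leaves read-only mode.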