Hi,
I attempted to add a new HDD to my RAID 5 array (4 × WD Red 3TB; the new disk is a Seagate IronWolf 3TB). When I tried to grow the array, I ended up in the same situation as the chap here. I eventually got the array back by re-creating it with --assume-clean, but the filesystem entry is now listed as Missing, and I'm trying to find out whether there's a way to get it back without unmounting all the shares I have set up. I've read something about ext4 labels, but I'm not sure if that's what's needed here.
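From what I've read, the ext4 label lives in the filesystem superblock and can be read or set with e2label. To make sure I understood it before touching the array, I rehearsed on a scratch image file (no root needed) — the path and the "Trove" label below are just for this experiment; on the real system the device would presumably be /dev/md127:

```shell
# Scratch-image experiment -- NOT the real array.
truncate -s 16M /tmp/trove-test.img                 # sparse throwaway image
mke2fs -q -F -t ext4 -L Trove /tmp/trove-test.img   # ext4 with volume label "Trove"
e2label /tmp/trove-test.img                         # prints the label: Trove
# If /dev/md127 still has an intact ext4 superblock, I'm guessing
# `e2label /dev/md127` would show whether the "Trove" label survived,
# and `e2label /dev/md127 Trove` would set it again.
```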
I've tried mounting it manually, but get the following:
➜ ~ mount -t ext4 /dev/md/spine:NAS /srv/dev-disk-by-label-Trove
mount: /srv/dev-disk-by-label-Trove: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
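Before doing anything destructive, I want to confirm whether an ext4 superblock is still where mount expects it (dumpe2fs -h against /dev/md127), and I've been reading about recovering from a backup superblock. As a dry run I reproduced the "bad superblock" symptom on a throwaway image and repaired it — everything here is on a scratch file, not the array, and the block-size/superblock numbers are just what mke2fs picks for this tiny image:

```shell
# Throwaway-image rehearsal of superblock recovery -- NOT the real array.
truncate -s 16M /tmp/sb-test.img
mke2fs -q -F -t ext4 -b 1024 -L Trove /tmp/sb-test.img
# Clobber the primary superblock (it sits at byte offset 1024):
dd if=/dev/zero of=/tmp/sb-test.img bs=1024 seek=1 count=1 conv=notrunc 2>/dev/null
dumpe2fs -h /tmp/sb-test.img 2>&1 | grep -i 'bad magic'   # same class of error mount gave me
# Repair from the first backup superblock (block 8193 for 1024-byte blocks):
e2fsck -y -b 8193 /tmp/sb-test.img >/dev/null 2>&1 || true
dumpe2fs -h /tmp/sb-test.img 2>/dev/null | grep 'volume name'   # label is back
```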
Some hopefully useful screenshots and info follow:
➜ ~ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdb[2] sdf[0] sdc[1] sdd[3]
8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>
➜ ~ blkid
/dev/sdb: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="cfebfaeb-99ac-8e5b-ecec-9cb8b826eb2e" LABEL="spine:NAS" TYPE="linux_raid_member"
/dev/sdd: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="5a40a2e3-6427-b4ee-ad2b-100a919e11dd" LABEL="spine:NAS" TYPE="linux_raid_member"
/dev/sdc: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="52748001-bf47-d31f-f20d-f81f5d4c6771" LABEL="spine:NAS" TYPE="linux_raid_member"
/dev/sdf: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="f18d7a57-45e7-9b1c-5111-4ee8f46eee01" LABEL="spine:NAS" TYPE="linux_raid_member"
/dev/sda1: UUID="2177df42-ad0f-4f54-9c5f-41256e4ce912" TYPE="ext4" PARTUUID="1fb25b71-01"
/dev/sda5: UUID="8afb3686-48d4-44d2-9d5a-85036f35ff80" TYPE="swap" PARTUUID="1fb25b71-05"
/dev/sde1: PARTUUID="926ce136-9e7a-ba48-9546-51faf7c0da1d"
➜ ~ fdisk -l | grep "Disk "
Partition 2 does not start on physical sector boundary.
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: ST3000VN007-2AH1
Disk identifier: E416E151-DCBD-6544-9A4F-A597D78E5093
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP1
Disk identifier: 0x1fb25b71
Disk /dev/md127: 8.2 TiB, 9001371697152 bytes, 17580804096 sectors
➜ ~ cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR foo@bar.com
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md/NAS metadata=1.2 name=spine:NAS UUID=2d9841d7:6ef098f7:3548d7cc:683ce3c2
➜ ~ mdadm --detail --scan --verbose
ARRAY /dev/md/spine:NAS level=raid5 num-devices=4 metadata=1.2 name=spine:NAS UUID=2d9841d7:6ef098f7:3548d7cc:683ce3c2
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sdf
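One thing I've noticed while gathering this: the auto-generated mdadm.conf defines the array as /dev/md/NAS, while the re-created array now reports itself as spine:NAS and got assembled as md127. I don't know whether that mismatch has anything to do with the missing filesystem entry, but if the definition needs refreshing, I assume the ARRAY line would end up matching the scan output above, i.e. something like:

```
ARRAY /dev/md/spine:NAS level=raid5 num-devices=4 metadata=1.2 name=spine:NAS UUID=2d9841d7:6ef098f7:3548d7cc:683ce3c2
```

(Presumably via the OMV UI rather than by hand, since the file warns it is auto-generated.)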
➜ ~ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=2177df42-ad0f-4f54-9c5f-41256e4ce912 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=8afb3686-48d4-44d2-9d5a-85036f35ff80 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
# >>> [openmediavault]
/dev/disk/by-label/Trove /srv/dev-disk-by-label-Trove ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]
If anyone has any advice they could share, I would be eternally grateful. I have off-site backups if the array is toast, although it seems to be OK (at least from an mdadm perspective). I won't be able to tell for sure until I can mount it properly.
Thanks,
Peter.