Missing filesystem after RAID 5 repair

  • Hi,


    I attempted to add a new HDD to my RAID 5 setup (4 WD Red 3TB, trying to add a Seagate IronWolf 3TB). However, when I did this and tried to grow the array, I ended up in the same situation as the chap here. While I eventually got the RAID array back by re-creating it with --assume-clean, the filesystem entry is now listed as Missing, and I'm trying to find out if there's a way to get it back without having to unmount all the shares that I have set up. I have read something about ext4 labels, but I'm not sure if that's what's needed here.
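
    For reference, the equivalent mdadm grow sequence looks roughly like this (a sketch only; /dev/sdf is an assumed device name for the new IronWolf):

    Code
    # add the new disk as a spare, then reshape the array across five devices
    ➜  ~ mdadm --add /dev/md127 /dev/sdf
    ➜  ~ mdadm --grow /dev/md127 --raid-devices=5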


    I've tried mounting it manually, but get the following:


    Code
    ➜  ~ mount -t ext4  /dev/md/spine:NAS /srv/dev-disk-by-label-Trove
    mount: /srv/dev-disk-by-label-Trove: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
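
    (The kernel log right after a failed mount usually carries the actual ext4 complaint; a general check like this should show it:)

    Code
    ➜  ~ dmesg | tail -n 20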


    Some hopefully useful screenshots and info follow:




    If anyone has any advice they could share, I would be eternally grateful. I have off-site backups if the array is toast, although it seems to be looking OK (at least from an mdadm perspective). I won't be able to tell for sure, though, until I can mount it properly.


    Thanks,


    Peter.

    • Official Post

    Try mdadm --readwrite /dev/md127

    While I eventually got the RAID array back by re-creating it with --assume-clean

    =O That is an absolute last resort and should never be used until all other options have been exhausted. Why? It can cause filesystem loss and, subsequently, data loss.
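
    If it ever comes to that again, the safer first steps would generally be to examine the member superblocks and try a forced assemble before anything destructive, roughly (a sketch, assuming the same four member disks):

    Code
    mdadm --examine /dev/sd[bcde]                       # inspect the existing raid superblocks
    mdadm --assemble --force /dev/md127 /dev/sd[bcde]   # try to re-assemble without re-creating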


    Growing an array in OMV should be done from the GUI before going down the CLI route.

  • I did originally try using the GUI to do it, but ended up in the same situation as the article I linked above. Not ideal, as you pointed out, but I didn't think to ask here first.


    Weirdly, the --readwrite option doesn't survive a reboot. When the machine comes back (filesystem still missing), I see

    Code
    ➜  ~ cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active (auto-read-only) raid5 sdc[2] sdb[0] sdd[1] sde[3]
          8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    again...
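
    (From what I've read, md arrays often start in this auto-read-only state at boot until something writes to them; the md_mod start_ro parameter seems to be what controls it, though I haven't changed it:)

    Code
    ➜  ~ cat /sys/module/md_mod/parameters/start_ro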

  • :) Weirdly, I didn't suggest a reboot. As blkid returns something odd, what's the output of mdadm --detail /dev/md127?

    Fair :) Although I was myself curious if that status would survive.


    • Official Post

    Although I was myself curious if that status would survive

    Fair enough, but that can change the drive references. If you look at the blkid output in your first post, it has /dev/sd[bcdf] as being a 'Linux Raid Member', and it shows /dev/sde as /dev/sde1, which denotes a partition.


    Now the above output has /dev/sd[bcde] as part of the RAID, which confirms your last cat /proc/mdstat.


    If you do mdadm --readwrite /dev/md127 (do not reboot), what's the output of cat /proc/mdstat?

  • Fair enough, but that can change the drive references. If you look at the blkid output in your first post, it has /dev/sd[bcdf] as being a 'Linux Raid Member', and it shows /dev/sde as /dev/sde1, which denotes a partition.


    Now the above output has /dev/sd[bcde] as part of the RAID, which confirms your last cat /proc/mdstat.


    If you do mdadm --readwrite /dev/md127 (do not reboot), what's the output of cat /proc/mdstat?

    The read-only status goes away after that:

    Code
    ➜  ~ mdadm --readwrite /dev/md127
    ➜  ~ cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdc[2] sdb[0] sdd[1] sde[3]
          8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    No reboots performed :)

  • :thumbup: So that solves the auto-read-only, as long as you don't reboot. I take it the RAID is still showing as Missing under File Systems?

    Unfortunately yes. Seems a bit odd! I'd read somewhere about a CLI OMV tool that rewrites fstab, etc.; I wasn't sure if that was needed somehow, but didn't try it out... so open to ideas!

    • Official Post

    CLI OMV tool that rewrites fstab

    Not a tool, but a command. If you look at your blkid output, it shows no file system on any of the drives within the array, nor does it display the RAID. If you run that again, having resolved the auto-read-only, does it show?
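
    (For what it's worth, the command usually meant by that is, if I remember right, the one that redeploys fstab from OMV's database on OMV5 and later, though it won't help while blkid sees no file system on the array:)

    Code
    omv-salt deploy run fstab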

  • Not a tool, but a command. If you look at your blkid output, it shows no file system on any of the drives within the array, nor does it display the RAID. If you run that again, having resolved the auto-read-only, does it show?

    Ah, I was wondering what was wrong, but now that you point it out...

    Code
    ➜  ~ blkid
    /dev/sda1: UUID="2177df42-ad0f-4f54-9c5f-41256e4ce912" TYPE="ext4" PARTUUID="1fb25b71-01"
    /dev/sda5: UUID="8afb3686-48d4-44d2-9d5a-85036f35ff80" TYPE="swap" PARTUUID="1fb25b71-05"
    /dev/sdc: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="cfebfaeb-99ac-8e5b-ecec-9cb8b826eb2e" LABEL="spine:NAS" TYPE="linux_raid_member"
    /dev/sdd: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="52748001-bf47-d31f-f20d-f81f5d4c6771" LABEL="spine:NAS" TYPE="linux_raid_member"
    /dev/sde: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="5a40a2e3-6427-b4ee-ad2b-100a919e11dd" LABEL="spine:NAS" TYPE="linux_raid_member"
    /dev/sdb: UUID="2d9841d7-6ef0-98f7-3548-d7cc683ce3c2" UUID_SUB="f18d7a57-45e7-9b1c-5111-4ee8f46eee01" LABEL="spine:NAS" TYPE="linux_raid_member"
    /dev/sdf1: PARTUUID="926ce136-9e7a-ba48-9546-51faf7c0da1d"
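
    I suppose the next thing to check is the array device itself, to see whether any file system signature is detected on it at all, e.g.:

    Code
    ➜  ~ blkid /dev/md127
    ➜  ~ wipefs /dev/md127   # without options wipefs only lists signatures, it does not wipe anything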

    If the array's toast, it's not the end of the world, although I'll have a few grumpy late nights getting everything back (and recreating the stuff that wasn't backed up because it's lower priority)...

    • Official Post

    If the array's toast, it's not the end of the world

    It might be, but good for you for having a backup. The only other option to try is fsck, to check and repair the file system:


    fsck /dev/md127; you will have to answer yes to any repair.
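
    If the primary superblock turns out to be damaged, it can also be worth pointing e2fsck at one of the backups; something along these lines (a sketch only, the backup offsets depend on how the file system was created):

    Code
    mke2fs -n /dev/md127          # -n is a dry run: it only prints where the superblock backups would sit
    e2fsck -b 32768 /dev/md127    # retry the check using a backup superblock (32768 is just a common example)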

  • It might be, but good for you for having a backup. The only other option to try is fsck, to check and repair the file system:


    fsck /dev/md127; you will have to answer yes to any repair.

    OK, totally in unknown territory here:

    I suspect this isn't good...

    • Official Post

    Same output for both block addresses

    Then it's not recoverable. Sorry, there's nothing else I can think of to try; this looks like a rebuild :(


    I would suggest a clean install rather than attempting to remove SMB, shares, RAID, etc.; it will just get messy.

  • Then it's not recoverable. Sorry, there's nothing else I can think of to try; this looks like a rebuild :(


    I would suggest a clean install rather than attempting to remove SMB, shares, RAID, etc.; it will just get messy.

    All good, guessed as much. Thanks for your assistance!


    Curious though - why is a clean install easier than going through the UI and removing shares and filesystems? Is the messiness something that will stick around later?

    • Official Post

    Curious though - why is a clean install easier than going through the UI and removing shares and filesystems? Is the messiness something that will stick around later

    You will have to delete the SMB shares, then delete the shared folders, break the array (this might be possible by just deleting it, rather than removing one drive, then another, before deleting it), and delete the file system in relation to the array. In essence that should work; you would then have to wipe each drive, create the array, create the file system, etc.
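
    At the CLI, that wipe-and-recreate part boils down to something like this (a rough sketch only; double-check the device names before wiping anything, and the label is simply the old one re-used):

    Code
    mdadm --stop /dev/md127                  # stop the old array
    wipefs -a /dev/sd[bcde]                  # destructive: clears the old raid signatures
    mdadm --create /dev/md127 --level=5 --raid-devices=4 /dev/sd[bcde]
    mkfs.ext4 -L Trove /dev/md127            # new file system, re-using the old label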


    When I moved from OMV4 to 5, instead of attempting an upgrade I did a clean install; it took me less than a day to get back up and running, and I changed to using ZFS :)

  • You will have to delete the SMB shares, then delete the shared folders, break the array (this might be possible by just deleting it, rather than removing one drive, then another, before deleting it), and delete the file system in relation to the array. In essence that should work; you would then have to wipe each drive, create the array, create the file system, etc.


    When I moved from OMV4 to 5, instead of attempting an upgrade I did a clean install; it took me less than a day to get back up and running, and I changed to using ZFS :)

    Hah, I did the same, although given this isn't a major upgrade, there's a bunch of stuff I'd rather not spend the time setting up again; I've got little enough time as it is! I'll try it the manual way, and if I get stuck, the nuke-and-restart option is always there...


    Thanks for your assistance, much appreciated!
