File Systems Missing - RAID Arrays Missing

  • This NAS is my backup NAS and life's gotten in the way, so I haven't done anything with it in a few weeks. I updated it through the OMV web UI, but when I went to access the shares, no folders came up. I rebooted the server, but I still couldn't access any data.


    I then saw this and began to dig deeper:



    It looks like the arrays can't find all of their drives, but I'm unsure why. At first I thought it was because I had a RAID50, but that's worked fine for some months, and the Veeam array is separate, so it's not specifically due to RAID50.


    When trying the methods HERE, only 1 or 2 disks from the arrays (with the exception of /dev/md0) could be found, so the arrays won't mount. At least 1 or 2 still have RAID metadata on them...


    What are my next steps in trying to get my arrays back online?

  • Adding additional supporting info:

    • Official Post

    I'm not a RAID expert. That is especially true when looking at mdadm RAID50. Perhaps geaves might chime in.
    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS zmirrors or setting up the equivalent of RAID50 in ZFS or BTRFS.)


    - Are you using OMV5?
    - Also, what do you mean by you "upgraded" OMV through the GUI?

    - Was a kernel upgrade part of the upgrade, or did you notice?

    - Did you backup your boot disk before upgrading?

    • Official Post

    Wow!! I didn't see this... RAID 50, I had to google that.


    This md0 : active (auto-read-only) you should be able to correct with mdadm --readwrite /dev/md0. If that complains, stop the array with mdadm --stop /dev/md0, then run it again.
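
    As a rough sketch of that first step (check the state first; nothing here is destructive):

    cat /proc/mdstat              # confirm md0 still shows active (auto-read-only)
    mdadm --readwrite /dev/md0    # switch it back to normal read-write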


    The RAIDs are displaying as inactive, so:


    stop the array: mdadm --stop /dev/md? where ? is the RAID reference, e.g. 0, 1, 2, 3, 127, etc.


    then reassemble:


    mdadm --assemble --force --verbose /dev/md? /dev/sd[abcdef]


    ? = RAID reference as above; /dev/sd[abcdef] are the drive references for the array.
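
    As a made-up example for an inactive /dev/md1 built from six whole drives sdb through sdg, the whole sequence would look something like this (your device letters will differ):

    mdadm --stop /dev/md1
    mdadm --assemble --force --verbose /dev/md1 /dev/sd[bcdefg]
    cat /proc/mdstat              # check that the array shows as active again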


    All of your errors are usually associated with a power outage or with a drive being pulled from an array


    I'm about to sign off, but if you reply I'll check in the morning.

    • Official Post

    Wow!! I didn't see this... RAID 50, I had to google that.

    "Wow" was my first reaction. From the detail, it appears that 2 each RAID5's are involved that are RAID0'ed together. That would create a level of complexity that I've never dealt with.

    I think I'd skip the RAID0 level and use two discrete RAID5's, or maybe mergerfs from a common mount point. (Or ZFS.)

    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS zmirrors ............)

    eptesicus - I was wrong in the assumption above. I assumed another RAID type.

    • Official Post

    That would create a level of complexity that I've never dealt with.

    No, nor me. According to what I have read, it operates by striping RAID 0 across multiple RAID 5 sets, and I'm surmising one would deal with the RAID 5 should a drive fail.

  • Sorry I should've checked in on this again... I pretty much just gave up and got distracted with other stuff.


    I'm not a RAID expert. That is especially true when looking at mdadm RAID50. Perhaps geaves might chime in.
    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS zmirrors or setting up the equivalent of RAID50 in ZFS or BTRFS.)


    - Are you using OMV5?
    - Also, what do you mean by you "upgraded" OMV through the GUI?

    - Was a kernel upgrade part of the upgrade, or did you notice?

    - Did you backup your boot disk before upgrading?


    With my RAID50 setup, I have 3x RAID5 arrays striped, so I can lose up to 3 disks at a time, provided it's no more than 1 disk per RAID5.
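
    (For reference, the nested layout can be checked with something like the following; the device names here are just placeholders:)

    cat /proc/mdstat             # lists every md device and its member drives
    mdadm --detail /dev/mdX      # level, members and state for a single array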


    -Yep, OMV5.

    -No upgrades. I did system updates through the GUI, that's all.

    -I didn't notice the kernel before the updates.

    -No backups... *SMH* Funny, considering this is my backup box.


    mdadm --readwrite /dev/md:

    root@vii-nas02:~# mdadm --readwrite /dev/md0

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    root@vii-nas02:~# mdadm --readwrite /dev/md1

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: /dev/md1 does not appear to be active.

    root@vii-nas02:~# mdadm --readwrite /dev/md2

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: /dev/md2 does not appear to be active.

    root@vii-nas02:~# mdadm --readwrite /dev/md3

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: /dev/md3 does not appear to be active.


    Stopping the arrays works except for md0...

    root@vii-nas02:~# mdadm --stop /dev/md0

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?

    root@vii-nas02:~# mdadm --stop /dev/md1

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: stopped /dev/md1

    root@vii-nas02:~# mdadm --stop /dev/md2

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: stopped /dev/md2

    root@vii-nas02:~# mdadm --stop /dev/md3

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: stopped /dev/md3

    root@vii-nas02:~# mdadm --stop /dev/md1

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: error opening /dev/md1: No such file or directory

    root@vii-nas02:~# mdadm --stop /dev/md0

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY



    When trying to repair after stopping the arrays that did indeed stop, I get the INACTIVE-ARRAY message again...

    root@vii-nas02:~# mdadm --readwrite /dev/md1

    mdadm: ARRAY line /dev/md/ddf0 has no identity information.

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: error opening /dev/md1: No such file or directory


    This is above my skillset with mdadm, unfortunately... What's surprising is that I know RAID50 isn't an OMV feature, but the RAID level for /dev/md3 is... So I have no idea what the issue really is here.

    • Official Post

    This is above my skillset with mdadm, unfortunately... What's surprising is that I know RAID50 isn't an OMV feature, but the RAID level for /dev/md3 is... So I have no idea what the issue really is here.

    =O ...and you're hoping we can help; I don't know whether to laugh or cry.


    To do anything, the arrays have to be stopped. You cannot stop /dev/md0 because it's active and in a read-write state, which means it's mounted; that's why you get 'Cannot get exclusive access' when running the stop command.

    Running this -> mdadm --readwrite /dev/md: will run the command across all RAID arrays; that's about as helpful as a chocolate teapot.


    Why run mdadm --readwrite /dev/md1? That array is INACTIVE: not working, gone to sleep.


    What I suggested in my post is the usual way to approach --readwrite and inactive arrays, not to apply a blanket command and see what happens.
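
    For md0 specifically, a minimal sketch would be (assuming a filesystem on md0 is what's holding it; adjust to whatever is actually mounted):

    umount /dev/md0          # free whatever is mounted from md0 first
    mdadm --stop /dev/md0    # now the stop should get exclusive access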


    Oh man! Ok... I was able to get the /dev/md1 array unmounted, and I performed the steps you mentioned earlier and have gotten my RAID50 showing up now...


    I need to bring /dev/md3 online, but it has sdaa, sdab, sdy, and sdz in it... How do I safely perform the below with the sdaa and sdab 4-letter drives?

    mdadm --assemble --force --verbose /dev/md? /dev/sd[abcdef]

    • Official Post

    I need to bring /dev/md3 online, but it has sdaa, sdab, sdy, and sdz in it... How do I safely perform the below with the sdaa and sdab 4-letter drives?

    :) I was waiting for that. Best guess is sdaa and sdab are partitions on sda; considering OMV uses only complete drives when creating mdadm arrays, this was probably achieved via the CLI.


    This, /dev/sd[abcdef], is a simple way of adding the drive references; it beats typing /dev/sdaa /dev/sdab etc. for each individual drive.
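
    The same shorthand still works with two-letter names; you just split the pattern or list the drives out in full. Treat this as a sketch (the letters match the drives you mentioned above):

    mdadm --assemble --force --verbose /dev/md3 /dev/sd[yz] /dev/sda[ab]
    mdadm --assemble --force --verbose /dev/md3 /dev/sdy /dev/sdz /dev/sdaa /dev/sdab   # equivalent, written out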

    • Official Post

    While I don't know how to help you - the following is FYI:
    ___________________________________________________________

    -No backups... *SMH* Funny, considering this is my backup box.

    I still back up the OS, even on my backup servers. It's cheap and easy if using a USB drive to boot. (And there are other methods for easily backing up a hard drive or SSD.)
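
    As one rough example (device name and output path are placeholders, and it should be done from a rescue/live environment or with the drive offline), a boot drive can be imaged with dd:

    dd if=/dev/sdX of=/path/to/backup/omv-boot.img bs=4M status=progress conv=fsync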

    With my RAID50 setup, I have 3x RAID5 arrays striped, so I can lose up to 3 disks at a time, provided it's no more than 1 disk per RAID5.

    You can lose one (1) disk in each array. The second you lose 2 drives in one array, it's over. (With 3 each RAID5 arrays under RAID0, you have 3 times the likelihood of this possibility occurring.) Or if something goes wrong with the top level RAID0 array, like a superblock problem, it's over.

    -I didn't notice the kernel before the updates.

    I don't use mdadm RAID:
    I mentioned this because mdadm RAID is part of the kernel and it seems that there may have been a change to mdadm RAID in a recent kernel update.
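
    If you want to see what the box is actually running after the updates, something like this will show it (a quick sketch; output will differ per system):

    uname -r                      # current kernel version
    mdadm --version               # mdadm userspace tool version
    dpkg -l | grep linux-image    # kernel packages installed on Debian/OMV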

    • Official Post

    You can lose one (1) disk in each array. The second you lose 2 drives in one array, it's over. (With 3 each RAID5 arrays under RAID0, you have 3 times the likelihood of this possibility occurring.) Or if something goes wrong with the top level RAID0 array, like a superblock problem, it's over.

    This is definitely 'squeaky bum' territory

    :) I was waiting for that. Best guess is sdaa and sdab are partitions on sda; considering OMV uses only complete drives when creating mdadm arrays, this was probably achieved via the CLI.


    This, /dev/sd[abcdef], is a simple way of adding the drive references; it beats typing /dev/sdaa /dev/sdab etc. for each individual drive.

    mdadm --assemble --force --verbose /dev/md3 /dev/sdy /dev/sdz /dev/sdaa /dev/sdab gets me going! Thank you!
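
    (For anyone finding this later, the result can be sanity-checked with something like:)

    cat /proc/mdstat            # md3 should now show as active with all four members
    mdadm --detail /dev/md3     # level, member drives and sync status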


    While I don't know how to help you - the following is FYI:
    ___________________________________________________________

    I still back up the OS, even on my backup servers. It's cheap and easy if using a USB drive to boot. (And there are other methods for easily backing up a hard drive or SSD.)

    You can lose one (1) disk in each array. The second you lose 2 drives in one array, it's over. (With 3 each RAID5 arrays under RAID0, you have 3 times the likelihood of this possibility occurring.) Or if something goes wrong with the top level RAID0 array, like a superblock problem, it's over.

    I don't use mdadm RAID:
    I mentioned this because mdadm RAID is part of the kernel and it seems that there may have been a change to mdadm RAID in a recent kernel update.


    I run Veeam at my home for VM backups, so I'll see if I can get it to back up my NAS' OS SSDs.


    Regarding RAID5, I'm well aware of the risk of losing at most 1 drive per RAID5 array before SHTF. My primary NAS runs ZFS with more redundancy, and this OMV server provides backup only. I'm taking on more risk going with RAID50 in lieu of RAID60, and I'm fine with that.

    • Official Post

    First note that, when someone posts, the running assumption is that their OMV is their one and only server (no backup). While that's a sad state of affairs, that's generally the way it is over 90% of the time.

    My primary NAS runs ZFS with more redundancy, and this OMV server provides backup only.

    When trying new approaches, the place to do it is on a backup server. (I have two backups with one of them off and cold.)

    While I'm not a fan of mdadm RAID, there's nothing wrong with it if one understands the considerations and risks. In your setup, the only thing I find questionable is the nested arrays with RAID0 at the top (creating RAID50). Since RAID5 and RAID0's disk I/O boost is lost to the network bottleneck, I'd consider another way to aggregate the RAID5 arrays.

    The mergerfs plugin (it's called the unionfilesystems plugin) would allow you to aggregate your 3 each RAID5 arrays under a common mount point without losing everything if the RAID0 layer failed. Further, if one of the RAID5 arrays failed (with two disk failures), the data on the two remaining RAID5 arrays would still be there.


    On the other hand, mergerfs has its own considerations. Understanding its storage policies is key to using it in a way that's easy to recover from, in the event of losing one RAID5 array.
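
    Purely as an illustration (the unionfilesystems plugin sets this up through the GUI; the mount points and the create policy below are assumptions, not your actual paths), an /etc/fstab entry pooling three array mount points with mergerfs looks roughly like this:

    /srv/raid5a:/srv/raid5b:/srv/raid5c  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0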
