File Systems Missing - RAID Arrays Missing

  • This NAS is my backup NAS and life's gotten in the way, so I haven't done anything with it in a few weeks. I updated it through the OMV web UI, but when I went to access the shares, no folders came up. I rebooted the server, but I still couldn't access any data.


    I then saw this and began to dig deeper:



    It looks like none of the drives in the arrays can be found, but I'm unsure why. At first I thought it was because I had a RAID50, but that has worked fine for some months, and the Veeam array is separate, so it's not specifically a RAID50 issue.


    When trying the methods HERE, only one or two disks from each array (with the exception of /dev/md0) could be found, so the arrays won't mount. At least one or two of the disks still have RAID metadata on them...
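    (For reference, checking which disks still carry RAID metadata usually amounts to something like the following; the drive letters are examples only and will differ on your system:)

    cat /proc/mdstat                     # arrays the kernel currently knows about
    mdadm --examine /dev/sd[a-z]         # look for mdadm superblocks on each whole disk
    mdadm --detail --scan --verbose      # what mdadm can piece together from those superblocks
    blkid                                # filesystem and RAID signatures on each device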


    What are my next steps in trying to get my arrays back online?

  • Adding additional supporting info:

  • I'm not a RAID expert. That is especially true when looking at mdadm RAID50. Perhaps geaves might chime in.
    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS mirrors, or setting up the equivalent of RAID50 in ZFS or BTRFS.)


    - Are you using OMV5?
    - Also, what do you mean by you "upgraded" OMV through the GUI?

    - Was a kernel upgrade part of the upgrade, or did you notice?

    - Did you backup your boot disk before upgrading?

  • Wow!! I didn't see this; RAID50, I had to google that.


    This md0 : active (auto-read-only) you should be able to correct with mdadm --readwrite /dev/md0; if that complains, stop the array with mdadm --stop /dev/md0, then run it again.


    The raids are displaying as inactive, so:


    stop the array with mdadm --stop /dev/md? where ? is the raid reference, e.g. 0, 1, 2, 3, 127, etc.


    then reassemble;


    mdadm --assemble --force --verbose /dev/md? /dev/sd[abcdef]


    ? = the raid reference as above; /dev/sd[abcdef] are the drive references for the disks in that array
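    As a rough illustration of the whole sequence (the md number and drive letters below are placeholders, take the real ones from cat /proc/mdstat):

    mdadm --readwrite /dev/md0                                  # clear the auto-read-only flag on md0
    mdadm --stop /dev/md1                                       # stop an inactive array before reassembling it
    mdadm --assemble --force --verbose /dev/md1 /dev/sd[bcde]   # rebuild it from its member disks
    cat /proc/mdstat                                            # confirm the array came back active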


    All of your errors are usually associated with a power outage or with a drive being pulled from an array


    I'm about to sign off, but if you reply I'll check in the morning.

  • Wow!! I didn't see this; RAID50, I had to google that.

    "Wow" was my first reaction. From the detail, it appears that two RAID5s are involved that are RAID0'ed together. That would create a level of complexity that I've never dealt with.

    I think I'd skip the RAID0 level and use two discrete RAID5's, or maybe mergerfs from a common mount point. (Or ZFS.)

    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS mirrors ............)

    eptesicus - I was wrong in the assumption above. I assumed another RAID type.

  • Sorry I should've checked in on this again... I pretty much just gave up and got distracted with other stuff.


    I'm not a RAID expert. That is especially true when looking at mdadm RAID50. Perhaps geaves might chime in.
    (BTW: With a 50% cost in disk real-estate, you might have been better off with striping ZFS mirrors, or setting up the equivalent of RAID50 in ZFS or BTRFS.)


    - Are you using OMV5?
    - Also, what do you mean by you "upgraded" OMV through the GUI?

    - Was a kernel upgrade part of the upgrade, or did you notice?

    - Did you backup your boot disk before upgrading?


    With my RAID50 setup, I have 3x RAID5 arrays striped, so I can lose 3 disks at a time, provided it's only 1 disk per RAID5.


    -Yep, OMV5.

    -No upgrades. I did system updates through the GUI, that's all.

    -I didn't notice the kernel before the updates.

    -No backups... *SMH* Funny, considering this is my backup box.


    mdadm --readwrite /dev/md:

    root@vii-nas02:~# mdadm --readwrite /dev/md0
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    root@vii-nas02:~# mdadm --readwrite /dev/md1
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: /dev/md1 does not appear to be active.
    root@vii-nas02:~# mdadm --readwrite /dev/md2
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: /dev/md2 does not appear to be active.
    root@vii-nas02:~# mdadm --readwrite /dev/md3
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: /dev/md3 does not appear to be active.


    Stopping the arrays works except for md0...

    root@vii-nas02:~# mdadm --stop /dev/md0
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
    root@vii-nas02:~# mdadm --stop /dev/md1
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: stopped /dev/md1
    root@vii-nas02:~# mdadm --stop /dev/md2
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: stopped /dev/md2
    root@vii-nas02:~# mdadm --stop /dev/md3
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: stopped /dev/md3
    root@vii-nas02:~# mdadm --stop /dev/md1
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: error opening /dev/md1: No such file or directory
    root@vii-nas02:~# mdadm --stop /dev/md0
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY



    When trying to repair after stopping the arrays that did indeed stop, I get the INACTIVE-ARRAY message again...

    root@vii-nas02:~# mdadm --readwrite /dev/md1
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: error opening /dev/md1: No such file or directory


    This is above my skillset with mdadm, unfortunately... What's surprising is that, while I know RAID50 isn't an OMV feature, the RAID level for /dev/md3 is... So I have no idea what the issue really is here.

  • This is above my skillset with mdadm, unfortunately... What's surprising is that, while I know RAID50 isn't an OMV feature, the RAID level for /dev/md3 is... So I have no idea what the issue really is here.

    =O and you're hoping we can help, I don't know whether to laugh or cry.


    To do anything, the arrays have to be stopped. You cannot stop /dev/md0; why? Because it's active but in a read-write state, which means it's mounted. That's why you get "Cannot get exclusive access" when running the stop command.

    Running this -> mdadm --readwrite /dev/md: will run the command across all raid arrays, that's about as helpful as a chocolate teapot.


    Why run mdadm --readwrite /dev/md1? That array is INACTIVE: not working, gone to sleep.


    What I suggested in my post is the usual way to approach --readwrite and inactive arrays, not to apply a blanket command and see what happens.
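    (If an array's filesystem is mounted and the array really does need to be stopped, the rough order of operations is below; whether unmounting is appropriate depends on what is using it:)

    findmnt /dev/md0         # see where, or whether, md0's filesystem is mounted
    umount /dev/md0          # unmount it so the array can be stopped
    mdadm --stop /dev/md0    # the stop should now get exclusive access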


  • Oh man! Ok... I was able to get the /dev/md1 array unmounted, and I performed the steps you mentioned earlier and have gotten my RAID50 showing up now...


    I need to bring /dev/md3 online, but it has sdaa, sdab, sdy, and sdz in it... How do I safely perform the below with the sdaa and sdab 4-letter drives?

    mdadm --assemble --force --verbose /dev/md? /dev/sd[abcdef]

  • I need to bring /dev/md3 online, but it has sdaa, sdab, sdy, and sdz in it... How do I safely perform the below with the sdaa and sdab 4-letter drives?

    :) I was waiting for that. Best guess is sdaa and sdab are partitions on sda; considering OMV uses only complete drives when creating mdadm arrays, this was probably achieved via the cli.


    This, /dev/sd[abcdef], is a simple way of adding the drive references; it beats typing /dev/sdaa /dev/sdab etc. for each individual drive.
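    For /dev/md3's members (sdy, sdz, sdaa, sdab), either form below should do the same thing; this is only an illustration, so confirm the device names with cat /proc/mdstat or mdadm --examine first:

    mdadm --assemble --force --verbose /dev/md3 /dev/sdy /dev/sdz /dev/sdaa /dev/sdab
    mdadm --assemble --force --verbose /dev/md3 /dev/sd[yz] /dev/sda[ab]   # sd[yz] expands to sdy/sdz, sda[ab] to sdaa/sdab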

  • While I don't know how to help you - the following is FYI:
    ___________________________________________________________

    -No backups... *SMH* Funny, considering this is my backup box.

    I still back up the OS, even on my backup servers. It's cheap and easy if using a USB drive to boot. (And there are other methods for easily backing up a hard drive or SSD.)
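    (As one example only: if the OS boots from a small USB stick, a raw image with dd is about as simple as it gets. /dev/sdX and the output path below are placeholders, so identify the right device with lsblk first.)

    lsblk                                                               # identify the boot device before imaging it
    dd if=/dev/sdX of=/srv/backups/omv-boot.img bs=4M status=progress   # raw image of the whole boot drive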

    With my RAID50 setup, I have 3x RAID5 arrays striped, so I can lose 3 disks at a time, provided it's only 1 disk per RAID5.

    You can lose one (1) disk in each array. The second you lose 2 drives in one array, it's over. (With three RAID5 arrays under RAID0, you have 3 times the likelihood of that happening.) Or if something goes wrong with the top-level RAID0 array, like a superblock problem, it's over.

    -I didn't notice the kernel before the updates.

    I don't use mdadm RAID:
    I mentioned this because mdadm RAID is part of the kernel and it seems that there may have been a change to mdadm RAID in a recent kernel update.
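    (If you want to check after the fact whether the update pulled in a new kernel, something along these lines would show it; the log may already have been rotated on an older install:)

    uname -r                               # kernel currently running
    grep linux-image /var/log/dpkg.log     # kernel packages installed or upgraded recently
    ls /boot                               # kernels available to boot, including the previous one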
