File system missing after update OMV 7.7.0-1

  • Hi! This is my first post in this forum so, first of all, I want to thank all members and especially the developers for their great work. I've been using OMV for several years now and the forum has been solving the few problems I've had since then.


    Unfortunately, two days ago I encountered a problem, and I haven't been able to find a post on the forum that guides me to a solution.


    Issue: Suddenly a shared folder served over SMB stopped working. In the GUI the file system was marked as "Missing".


    This is what I've checked and tried:


    1. The file system (EXT4) is composed of two disks that work as a stripe using the Multiple Device plugin.


    2. The stripe has disappeared from the MD plugin.


    3. The disks are correctly detected by OMV and are listed as always in the Disks menu (1 and 2). However, the File Systems page does not detect the stripe composed of them.


    4. After searching the forums and the internet, I first tried removing the permissions on the shared folder, then the shared folder itself, and then the file system (after doing this I realised that the data might be lost).


    5. I've been trying to recreate the stripe since then, but the disks are not listed as options.


    So, could this have something to do with the last update? Is there any way to recreate the stripe with these two disks to make the content available again as a shared folder? Or is the data lost because I removed the file system? Luckily it didn't contain anything important, but if there is a way to recover it and the process is not too difficult, I'd be glad to know.

  • crashtest

    Approved the thread.
  • Hi! I also have this problem after the latest update to OMV 7.7.0-1 on both of my OMV NAS boxes.


    The first server has two different JBOD disks; the second has only one disk.

    They worked fine until a reboot; then I also got a missing superblock on all drives.

    It feels very unlikely that three drive assemblies on two different servers would fail at the same time due to a hardware error.


    My lesson is to never update your servers at the same time.


    I've been trying to find help on this, but have only found your post so far.


    -- Martin

  • See this.



    It seems there's some issue with the latest backports kernel.

    See if that's your case.

  • See this.



    It seems there's some issue with the latest backports kernel.

    See if that's your case.

    I guess we need a sticky or [how-to] on this problem.

    TheXerax I missed your post. Assuming your kernel has been updated to 6.12.9, then your MD array may re-appear if you do this at the CLI:


    1. modprobe raid0  - stop here if this returns an error message

    2. check the array state with: cat /proc/mdstat

    3. If the array is inactive, it may be activated with: mdadm --run /dev/mdN (where N is the number taken from step 2)

    4. re-check as in step 2 above; if the array is active but read-only, then use: mdadm --readwrite /dev/mdN


    Re-boot and check array status. Report back.
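    Taken together, the sequence looks like this at the CLI (a sketch, with /dev/md0 assumed as the array name):

    Code
    modprobe raid0                # load the raid0 personality; stop here on error
    cat /proc/mdstat              # check the array state
    mdadm --run /dev/md0          # start the array if it is listed as inactive
    mdadm --readwrite /dev/md0    # only needed if the array comes up read-only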

  • Good point re: updates. Can you please describe the arrays you are using? When you say JBOD, do you mean MD RAID linear?

    The update brings in backports kernel 6.12.9, which has a missing config element, as I described here: RE: Lineal RAID wont work since i Update 7.7.0-1 (Sandworm)
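    If you want to check this on your own machine, the relevant kernel options can be inspected like so (a sketch, assuming the config file is available under /boot as on stock Debian):

    Code
    # Check whether the running kernel was built with the raid0/linear MD targets
    grep -E 'CONFIG_MD_RAID0|CONFIG_MD_LINEAR' /boot/config-$(uname -r)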

  • Good point re: updates. Can you please describe the arrays you are using? When you say JBOD, do you mean MD RAID linear?

    The update brings in backports kernel 6.12.9, which has a missing config element, as I described here: RE: Lineal RAID wont work since i Update 7.7.0-1 (Sandworm)

    Hi! Yes, they are MD RAID linear arrays. I did manage to get it working again by switching back to the previous kernel as suggested by the thread you linked, thank you! :)

  • Hi Krisbee! Thanks a lot for your answer. I've followed these steps; the array is listed, but it can't be activated in any way:


    1. modprobe raid0 - Executing this didn't give me any error or message

    2. cat /proc/mdstat - The array is listed but it's inactive:

    Code
    root@nas:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdd[1]
          488254488 blocks super 1.2

    3. mdadm --run /dev/md0 - It gives an invalid argument error:

    Code
    root@nas:~# mdadm --run /dev/md0
    mdadm: failed to start array /dev/md0: Invalid argument

    Reading up on the mdadm command, I found that the --assemble option might work. The system runs it without complaint, but the array stays inactive anyway.
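    For reference, a typical reassembly attempt looks something like this (a sketch, with /dev/md0 assumed; not my exact commands):

    Code
    mdadm --stop /dev/md0                # release the half-assembled array first
    mdadm --assemble --scan --verbose    # rescan members and try to assemble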


    Then I checked this post by Soma:

    See this.



    It seems there's some issue with the latest backports kernel.

    See if that's your case.

    There you said that returning to a previous kernel could work. I'm on 6.12.9, so I installed the openmediavault-kernel 7.1.4 plugin, selected the 6.1.0-31 version and, after a reboot, the array is still not shown. I repeated the mdadm process, but it doesn't work either.


    Then I checked and, apparently, there isn't any raid0 module installed, so modprobe raid0 is not working.
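    The check was roughly this:

    Code
    lsmod | grep raid    # list the loaded RAID personality modules
    modinfo raid0        # confirm the raid0 module exists for the running kernel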

    I'll keep the 6.1.0 kernel version for the moment.


    PS: After writing all of the above, I noticed that the inactive array listed by cat /proc/mdstat contains only sdd, which is just one of the two devices that made up the array. Maybe this is because I removed the file system?

  • TheXerax


    The output of lsmod shows the md_raid module for raid0 is present on your system, but can you please post the output of the following commands in order to verify the current state of your system:


    uname -r

    blkid

    cat /etc/fstab

    findmnt --real

    cat /proc/mdstat

    mdadm -D /dev/md0

    mdadm -E /dev/sd[a-z]

  • There you go Krisbee:

    • uname -r
    Code
    root@nas:~# uname -r
    6.1.0-31-amd64
    • blkid
    Code
    root@nas:~# blkid
    /dev/sdd: UUID="bc0094a4-71c0-70cb-92a7-9bd3ee8e7a78" UUID_SUB="b90910e3-362e-66cf-e1fa-250e1b00f391" LABEL="nas:0" TYPE="linux_raid_member"
    /dev/sdb1: UUID="df79a195-783f-4db6-aa40-42e721c91cae" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="953e70eb-6821-48fa-be3d-7e83e3ddb898"
    /dev/sda5: UUID="f934c81b-1839-423c-b8d0-291bf5a3255f" TYPE="swap" PARTUUID="9dfe3998-05"
    /dev/sda1: UUID="1556e362-037e-49b9-8176-1231a22f1e94" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9dfe3998-01"
    /dev/sdc1: PARTUUID="890107a2-fc25-446f-94fb-a61e62610f7d"
    • cat /etc/fstab
    • findmnt --real
    Code
    root@nas:~# findmnt --real
    TARGET                                                       SOURCE    FSTYPE OPTIONS
    /                                                            /dev/sda1 ext4   rw,relatime,errors=remount-ro
    └─/srv/dev-disk-by-uuid-df79a195-783f-4db6-aa40-42e721c91cae /dev/sdb1 ext4   rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
    • cat /proc/mdstat
    Code
    root@nas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdd[1](S)
          488254488 blocks super 1.2
    
    unused devices: <none>
    • mdadm -D /dev/md0
    • mdadm -E /dev/sd[a-z]
  • This is insane.


    So from a bad kernel install on armbian (6.6.79) my OMV setup became unbootable.


    After a fresh install of OMV7, Storage > File Systems is empty. I can't even create manual mounts; the file system dropdown is empty.


    What happened?

    • Official Post

    You need to mount the existing file systems. There is a separate button on the File Systems page for that action.
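    If you first want to confirm from the CLI that the data is readable, something along these lines works (device name assumed; adjust it to your disk):

    Code
    blkid                         # find the file system's device and UUID
    mkdir -p /mnt/check           # create a temporary mount point
    mount /dev/sdb1 /mnt/check    # device name assumed; adjust to yours
    ls /mnt/check                 # verify the data is visible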

    Thou shalt not make a machine in the likeness of a human mind.

  • PS: After writing all of the above, I noticed that the inactive array listed by cat /proc/mdstat contains only sdd, which is just one of the two devices that made up the array. Maybe this is because I removed the file system?


    The outputs you posted in #10 above confirm that /dev/sdd was part of an array with two devices (line 38 of the "mdadm -E ..." output). But if you look at the output of blkid, there's only one device on your system that is marked as a member of an array. Although you said your array was a stripe, the output of "mdadm -D ..." describes your inactive array as linear, so I don't know which type was actually in use. A stripe, or linear, array has no redundancy; the loss of a single drive means your array is dead. You'll need to restore the data from backups.
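    For the record, the recorded RAID level and member count can be read straight from the surviving member's superblock, with something like:

    Code
    # Inspect the MD superblock on the remaining member (sdd assumed)
    mdadm -E /dev/sdd | grep -E 'Raid Level|Raid Devices'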

    The outputs you posted in #10 above confirm that /dev/sdd was part of an array with two devices (line 38 of the "mdadm -E ..." output). But if you look at the output of blkid, there's only one device on your system that is marked as a member of an array. Although you said your array was a stripe, the output of "mdadm -D ..." describes your inactive array as linear, so I don't know which type was actually in use. A stripe, or linear, array has no redundancy; the loss of a single drive means your array is dead. You'll need to restore the data from backups.

    I imagined that this could be the conclusion for my problem. I don't have backups of everything, but there was nothing important in there.
    I think this could be an opportunity to replace those old disks with something newer and better.


    In any case, thanks a lot Krisbee :)

  • TheXerax

    Added the Label resolved
  • Good day, I could use some help; I'm a total beginner. I have 3 OMV NASes, and on 2 of them the file system was gone after updates; well, that data is simply lost (ugh). Now I have the same problem on the third one, which holds all my films and music, and I would like to get that data back. I already tried the disks in Linux Mint, which said the disk cannot be mounted (GParted does recognize them). Reinstalling OMV from scratch didn't help. My idea now is something involving mounting (I don't know what that is or how it works), and that's exactly where I need help, because after 4 years of failures with OMV I'm slowly getting fed up. So my request: can someone help me with mounting? I know how to connect with PuTTY, but that's about it.
