RAID mounting problem after upgrade from OMV 7 to 8

  • Good morning, everyone.


    After updating OMV from version 7 to 8, I have a problem with the local environment of Portainer (still installed with version 6).

    Within Portainer, the local environment appears as UP (image 1), but as soon as I click on it, I get the error “Failed to load environment” (image 2).


    I have already followed the instructions provided by ryecoaaron on this thread:
    docker not working since omv-upgrade


    In particular, I have already executed this:

    Code
    sudo mkdir -p /etc/default/grub.d
    echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0"' | sudo tee /etc/default/grub.d/apparmor.cfg
    sudo update-grub
    sudo reboot
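
    A quick way to confirm the parameter actually took effect after the reboot (assuming a standard Debian-based OMV install; aa-status comes from the apparmor-utils package and may not be present) is:

    Code
    # the kernel command line should now contain apparmor=0
    cat /proc/cmdline
    # if apparmor-utils is installed, this should report AppArmor as not enabled
    sudo aa-status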

    All previously installed containers:

    • Plex
    • Tautulli
    • Yacht

    are accessible, except for Yacht, which gives me the message “Unable to reach the site”.


    In Plex I have 3 data directories, but when I try to play any file from the main directory, the system responds with “Playback error. Please check that the file exists and the necessary drive is connected,” while the files in the other two directories (which reside on two other hard drives) work perfectly.


    Tautulli works perfectly.

    Ultimately, I only have problems with Portainer, Plex, and Yacht.

    In OMV -> Services -> Compose -> Containers, all containers previously installed with Portainer are visible (image 3).


    Can anyone help me before I cause any damage and have to reinstall everything?


    Thank you very much in advance to anyone willing to help.

  • I forgot to mention another piece of information that might be useful.

    Previously, to access OMV from the browser, I used openmediavault.local:#PORTNUMBER.

    Now I have to type IPADDRESS:#PORTNUMBER.
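
    (Side note, only a guess on my part: the .local name is resolved via mDNS, normally the avahi-daemon service on OMV, so it may be worth checking whether that service survived the upgrade.)

    Code
    # is the mDNS responder that advertises openmediavault.local still running?
    systemctl status avahi-daemon --no-pager
    # and is the hostname still what the .local name expects?
    hostnamectl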

  • I apologize, but I found something else strange.


    When I access the main shared PLEX folder from Windows, it appears to be empty.


    I checked via shell and indeed all the files inside the main PLEX folder have disappeared!


    The main Plex folder was contained in a RAID.

    I checked the RAID from OMV and indeed it appears to be empty (image 1).


    The main Plex shared folder still appears as available (image 2).

  • The filesystem that lives on your WSRAID (/dev/md0) isn't mounted for some reason.


    Is the RAID array itself up and working? (Check both Disks and Multiple Devices under the Storage tab.)


    Unless there are other issues, I would think that simply mounting the filesystem from the GUI should make everything work properly. But I have a feeling you would have thought of that already...

  • Hi cubemin, and thanks for the reply.


    All devices are displayed and appear to be functioning.

    In fact, it seems that the RAID has not been mounted, and I don't know why...

    I received these notifications via email:


    I've never done this before: how do I mount the file system from the GUI?


    Thanks again.

  • I had already asked for a possible solution to the Mountpoint loading problem here:



    The output of ls -al /etc/systemd/system/docker.service.d/ NOW is this:


    Code
    totale 16
    drwxrwxrwx  2 root root 4096  8 feb 14.50 .
    drwxr-xr-x 40 root root 4096  8 feb 11.11 ..
    -rw-r--r--  1 root root   65  8 feb 14.50 override.conf
    -rw-r--r--  1 root root  253 15 set  2024 waitAllMounts.conf

    override.conf is a new file that was created by the OMV upgrade from 7.x to 8.x.


    The daemon.json file after the upgrade is:


    Code
    {
      "data-root": "/var/lib/docker",
      "log-driver": "json-file",
      "log-opts": {
        "max-file": "3",
        "max-size": "50m"
      },
      "storage-driver": "overlay2"
    }
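
    For completeness, the contents of those drop-ins can be dumped with standard systemd tooling (nothing OMV-specific assumed here):

    Code
    # show the new drop-in and the pre-existing one
    cat /etc/systemd/system/docker.service.d/override.conf
    cat /etc/systemd/system/docker.service.d/waitAllMounts.conf
    # show the docker unit with all drop-ins merged in
    systemctl cat docker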
  • This is the output of cat /proc/mdstat:

    Code
    Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10]
    md0 : active raid5 sdd[4] sdc[3] sda[2]
          3906764800 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>


    This is the output of mdadm --detail --scan:

    Code
    ARRAY /dev/md0 metadata=1.2 UUID=4ee32c19:e1e3fcd0:60825221:f548dbe8
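
    (A follow-up check that is only my suggestion, not something already run in this thread: compare that scan line against the boot-time config and pull the full per-device detail.)

    Code
    # the array definition used at boot should match the scan above
    grep ARRAY /etc/mdadm/mdadm.conf
    # full status of the array, including each member device and its state
    sudo mdadm --detail /dev/md0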
  • Your array is fine. No idea why your filesystem didn't mount. Mount it with:


    sudo mount -a


    Did you reboot?
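
    (mount -a simply walks /etc/fstab, so here is a generic sketch of how to see which entry it will try and whether it worked; this is my illustration, not a step from this reply:)

    Code
    # find the fstab entry OMV created for the array's filesystem (by device or by its UUID)
    grep -e md0 -e 17f31cd8 /etc/fstab
    # check whether the filesystem actually ended up mounted
    findmnt /dev/md0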


  • I mounted it from the console with mount -a and rebooted, but the email notifications keep coming and the RAID is still not being recognized.

    After the reboot, mount -a gives this response:

    Code
    root@openmediavault:~# mount -a
    mount: /srv/dev-disk-by-uuid-17f31cd8-ac58-4f95-8ebe-1e867e3b300a: fsconfig() failed: /dev/md0: Can't open blockdev.
           dmesg(1) may have more information after failed mount system call.
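
    (Not something suggested in the thread yet, but the error message itself points at dmesg, and it may also help to see whether anything is holding the device open; just a sketch of generic diagnostics:)

    Code
    # the failed mount says dmesg may have more detail
    sudo dmesg | tail -n 30
    # is anything holding /dev/md0 open? (lsof may need to be installed first)
    sudo lsof /dev/md0
    # current state of the array according to the kernel
    cat /proc/mdstat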
  • shecky66 changed the title of the thread from “Docker environments problem after upgrade from OMV 7 to 8” to “RAID mounting problem after upgrade from OMV 7 to 8”.
  • votdev: the blkid output is this:

    Code
    /dev/sdf1: UUID="06c4ddb3-0f14-4c2a-962c-6b2623fac8fb" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="10876870-d797-4684-80ec-b9a81ae0b587"
    /dev/sdd: UUID="4ee32c19-e1e3-fcd0-6082-5221f548dbe8" UUID_SUB="c6b9538c-8679-1176-28da-a6e7dde35f79" LABEL="openmediavault.local:RAID" TYPE="linux_raid_member"
    /dev/sdb: UUID="4ee32c19-e1e3-fcd0-6082-5221f548dbe8" UUID_SUB="57456778-ba5f-f3f5-dd99-795701b46f30" LABEL="openmediavault.local:RAID" TYPE="linux_raid_member"
    /dev/md0: LABEL="WSRAID" UUID="17f31cd8-ac58-4f95-8ebe-1e867e3b300a" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sde1: LABEL="SUPPORT" UUID="117a7274-86a2-4478-8d8c-8717549c0d3d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="deb0a13a-90f3-49b6-9062-372afc321acc"
    /dev/sdc: UUID="4ee32c19-e1e3-fcd0-6082-5221f548dbe8" UUID_SUB="27d619b3-130e-cfee-11d0-2bb9256b61b3" LABEL="openmediavault.local:RAID" TYPE="linux_raid_member"
    /dev/sda5: UUID="e9a2f680-45ff-4677-afd2-f22880958b71" TYPE="swap" PARTUUID="19cfcc19-05"
    /dev/sda1: UUID="429b606c-2baf-42c3-9d2c-433c80f71c97" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="19cfcc19-01"
  • votdev: the result of mdadm --detail /dev/md0 is this:



    And yes: sda is the root disk!

  • There is a discrepancy between the output of /proc/mdstat and mdadm --detail /dev/md0. Did you perform a reboot during that?

    Absolutely not!
    The RAID seems to be working, but there are no files inside it...


    Is there a way to fix this without having to recreate the RAID?

  • Just another reason not to use raid.


    How do you know there are no files on it? Since the array looks OK, is the filesystem on the array actually mounted? blkid showed that the filesystem is still on the array.


    grep md0 /proc/mounts


    If that command returns no output, run:


    sudo mount -a

    grep md0 /proc/mounts
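
    For reference (my own illustration, not output from this system), the second grep is the success check:

    Code
    # when the filesystem is mounted, this prints a line roughly like:
    #   /dev/md0 /srv/dev-disk-by-uuid-17f31cd8-ac58-4f95-8ebe-1e867e3b300a ext4 rw,relatime 0 0
    grep md0 /proc/mounts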


  • ryecoaaron: grep md0 /proc/mounts returns no output.

    sudo mount -a returns:

    Code
    mount: /srv/dev-disk-by-uuid-17f31cd8-ac58-4f95-8ebe-1e867e3b300a: fsconfig() failed: /dev/md0: Can't open blockdev.
           dmesg(1) may have more information after failed mount system call.

    And now?

    P.S.: I don't know why you claim that using RAID is not a good choice.
    Personally, RAID has saved me a few times: after a disk failed, I didn't lose any data...

  • And now?

    What is the output of fsck.ext4 /dev/md0?
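
    (A hedged aside, not part of the original reply: fsck.ext4 has a read-only mode, which may be worth running first while the filesystem is still unmounted.)

    Code
    # read-only pass: -n answers "no" to every prompt, so nothing on disk is changed
    sudo fsck.ext4 -n /dev/md0
    # only after reviewing that output, run the interactive repair (keep the filesystem unmounted)
    sudo fsck.ext4 /dev/md0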

    P.S.: I don't know why you claim that using RAID is not a good choice.
    Personally, RAID has saved me a few times: after a disk failed, I didn't lose any data...

    First, because people who don't know Linux struggle to recover from problems with raid.

    Second, because raid is not a backup, which is what most people are trying to use it as.

    Third, because people use raid on hardware that isn't well suited to it, so they don't get the redundancy raid is meant to provide.

    Finally, I have seen more people lose data to a raid array that won't assemble than I have seen people save data because they had raid.


    A proper backup would save you from a broken disk and much more. Raid is meant for availability; all it gives you is less downtime.

