OMV5 not surviving first reboot

  • I have a Proliant N54L with a BIOS hack unlocking the 5th drive bay. I was running OMV4 over Debian on a partitioned SSD without issues, with 5x HDD using LUKS encryption (unlocked with a keyfile).


    Tried to upgrade to OMV5 but that failed, so I went with a fresh install: same setup, but installing over Debian to a 16 GB USB stick (I tried the OMV ISO, but I presume the flash drive is too small for a direct install).


    All good; I set everything up, installed OMV-Extras, Docker etc., but on the first reboot after getting the GUI working I can't boot. I have tried a fresh install again, same result. Tried with an updated kernel, still the same.


    The error is:


    You are in emergency mode. After logging in, type “journalctl -xb” to view system logs, “systemctl reboot” to reboot, “systemctl default” or “exit” to boot into default mode.


    The red errors in journalctl -xb are:

    Base address is zero, assuming no IPMI interface

    ERST: Failed to get Error Log Address Range

    APEI: Can not request [mem 0xdfab6a3] for APEI BERT registers

    usbhid 1-3.1:1:1: couldn’t find an input interrupt endpoint

    sd 6:0:0:0 [sda] No caching mode page found

    sd 6:0:0:0 [sda] Assuming drive cache: write through

    openmediavault.local blkmapd[290]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory

    openmediavault.local kernel: kvm: disabled by bios

    openmediavault.local systemd[1]: Timed out waiting for device /dev etc (all the HDD drives)

    openmediavault.local nfsdcltrack[475]: Failed to init database: -13


    And the last message is:


    The process /bin/plymouth could not be executed and failed. The error number returned by this process is ERRNO.


    Help? Thanks.

  • johnnyb

    Added the Label OMV 5.x
  • Further relevant info: I manually partitioned the USB stick in the Debian installer, as it was below the threshold capacity for automatic partitioning.


    10GB /, 4GB swap.

  • So it looks like an fstab issue: the mergerfs filesystem is failing at boot.


    The drives it points to were encrypted through the CLI but unlocked through the OMV GUI; perhaps that's the issue. I'll wipe everything and start again.
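
    For reference, the original CLI setup was along these lines; the device name and keyfile path are placeholders, not my exact commands:

    # format the data disk with LUKS using a keyfile (wipes the disk)
    cryptsetup luksFormat /dev/sdX /root/keys/datadisk.key
    # unlock it, creating /dev/mapper/datadisk-crypt for OMV to mount
    cryptsetup open /dev/sdX datadisk-crypt --key-file /root/keys/datadisk.key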

    • Official Post

    the solution was adding nofail to the mergerfs fstab entry

    That should not be necessary. I have this currently on OMV4 and the nofail option is not included in the mergerfs mount point, and it should not be necessary. What trapexit is pointing out is that if the individual drives already have nofail, which in OMV they do, then mergerfs will mount. If a drive does not have that option, or fails, then mergerfs will not mount.


    I have the same microserver with mergerfs and SnapRAID and it runs flawlessly. I have also tried and tested OMV5 in a VM with mergerfs and SnapRAID, set up exactly as on my OMV4, and I've not needed to add that option; it does what it says on the tin.

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 7x amd64 running on an HP N54L Microserver

  • That should not be necessary. I have this currently on OMV4 and the nofail option is not included in the mergerfs mount point, and it should not be necessary. What trapexit is pointing out is that if the individual drives already have nofail, which in OMV they do, then mergerfs will mount. If a drive does not have that option, or fails, then mergerfs will not mount.


    I have the same microserver with mergerfs and SnapRAID and it runs flawlessly. I have also tried and tested OMV5 in a VM with mergerfs and SnapRAID, set up exactly as on my OMV4, and I've not needed to add that option; it does what it says on the tin.


    My SnapRAID/mergerfs/LUKS setup was similarly flawless on OMV4; it's only been since the move to OMV5 that these boot issues have started. I suspect it was the missing nofail flag all along (and not the encrypted drives) that caused the initial issues.


    The last boot failure came after I did a hard reset because the system stalled during a large file copy between two drives; the source drive subsequently dropped out of OMV.


    No issues since adding the nofail flag. Can't explain it, but I'm moving on to learning how to balance my media folder between the old and new drives. The relevant fstab entries now look roughly like the sketch below.
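
    (The UUIDs, mount paths and pool path here are illustrative placeholders rather than my real ones.)

    # individual data drives - OMV already adds nofail to these entries
    /dev/disk/by-uuid/AAAA-1111  /srv/dev-disk-by-uuid-AAAA-1111  ext4  defaults,nofail  0 2
    /dev/disk/by-uuid/BBBB-2222  /srv/dev-disk-by-uuid-BBBB-2222  ext4  defaults,nofail  0 2
    # mergerfs pool - adding nofail here is what let the box boot again
    /srv/dev-disk-by-uuid-AAAA-1111:/srv/dev-disk-by-uuid-BBBB-2222  /srv/mergerfs/pool  fuse.mergerfs  defaults,allow_other,nofail  0 0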

  • Thank you johnnyb

    I experienced the same problem and solved it yesterday.

    Commented out all the data drives in /etc/fstab (but not the system drive xD) and my system booted as expected (roughly as sketched at the end of this post).

    My drives are encrypted via the LUKS plugin as well.

    Running OMV5 with Debian 10.
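
    In other words, something like this; the UUIDs and paths are examples only, not my real entries:

    # data drive entries temporarily commented out so the system can boot
    #/dev/disk/by-uuid/AAAA-1111  /srv/dev-disk-by-uuid-AAAA-1111  ext4  defaults,nofail  0 2
    #/dev/disk/by-uuid/BBBB-2222  /srv/dev-disk-by-uuid-BBBB-2222  ext4  defaults,nofail  0 2
    # system (root) drive entry left untouched
    UUID=CCCC-3333  /  ext4  errors=remount-ro  0 1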

  • Please continue the conversation started in https://github.com/openmediavault/openmediavault/issues/850 here.

    More details on OS, OMV and HW setup are required to allow for troubleshooting!

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

    • Official Post

    Please continue the conversation started in https://github.com/openmediavault/openmediavault/issues/850 here.

    More details on OS, OMV and HW setup are required to allow for troubleshooting!

    Why? This thread is related to mergerfs causing the system not to boot should a drive fail; the solution is to add nofail to the mergerfs mount point in fstab. That in itself should not be necessary, as the nofail flag is added to each drive anyway, but the solution for johnnyb, and I'm surmising for titango20, was to add the nofail flag to the mergerfs mount point. Plus, both are using LUKS, which could also throw a spanner in the works.

    The first post in that GitHub link is supposed to point to a thread on this forum, which it doesn't!

  • geaves, the author of the referenced issue #850 was not happy with the closing of his issue, hence I'm giving him a chance to convince us of a bug. So far I've not seen any evidence for it and I share your view.


    • Official Post

    The problem is GitHub users and forum users don't always use the same name (I don't :)), hence I failed to understand your post. Who are you asking to continue their post on GitHub? Plus, this thread is technically resolved but has not been marked as such.

  • The author of the referenced GitHub issue #850.


    • Official Post

    The author of the referenced GitHub issue #850.

    The author of GitHub issue #850 is Kasperx, and that username has not responded to this thread. Plus, why is he referencing this thread anyway, as this thread is about mergerfs?


    EDIT: I've replied to that GitHub post.

  • why is he referencing this thread anyway, as this thread is about mergerfs?

    That's exactly the question only the author of the GH issue can answer :)


    • Official Post

    That's exactly the question only the author of the GH issue can answer

    I've actually asked him for his username on the forum, as he states that he has continued the thread on the forum; that was 3 hours ago, according to GitHub.


    Personally I think he's referencing the wrong forum thread, because he doesn't actually state his issue.

  • Sorry for losing the main aspect of mergerfs. I described my problem in the referenced thread.

    I will start a new thread here if you are happier with that.

    Thanks to geaves for clarification.

