OMV5 not surviving first reboot

  • I have a ProLiant N54L MicroServer with the BIOS hack that unlocks the 5th drive bay. It was running OMV4 over Debian on a partitioned SSD without issues, with 5x HDD encrypted with LUKS (the luksencryption plugin), unlocked with a key file.
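
    (For reference, the encrypted drives were created and opened on the command line with cryptsetup, roughly as below; the device names and key path are placeholders rather than my actual layout. Day to day the unlock is done through the OMV GUI, not automatically at boot.)

      # create a LUKS container using a key file (placeholder paths)
      cryptsetup luksFormat --key-file /root/keys/data1.key /dev/sdb
      # open it under a mapper name so a filesystem can be created and mounted on it
      cryptsetup luksOpen --key-file /root/keys/data1.key /dev/sdb data1-crypt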


    Tried to upgrade to OMV5 but that failed, so I went with a fresh install, same setup but installing Debian plus OMV to a 16 GB USB drive (I tried the OMV ISO but presume the flash drive is too small for a direct install).


    All good; I set everything up, installed OMV-Extras, Docker etc., but on the first reboot after first being able to load the GUI, the system won't boot. I've tried a fresh install again, same result. Tried with an updated kernel, still the same.


    The error is:


    You are in emergency mode. After logging in, type “journalctl -xb” to view system logs, “systemctl reboot” to reboot, “systemctl default” or “exit” to boot into default mode.


    The red errors in journalctl -xb are:

    Base address is zero, assuming no IPMI interface

    ERST: Failed to get Error Log Address Range

    APEI: Can not request [mem 0xdfab6a3] for APEI BERT registers

    usbhid 1-3.1:1:1: couldn’t find an input interrupt endpoint

    sd 6:0:0:0 [sda] No caching mode page found

    sd 6:0:0:0 [sda] Assuming drive cache: write through

    openmediavault.local blkmapd[290]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory

    openmediavault.local kernel: kvm: disabled by bios

    openmediavault.local systemd[1]: Timed out waiting for device /dev/… (one of these for each of the HDDs)

    openmediavault.local nfsdcltrack[475]: Failed to init database: -13


    And the last message is:


    The process /bin/plymouth could not be executed and failed. The error number returned by this process is ERRNO.
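
    (From the emergency shell, something along these lines narrows down which mount units actually timed out; the unit and device names will differ per system.)

      journalctl -xb | grep -iE 'timed out|dependency failed'
      systemctl list-units --state=failed
      cat /etc/fstab    # compare the failed units against the fstab entries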


    Help? Thanks.

  • johnnyb

    Added the Label OMV 5.x
  • So it looks like an fstab issue: the mergerfs filesystem is what's failing at boot.


    The drives it points to were encrypted through the CLI but are unlocked through the OMV GUI, so perhaps that's the issue. I'll wipe everything and start again.
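
    (To make that concrete: the entries are shaped something like the sketch below, with labels, mount points and options as placeholders, since OMV writes the real lines itself. If the branch devices only appear once the LUKS drives are unlocked in the GUI after boot, systemd has nothing to mount at boot time and the pool entry can't be satisfied either.)

      # one branch mount per data drive (the device only exists after the LUKS unlock)
      /dev/disk/by-label/data1  /srv/dev-disk-by-label-data1  ext4  defaults  0 2
      # the mergerfs pool spanning the branch mount points
      /srv/dev-disk-by-label-data1:/srv/dev-disk-by-label-data2  /srv/mergerfs-pool  fuse.mergerfs  defaults,allow_other  0 0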

  • the solution was adding nofail to the mergerfs fstab entry

    That should not be necessary. I have this currently on OMV4, the nofail option is not included in the mergerfs mount point, and it should not be needed. What trapexit is pointing out is that if the individual drives already have nofail, which in OMV they do, then mergerfs will mount. If a drive does not have that option, or fails, then the mergerfs pool will not mount.


    I have the same microserver with mergerfs and SnapRAID and it runs flawlessly. I have also tried and tested OMV5 in a VM with mergerfs and SnapRAID, setting it up exactly as it is on my OMV4, and I've not needed to add that option; it does what it says on the tin.
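
    (For clarity, the change being debated is just adding nofail to the options of the pool entry, along the lines below, with the paths again as placeholders; in OMV the individual branch entries already carry nofail, which is the point being made above.)

      /srv/dev-disk-by-label-data1:/srv/dev-disk-by-label-data2  /srv/mergerfs-pool  fuse.mergerfs  defaults,allow_other,nofail  0 0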

  • Quoting the reply above (“That should not be necessary… it does what it says on the tin”):


    My SnapRAID/mergerfs/LUKS setup was similarly flawless on OMV4; it's only since the move to OMV5 that these boot issues have started. I suspect it was the missing nofail flag all along (and not the encrypted drives) that caused the initial issues.


    The last boot failure came after I did a hard reset because the system stalled on a large file copy between two drives; the source drive subsequently dropped out of OMV.


    No issues since adding the nofail flag. Can't explain it, but I'm moving on to learning how to balance my media folder between the old and new drives.
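
    (For the balancing step, trapexit's separate mergerfs-tools collection includes a balance script; roughly the following, with the pool path as a placeholder.)

      # moves files from the fullest branch to the emptiest until usage is roughly even
      mergerfs.balance /srv/mergerfs-pool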
