RAID fails to assemble/Filesystem fails to mount during boot

  • Since the last update to 7.3.0-5 (Sandworm), I have run into the following issue (some excerpts from the most recent boots):

    Code
    Jul 02 17:03:06 openmediavault systemd[1]: dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device: Job dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device/start timed out.
    Jul 02 17:03:06 openmediavault systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device - /dev/disk/by-uuid/cfd18169-e6ea-469d-9f25-c222a82302a6.
    Jul 02 17:03:06 openmediavault systemd[1]: Dependency failed for systemd-fsck@dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.service - File System Check on /dev/disk/by-uuid/cfd18169-e6ea-469d-9f25-c222a8230>
    Jul 02 17:03:06 openmediavault systemd[1]: Dependency failed for srv-dev\x2ddisk\x2dby\x2duuid\x2dcfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.mount - /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6.
    Code
    Jul 02 17:37:38 openmediavault monit[1008]: 'filesystem_srv_dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' trying to restart
    Jul 02 17:37:38 openmediavault monit[1008]: 'mountpoint_srv_dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' status failed (32) -- /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6 is not a mountpoint
    Jul 02 17:37:38 openmediavault monit[1008]: 'mountpoint_srv_dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' status failed (32) -- /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6 is not a mountpoint
    Jul 02 17:38:08 openmediavault monit[1008]: Filesystem '/srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' not mounted
    Jul 02 17:38:08 openmediavault monit[1008]: 'filesystem_srv_dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' unable to read filesystem '/srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6' state
    Code
    Jul 01 19:32:15 openmediavault systemd[1]: dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device: Job dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device/start timed out.
    Jul 01 19:32:15 openmediavault systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device - /dev/disk/by-uuid/cfd18169-e6ea-469d-9f25-c222a82302a6.
    Jul 01 19:32:15 openmediavault systemd[1]: Dependency failed for srv-dev\x2ddisk\x2dby\x2duuid\x2dcfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.mount - /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6.
    Jul 01 19:32:15 openmediavault systemd[1]: srv-dev\x2ddisk\x2dby\x2duuid\x2dcfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.mount: Job srv-dev\x2ddisk\x2dby\x2duuid\x2dcfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.mount/start failed with result 'dependency'.
    Jul 01 19:32:15 openmediavault systemd[1]: Dependency failed for systemd-fsck@dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.service - File System Check on /dev/disk/by-uuid/cfd18169-e6ea-469d-9f25-c222a82302a6.
    Jul 01 19:32:15 openmediavault systemd[1]: systemd-fsck@dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.service: Job systemd-fsck@dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.service/start failed with result 'dependency'.
    Jul 01 19:32:15 openmediavault systemd[1]: dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device: Job dev-disk-by\x2duuid-cfd18169\x2de6ea\x2d469d\x2d9f25\x2dc222a82302a6.device/start failed with result 'timeout'.

    After that, the GUI shows the filesystem as missing, and under "Multiple Devices" the RAID is completely gone. I can work around the whole issue by manually running "mdadm --assemble md0" and adding "x-systemd.automount" to the filesystem's entry in fstab (see the sketch below). Only after I assemble the RAID by hand does the NAS give its "I'm done booting" beep, and from then on everything works as normal.
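
    For reference, the manual assembly and the fstab tweak look roughly like this (the filesystem type and the other mount options in the fstab line are placeholders from memory; x-systemd.automount is the only part I actually added):

    Code
    # assemble the array manually after boot
    mdadm --assemble /dev/md0
    # fstab entry with x-systemd.automount added (UUID taken from the logs above;
    # filesystem type and remaining options are placeholders)
    /dev/disk/by-uuid/cfd18169-e6ea-469d-9f25-c222a82302a6 /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6 ext4 defaults,nofail,x-systemd.automount 0 2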


    Here's more info for good measure:


    cat /proc/mdstat:

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sdb[0] sdh[7] sdg[6] sdf[5] sda[4] sde[3] sdd[2] sdc[1]
    35162342400 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
    bitmap: 0/44 pages [0KB], 65536KB chunk

    blkid:

    fdisk -l | grep "Disk ":

    cat /etc/mdadm/mdadm.conf:

    mdadm --detail --scan --verbose:

    Code
    ARRAY /dev/md/md0 level=raid6 num-devices=8 metadata=1.2 name=openmediavault:0 UUID=86fc8ff7:cd4917fc:3f874a93:f382df33
    devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh

    cat /etc/fstab:


    Any pointers on how to solve this? I could just run "mdadm --assemble md0" via crontab @reboot, but that would be bad practice, no?
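
    For completeness, the crontab stopgap I have in mind would be something like this (untested; paths assumed):

    Code
    # in /etc/crontab: assemble the array and mount it once at boot
    @reboot root /sbin/mdadm --assemble /dev/md0 && /bin/mount /srv/dev-disk-by-uuid-cfd18169-e6ea-469d-9f25-c222a82302a6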

  • votdev

    Approved the thread.
  • After a bit of tinkering I found a solution. What fixed it for me was editing /etc/mdadm/mdadm.conf. The relevant bit at the end looked like this:

    Code
    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=86fc8ff7:cd4917fc:3f874a93:f382df33
    ARRAY /dev/md/0  metadata=1.2 UUID=86fc8ff7:cd4917fc:3f874a93:f382df33 name=openmediavault:0

    I simply commented out the second line, like so:

    Code
    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=86fc8ff7:cd4917fc:3f874a93:f382df33
    #ARRAY /dev/md/0  metadata=1.2 UUID=86fc8ff7:cd4917fc:3f874a93:f382df33 name=openmediavault:0

    This fixed the issue, though I'm not exactly sure why.
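
    One thing that is probably worth doing after editing /etc/mdadm/mdadm.conf (I'm not certain it was strictly needed in my case, but on Debian-based systems the file is normally also copied into the initramfs) is rebuilding the initramfs so the change is picked up at boot:

    Code
    update-initramfs -u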

  • tonnuminat

    Added the Label resolved
