RAID: strange behaviour

  • Hi all.


    After the latest OMV update (release 5.6.20-1) my RAID is no longer mounted by the system.


    After checking journalctl -b and dmesg for errors and not finding anything in particular, I worked around the problem by running mount -a at reboot via a scheduled job.
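
    For reference, the workaround is roughly equivalent to a root cron entry like the one below (a minimal sketch; the file name is hypothetical, in OMV I actually created it as a scheduled job in the GUI):

    # /etc/cron.d/mount-raid-workaround  (hypothetical file)
    # run "mount -a" once at boot so the RAID filesystem gets mounted
    @reboot root /bin/mount -a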


    On reboot I received 2 email notifications:

    • the first that the mount had failed;
    • the second that the mount was successful (I assume thanks to the scheduled job).

    Checking the File Systems menu, the RAID is up and running.


    But I can't understand why, without the workaround, the system fails to mount the RAID at boot. :/


    Can anyone give me an answer?


    Thanks in advance to those who want to help me find a solution.


    [ Original post is >> HERE << ]

  • For more information, below are the results of the commands


    cat /etc/fstab

    and mount
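
    (For anyone who wants to reproduce the check, a minimal sketch, assuming the array appears as /dev/md0; the device name is an assumption:)

    cat /etc/fstab        # fstab entry for the RAID filesystem
    mount | grep md       # is the array currently mounted?
    blkid /dev/md0        # does the UUID here match the one in fstab?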

  • Maybe you should do a check on the S.M.A.R.T. data of the drives that make up your RAID.
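
    Something like this, for example (a sketch, assuming smartmontools is installed and the member disks are /dev/sda, /dev/sdb, /dev/sdc):

    apt install smartmontools     # if not already present
    smartctl -H /dev/sda          # quick overall health verdict
    smartctl -a /dev/sda          # full SMART attributes and error log
    # repeat for the other member disks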

    • Official Post

    TBH I have never come across this before; however, in your previous post the one thing that stands out in the journalctl output, before the 'this is not a mountpoint' message, is 'unable to read filesystem'.


    During the boot process there is a check done on each file system (I think); I can't remember without seeing the boot process output.


    You could try running fsck /dev/md? where the ? is replaced by the RAID's reference, i.e. 0, 127, etc.
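
    To find the right number, something like this (a sketch):

    cat /proc/mdstat              # lists the active arrays, e.g. md0 or md127
    mdadm --detail /dev/md0       # confirm members and array state (replace md0 as needed)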

  • I cannot see the exact model number of your disks (column too narrow).

    Is it

    "WDC WD20EFRX-xxx" --> CMR drives

    or is it

    "WDC WD20EFAX-xxx" --> SMR drives

    SMR drives are known to be less suitable for RAID arrays.

    • Official Post

    OK, basically fsck needs to run with the RAID stopped and unmounted: either remove the scheduled job that mounts it at boot, stop the array, then run fsck; or install SystemRescue CD as per the instructions in omv-extras -> Kernel, or create a SystemRescue CD and run it.


    Basically there is an issue with the file system that needs to be corrected.
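
    In practice, roughly (a sketch, assuming the array is /dev/md0; adjust the device name and mount point to yours):

    umount /dev/md0       # or umount the mount point shown by 'mount'
    fsck -f /dev/md0      # check and repair the filesystem on the array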

  • Hi ananas and thanks for your reply.

    My RAID consists of 3 disks, model WDC WD20EFAX-68FB5N0 (RAID 5).

  • OK, basically fsck needs to run with the RAID stopped and unmounted: either remove the scheduled job that mounts it at boot, stop the array, then run fsck; or install SystemRescue CD as per the instructions in omv-extras -> Kernel, or create a SystemRescue CD and run it.


    Basically there is an issue with the file system that needs to be corrected.

    Thanks for your support geaves.


    I disabled the scheduled job that mounts the RAID at boot and restarted the system.

    The RAID was not mounted at boot, so I was able to run the fsck /dev/md0 command.


    A multitude of errors appeared, practically all related to the Plex configuration folder (Plex was installed via Docker).

    I repaired them all (I can't list all the errors here, it would be too long...).


    At the end the fsck result said:

    Now I'll try to restart the system without the workaround (the scheduled job at boot).


    I will let you know.


    Thanks again!

    • Official Post

    A multitude of errors appeared, practically all related to the Plex configuration folder (Plex was installed via Docker).

    I repaired them all (I can't list all the errors here, it would be too long...).

    This could be related to what ananas spotted regarding your drives; whilst I am aware of SMR drives, some people are not, and that just never crossed my mind.

  • After the reboot the RAID mounts again at boot (without using the workaround). 8)


    One problem: Plex no longer works... :(

    I'll have to recreate the container.
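
    For anyone in the same situation, recreating it looks roughly like this (a sketch, assuming the linuxserver/plex image; the host paths and PUID/PGID/TZ values are hypothetical, take them from your old container):

    docker rm -f plex                                  # remove the broken container
    docker run -d --name plex \
      -e PUID=1000 -e PGID=100 -e TZ=Europe/Rome \
      -p 32400:32400 \
      -v /srv/raid/appdata/plex:/config \
      -v /srv/raid/media:/data \
      linuxserver/plex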


    Thanks anyway everyone for the support!

  • shecky66

    Added the label OMV 5.x.
  • shecky66

    Added the label solved.
