MD Email every morning

  • Since the upgrade, I receive the email below each morning.


    All drives are still online. I have 2 SSDs and 2 HDDs, each pair in a mirror.

    Code
    Anacron job 'cron.daily' on MyNas
    /etc/cron.daily/openmediavault-mdadm:
    mdadm: DeviceDisappeared event detected on md device /dev/md/md1
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0
    mdadm: NewArray event detected on md device /dev/md1
  • There's a workaround for this.


    In /etc/udev/rules.d you will find 99-openmediavault-md-raid.rules. If you have updated to openmediavault 8.0.4, look for these lines:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"

    At the end of that rule, append , SYMLINK+="md/md0" so it looks like this:


    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192", SYMLINK+="md/md0"

    then look for this:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      IMPORT{program}="import_env /etc/default/openmediavault", \
      ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"

    Again, append , SYMLINK+="md/md0" at the end, so it looks like this:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      IMPORT{program}="import_env /etc/default/openmediavault", \
      ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}", SYMLINK+="md/md0"


    You can test it with mdadm --monitor --scan --oneshot. If you don't make this change, you will keep getting the DeviceDisappeared error.


    Since you have two RAID sets, I think you have to add , SYMLINK+="md/md1" too. But as said, make the change above first, then test with mdadm --monitor --scan --oneshot; if you then only get a message about md1, add SYMLINK+="md/md1" as well.


    I found the solution here: https://discourse.ubuntu.com/t…red-newarray-alerts/56076

    More background info here: https://linux.debian.bugs.dist…ssage-every-day-from-cron
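    If you prefer not to edit the file by hand, the two appends can be scripted. This is only a sketch: the add_symlink_clause helper name is mine, and the sed patterns assume the stock rule text quoted above matches exactly, so try it on a copy of the file first.

```shell
#!/bin/sh
# Sketch (not from the thread): append the SYMLINK clause to both stanzas
# of the openmediavault md rules file. Takes the file path as an argument
# so it can be dry-run on a copy before touching /etc.
add_symlink_clause() {
    rules="$1"   # e.g. /etc/udev/rules.d/99-openmediavault-md-raid.rules
    sed -i \
      -e 's|ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"$|&, SYMLINK+="md/md0"|' \
      -e 's|ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"$|&, SYMLINK+="md/md0"|' \
      "$rules"
}

# Real use (as root):
# add_symlink_clause /etc/udev/rules.d/99-openmediavault-md-raid.rules
```

    After running it, reload the rules with udevadm control --reload-rules && udevadm trigger and re-test with mdadm --monitor --scan --oneshot.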

    Beelink ME mini OMV 8, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x

    OMV 8 in Hyper-V for testing, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x

  • Cheers. I changed the below:


    Code
    ACTION=="add|change", \
    SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
    ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192", SYMLINK+="md/md0"
    
    
    ACTION=="add|change", \
    SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
    IMPORT{program}="import_env /etc/default/openmediavault", \
    ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}", SYMLINK+="md/md0"



    this is still the output even after reboot

    Code
    mdadm --monitor --scan --oneshot
    mdadm: DeviceDisappeared event detected on md device /dev/md/md1
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0
    mdadm: NewArray event detected on md device /dev/md1
  • That is strange. I removed the statements I had added from my config and reloaded the udev rules; this is the output now:


    Code
     mdadm --monitor --scan --oneshot                 
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0

    When I revert my config and reload the udev rules, the error is gone here.


    What happens if you do this:


    udevadm control --reload-rules && udevadm trigger (reloads the udev rules)

    then

    mdadm --monitor --scan --oneshot?


  • still same :(


    Code
    cd /etc/udev/rules.d
    nano 99-openmediavault-md-raid.rules
    udevadm control --reload-rules && udevadm trigger
    mdadm --monitor --scan --oneshot
    mdadm: DeviceDisappeared event detected on md device /dev/md/md1
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0
    mdadm: NewArray event detected on md device /dev/md1
  • If you look in the /dev folder, do you have a subfolder /dev/md?


    On mine I have the device file in /dev/, so /dev/md0 exists there. In the /dev/md subfolder there's a symbolic link to /dev/md0, so /dev/md/md0 points to /dev/md0.


    How does your /dev folder look?


  • The statement SYMLINK+="md/md0" should create the subdirectory /dev/md.


    What happens if you create an md folder in /dev and put a symlink to /dev/md0 in it?


    Code
    mkdir /dev/md   # create subdir
    cd /dev/md      # change into subdir
    ln -s /dev/md0  # create symlink


    Test whether that solves it. If so, create a symlink to /dev/md1 in /dev/md too (ln -s /dev/md1, run from inside /dev/md).


    Hope this helps. The issue is that your RAID and mine exist as /dev/mdX, while mdadm --monitor --scan --oneshot looks in /dev/md for the devices. So if the udev rule did not create the symlink, maybe creating it by hand will solve it.
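    The manual steps above can be sketched as a small helper; the make_md_links name and the parameterized directory are mine (not from the thread), so the same logic covers md0, md1, or any other array:

```shell
#!/bin/sh
# Hypothetical helper sketching the manual fix above: ensure a directory
# (normally /dev/md) contains a symlink for each named md array.
make_md_links() {
    dir="$1"; shift          # target directory, e.g. /dev/md
    mkdir -p "$dir"          # create the subdir if missing
    for name in "$@"; do     # array names, e.g. md0 md1
        ln -sfn "/dev/$name" "$dir/$name"   # e.g. /dev/md/md0 -> /dev/md0
    done
}

# Real use (as root): make_md_links /dev/md md0 md1
```

    Note that /dev is typically a tmpfs, so links created by hand this way will not survive a reboot; only udev (or a boot script) can recreate them.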


  • wiz101,


    Since the 8.x upgrade I've been having the exact same DeviceDisappeared daily emails. The udev rule change did not fix the issue, but manually creating the symlink as you described in your last post fixed it for me. Thanks for your help!

  • I had changed the udev rule before the update, and that fixed it for me. In any case, once the symlinks are present it should work.


  • Thanks wiz101. Manually creating the symlink as you described has fixed this for me too :thumbup:

    Code
    root@openmediavault:~#  mdadm --monitor --scan --oneshot
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[1] sdc[2]
          976629376 blocks super 1.2 [2/2] [UU]
          bitmap: 0/8 pages [0KB], 65536KB chunk
    
    unused devices: <none>
    root@openmediavault:~# sudo /etc/cron.daily/openmediavault-mdadm
    root@openmediavault:~#
  • Cheers, this worked after doing it for each md device.

    udevadm control --reload-rules && udevadm trigger

    The output is now blank.

  • CAH1982

    Added the label "Resolved".
  • Just found out that after a reboot the settings are lost.


    In this file:

    Code
    /etc/udev/rules.d/99-openmediavault-md-raid.rules


    Adding the below seems to fix it:


    Code
    ACTION=="add|change", \
    SUBSYSTEM=="block", KERNEL=="md*", \
    SYMLINK+="md/%k"


    So the full file looks like this:
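    For context on the rule above (this is not the full file): udev expands %k to the kernel device name, so this single rule links every array (md0, md1, ...) under /dev/md. A minimal sketch of that substitution, using a helper name of my own rather than anything udev provides:

```shell
#!/bin/sh
# Sketch of what SYMLINK+="md/%k" yields: udev substitutes the kernel
# device name for %k, so each array gets a link under /dev/md.
md_symlink_for() {
    printf '/dev/md/%s\n' "$1"   # %k -> kernel name, e.g. md0
}

# md_symlink_for md0  -> /dev/md/md0
# md_symlink_for md1  -> /dev/md/md1
```

    Because the rule matches KERNEL=="md*" generically, it covers arrays that the per-device SYMLINK+="md/md0" edits earlier in the thread would miss.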

  • That is strange; in my file the settings are still there. Well, if it is fixed, it is fixed.


  • That is strange; in my file the settings are still there. Well, if it is fixed, it is fixed.

    I'm referring to creating the links manually: after a reboot, those links would need to be recreated.

    Adding the below to the rules means the links still exist after a reboot:


    ACTION=="add|change", \
    SUBSYSTEM=="block", KERNEL=="md*", \
    SYMLINK+="md/%k"



    Confirming that creating symlinks in /dev/md resolves the issue.


    It's odd: I have three MD arrays (0, 1 and 7), and the OMV 8 install created only the symlink for md0. Perhaps there is a bug?


    Maybe; one of the devs would have to advise on that. At least it's an easy fix.

  • Hi,


    I am also having this issue on my backup NAS.


    This might be a bug in OMV. I will apply this solution and wait a few more weeks to catch any further bugs before upgrading the more important NAS to OMV 8.x.


    The solution seems to work.


    Thank you!

  • Hi,

    After upgrading from OMV 7 to 8, I have the same problem with the mdadm email, though slightly different:

    I added this fix to /etc/udev/rules.d/99-openmediavault-md-raid.rules

    Code
    ACTION=="add|change", \
    SUBSYSTEM=="block", KERNEL=="md*", \
    SYMLINK+="md/%k"

    After that, I still get output from mdadm --monitor --scan --oneshot.

    The /dev/md/1 symlink also existed before the fix.


    Any suggestions?
