Getting the following error messages: 'status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint' and 30 seconds later status succeeded (0)

    • OMV 4.x
    • Resolved
    • Getting the following error messages: 'status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint' and 30 seconds later status succeeded (0)

      Hi guys,

      I installed OMV on an SSD and added 2x 3 TB HDDs in a RAID1 using btrfs. Everything worked fine for at least one year.
      Yesterday I decided to upgrade my system to 2x 4 TB, so I replaced the two disks separately using the btrfs replace command.

      Since then I get the following messages via e-mail after every startup:
      Status failed Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:00 Action: alert Host: IGF-Server Description: status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint
      And exactly 30 seconds later:
      Status succeeded Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:30 Action: alert Host: IGF-Server Description: status succeeded (0) -- /srv/dev-disk-by-label-Data-RAID is a mountpoint

      I don't know what the problem is. Everything works fine... but I don't want to have those error messages in my inbox.
    • geaves wrote:

      you can turn off the messages by going into Notifications and in the notifications tab uncheck software raid
      He's smart and does not use mdraid but btrfs' own mirror mode (differences), so I doubt you can adjust anything within 'software raid'. (A quick way to check the btrfs profile is sketched after the commands below.)

      @daniel.971 without providing more information it's somewhat impossible to help.

      Source Code

      systemctl status mountpoint_srv_dev-disk-by-label-Data-RAID
      journalctl -u mountpoint_srv_dev-disk-by-label-Data-RAID
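
      As an aside, since the btrfs mirror mode was mentioned above: a generic way to confirm that the pool really uses btrfs' own RAID1 profile rather than mdraid is to query the allocation profiles. A minimal sketch, with the mount path taken from the alert above:

      Source Code

      # Show the allocation profiles of the mounted filesystem; with
      # btrfs' own mirror mode the output contains a "Data, RAID1:" line.
      btrfs filesystem df /srv/dev-disk-by-label-Data-RAID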
    • daniel.971 wrote:

      Hi, thanks @tkaiser @geaves for your fast reply.
      The systemctl status command results in Unit mountpoint_srv_dev-disk-by-label-Data-RAID.service could not be found.
      The other one: -- No entries --
      I can also provide the corresponding line in my fstab. Maybe it is helpful...
      /dev/disk/by-label/Data-RAID /srv/dev-disk-by-label-Data-RAID btrfs autodefrag,inode_cache,nofail 0 2
      I tried the same on my system and got the same answers :) As this is a monit alert, I'm wondering if this thread might be of help. Whilst what led to the monit alerts in that thread is not the same, the solution 'might be'.
    • geaves wrote:

      I tried the same on my system and got the same answers

      Well, why would shares with the same name exist on your system as somewhere else? Please note the '/dev/disk/by-label/' path: obviously the label you assigned to the filesystem is the important ingredient.

      An example from a random OMV box next to me -- how to get the service names for the mounted filesystems:

      Source Code

      root@espressobin:~# df -h | grep srv
      /dev/sda1       466G  423G   41G  92% /srv/dev-disk-by-label-BACKUP
      root@espressobin:~# systemctl list-unit-files | grep 'BACKUP'
      sharedfolders-AO_BACKUP.mount                  enabled
      sharedfolders-TK_BACKUP.mount                  enabled
      srv-dev\x2ddisk\x2dby\x2dlabel\x2dBACKUP.mount generated
      root@espressobin:~# systemctl status 'srv-dev\x2ddisk\x2dby\x2dlabel\x2dBACKUP.mount'
      ● srv-dev\x2ddisk\x2dby\x2dlabel\x2dBACKUP.mount - /srv/dev-disk-by-label-BACKUP
         Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
         Active: active (mounted) since Sat 2018-09-08 11:05:23 CEST; 1 weeks 4 days ago
          Where: /srv/dev-disk-by-label-BACKUP
           What: /dev/sda1
           Docs: man:fstab(5)
                 man:systemd-fstab-generator(8)
          Tasks: 0 (limit: 4915)
         CGroup: /system.slice/srv-dev\x2ddisk\x2dby\x2dlabel\x2dBACKUP.mount

      Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
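
      The '\x2d' sequences are systemd's escaping of the dashes in the mount path. Rather than typing the escaped unit name by hand, systemd-escape (shipped with systemd, so present on any OMV 4.x box) can generate it. A small example using the path from above:

      Source Code

      # Escape a path (-p) and append the unit type (--suffix):
      systemd-escape -p --suffix=mount /srv/dev-disk-by-label-BACKUP
      # prints: srv-dev\x2ddisk\x2dby\x2dlabel\x2dBACKUP.mount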
    • daniel.971 wrote:

      Hi guys,

      I installed OMV on an SSD and added 2x 3 TB HDDs in a RAID1 using btrfs. Everything worked fine for at least one year.
      Yesterday I decided to upgrade my system to 2x 4 TB, so I replaced the two disks separately using the btrfs replace command.

      Since then I get the following messages via e-mail after every startup:
      Status failed Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:00 Action: alert Host: IGF-Server Description: status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint
      And exactly 30 seconds later:
      Status succeeded Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:30 Action: alert Host: IGF-Server Description: status succeeded (0) -- /srv/dev-disk-by-label-Data-RAID is a mountpoint

      I don't know what the problem is. Everything works fine... but I don't want to have those error messages in my inbox.
      The email notification comes from monit; see /etc/monit/conf.d/openmediavault-filesystem.conf. It seems that the specified mount point does not exist yet when monit triggers the first test. After 30 seconds the next test is done, and by then the filesystem is mounted. So no worries about that.
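
      For illustration only, the kind of check that produces such alerts looks roughly like this. This is a hand-written sketch, not the actual OMV-generated file (which will differ in detail); the service name is taken from the alert above:

      Source Code

      # Hypothetical monit stanza: alert when the path is not a mountpoint.
      # /bin/mountpoint exits non-zero when its argument is not mounted.
      check program mountpoint_srv_dev-disk-by-label-Data-RAID
          with path "/bin/mountpoint /srv/dev-disk-by-label-Data-RAID"
          if status != 0 then alert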
    • tkaiser wrote:

      Well, why would shares with the same name exist on your system as somewhere else?
      Ok, understood. Perhaps I should have said that I got the same/similar output: 'service could not be found' and '-- No entries --'.

      Thereby negating the necessity for you to comment that I couldn't possibly have the same answer, as the systems are totally different :) I have also seen that error elsewhere and pointed the OP to a possible solution, as it's a monit alert.

      Since this is a user help forum, does it really matter that a comment, and not a 'do this' or 'do that', is incorrect?!

      I spent my Saturday morning helping a user from Turkey via a remote session and WhatsApp get his RAID 5 back, which held his CCTV footage. I also took the time to explain to him the importance of a backup and how he could set that up. I always understood community support forums to be just that, not a place to point out when someone 'states' something incorrectly.
    • tkaiser wrote:

      geaves wrote:

      Since this is a user help forum, does it really matter that a comment, and not a 'do this' or 'do that', is incorrect?!
      Not at all. I just wanted to provide some guidance on how to get the names of such services. But as @votdev and you pointed out, it's a monit alert that can be ignored. :)
      Good to know that this issue can be ignored. But I don't want to receive such messages every day. Is there a possibility to trigger monit's first check a little later?

      @tkaiser
      Running your commands results in:
      srv-dev\x2ddisk\x2dby\x2dlabel\x2dData\x2dRAID.mount - /srv/dev-disk-by-label-Data-RAID
      Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
      Active: active (mounted) since Fri 2018-09-21 10:38:26 CEST; 11min ago
      Where: /srv/dev-disk-by-label-Data-RAID
      What: /dev/sdb1
      Docs: man:fstab(5)
      man:systemd-fstab-generator(8)
      Process: 504 ExecMount=/bin/mount /dev/disk/by-label/Data-RAID /srv/dev-disk-by-label-Data-RAID -t btrfs -o autodefrag,inode_cache (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 4915)
      CGroup: /system.slice/srv-dev\x2ddisk\x2dby\x2dlabel\x2dData\x2dRAID.mount


      Sep 21 10:38:15 IGF-Server systemd[1]: Mounting /srv/dev-disk-by-label-Data-RAID...
      Sep 21 10:38:26 IGF-Server systemd[1]: Mounted /srv/dev-disk-by-label-Data-RAID.
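
      Side note on those timestamps: systemd started mounting at 10:38:15 and finished at 10:38:26, so the btrfs mount takes about ten seconds, which is plenty of time for monit's first check to fire while the filesystem is still unmounted. To verify that race on a given boot, standard systemd/journald queries should do (nothing OMV-specific):

      Source Code

      # When did monit start, and when was the filesystem mounted this boot?
      systemctl show -p ExecMainStartTimestamp monit
      journalctl -b | grep 'Mounted /srv/dev-disk-by-label-Data-RAID'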
    • You can set the environment variable OMV_MONIT_DELAY_SECONDS=30 (see github.com/openmediavault/open…diavault/mkconf/monit#L59) in /etc/default/openmediavault.
      After that you need to execute the following commands:

      Shell-Script

      # omv-mkconf monit
      # systemctl restart monit
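
      Put together, and assuming /etc/default/openmediavault accepts plain KEY=value lines (which the linked mkconf script suggests), the whole change could look like this; the value 60 is only an illustrative choice, anything comfortably longer than the observed mount time should do:

      Shell-Script

      # echo 'OMV_MONIT_DELAY_SECONDS=60' >> /etc/default/openmediavault
      # omv-mkconf monit
      # systemctl restart monit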