Getting the following error messages: 'status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint' and 30 seconds later status succeeded (0)

  • Hi guys,


    I installed OMV on an SSD and added 2x 3TB HDDs in a RAID1 using Btrfs. Everything worked fine for at least a year.
    Yesterday I decided to upgrade my system to 2x 4TB, so I replaced the two disks separately using the btrfs replace command.


    Since this time I get the following messages via E-Mail after every startup:
    Status failed Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:00 Action: alert Host: IGF-Server Description: status failed (1) -- /srv/dev-disk-by-label-Data-RAID is not a mountpoint
    And exactly 30 seconds later:
    Status succeeded Service mountpoint_srv_dev-disk-by-label-Data-RAID Date: Wed, 19 Sep 2018 12:30:30 Action: alert Host: IGF-Server Description: status succeeded (0) -- /srv/dev-disk-by-label-Data-RAID is a mountpoint


    I don't know what the problem is. Everything works fine... But I don't want to have those error messages in my inbox.

  • There seems to be some sort of delay in the mountpoint being initialised. You can turn off the messages by going into Notifications and, in the Notifications tab, unchecking Software RAID. But you really need someone to suggest the appropriate fix.

  • You can turn off the messages by going into Notifications and, in the Notifications tab, unchecking Software RAID

    He's smart and does not use mdraid but Btrfs' own mirror mode (differences), so I doubt you can adjust anything under 'Software RAID'.


    @daniel.971 without providing more information it's somewhat impossible to help.

    Code
    systemctl status mountpoint_srv_dev-disk-by-label-Data-RAID
    journalctl -u mountpoint_srv_dev-disk-by-label-Data-RAID
  • Hi, thanks @tkaiser @geaves for your fast reply.
    The systemctl status command results in Unit mountpoint_srv_dev-disk-by-label-Data-RAID.service could not be found.
    The other one: -- No entries --
    I can also provide the corresponding line in my fstab. Maybe it is helpful...
    /dev/disk/by-label/Data-RAID /srv/dev-disk-by-label-Data-RAID btrfs autodefrag,inode_cache,nofail 0 2

  • Hi, thanks @tkaiser @geaves for your fast reply.
    The systemctl status command results in Unit mountpoint_srv_dev-disk-by-label-Data-RAID.service could not be found.
    The other one: -- No entries --
    I can also provide the corresponding line in my fstab. Maybe it is helpful...
    /dev/disk/by-label/Data-RAID /srv/dev-disk-by-label-Data-RAID btrfs autodefrag,inode_cache,nofail 0 2

    I tried the same on my system and got the same answers :) As this is a monit alert, I'm wondering if this thread might be of help. Whilst what led to the monit alerts in that thread is not the same, the solution 'might be'.

  • I tried the same on my system and got the same answers


    Well, why would shares with the same name exist on your system as somewhere else? Please note the '/dev/disk/by-label/' path: obviously the label you assigned to the filesystem is the important ingredient.


    An example from a random OMV box next to me -- how to get the service names for the mounted filesystems:
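    One way to derive the mount unit name from the mount point is systemd's own escaping helper; the sed pipeline below is a portable approximation of the same rule (strip the leading '/', escape existing '-' as '\x2d', then turn '/' into '-'). A sketch, not OMV-specific:

```shell
# On a systemd host the canonical way is:
#   systemd-escape -p --suffix=mount /srv/dev-disk-by-label-Data-RAID
# Portable approximation of the same escaping rule:
path="/srv/dev-disk-by-label-Data-RAID"
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').mount
printf '%s\n' "$unit"
# srv-dev\x2ddisk\x2dby\x2dlabel\x2dData\x2dRAID.mount
```

    The resulting unit name can then be fed to systemctl status or journalctl -u (quote it, so the shell doesn't eat the backslashes).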

  • The email notification comes from monit, see /etc/monit/conf.d/openmediavault-filesystem.conf. It seems that the specified mount point does not exist when monit triggers the first test. After 30 seconds the next test is done and the filesystem is now mounted. So no worries about that.
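    For the curious: the alert text ('... is not a mountpoint') is the output of the mountpoint(1) utility, so the check in that file is presumably of roughly this shape. This is an illustrative sketch using monit's documented check-program syntax, not the literal OMV-generated stanza:

```
# Hypothetical monit check; name and path taken from the alert above
check program mountpoint_srv_dev-disk-by-label-Data-RAID
    with path "/bin/mountpoint /srv/dev-disk-by-label-Data-RAID"
    if status != 0 then alert
```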

  • Well, why should on your system shares with the same name exist as somewhere else

    Ok, understood. Perhaps I should have said I got the same/similar output: 'service could not be found' and '-- No entries --'.


    Thereby negating the necessity for you to comment that I couldn't possibly have the same answer because the systems are totally different :) I have also seen that error elsewhere and pointed the OP to a possible solution, as it's a monit alert.


    Since this is a user help forum, does it really matter if a comment, rather than a 'do this' or 'do that', is incorrect?!


    I spent my Saturday morning helping a user from Turkey via a remote session and WhatsApp get his RAID 5 back, which held his CCTV footage. I also took the time to explain to him the importance of a backup and how he could set one up. I always understood community support forums were just that, not places to point out when someone "states" something incorrectly.

  • Not at all. I just wanted to provide some guidance on how to get the name of such services. But as @votdev and you pointed out, it's a monit alert that can be ignored. :)

    Good to know that this issue can be ignored. But I don't want to receive such messages every day. Is there a way to make monit run its first check a little later?


    @tkaiser
    Running your commands results in:

    Code
    srv-dev\x2ddisk\x2dby\x2dlabel\x2dData\x2dRAID.mount - /srv/dev-disk-by-label-Data-RAID
       Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
       Active: active (mounted) since Fri 2018-09-21 10:38:26 CEST; 11min ago
        Where: /srv/dev-disk-by-label-Data-RAID
         What: /dev/sdb1
         Docs: man:fstab(5)
               man:systemd-fstab-generator(8)
      Process: 504 ExecMount=/bin/mount /dev/disk/by-label/Data-RAID /srv/dev-disk-by-label-Data-RAID -t btrfs -o autodefrag,inode_cache (code=exited, status=0/SUCCESS)
        Tasks: 0 (limit: 4915)
       CGroup: /system.slice/srv-dev\x2ddisk\x2dby\x2dlabel\x2dData\x2dRAID.mount

    Sep 21 10:38:15 IGF-Server systemd[1]: Mounting /srv/dev-disk-by-label-Data-RAID...
    Sep 21 10:38:26 IGF-Server systemd[1]: Mounted /srv/dev-disk-by-label-Data-RAID.

  • You can set the environment variable OMV_MONIT_DELAY_SECONDS=30 (see https://github.com/openmediava…diavault/mkconf/monit#L59) in /etc/default/openmediavault.
    After that you need to execute the following commands:


    Bash
    # omv-mkconf monit
    # systemctl restart monit
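    For context: in monit's configuration such a variable typically ends up as a start delay, whose documented syntax looks like this (illustrative; the file OMV actually generates may differ):

```
set daemon 30            # run checks every 30 seconds
    with start delay 30  # wait 30 seconds before the first check after (re)start
```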

    Is this still relevant? I have added

    Code
    OMV_MONIT_DELAY_SECONDS=30

    to the config, but when I run

    Code
    omv-mkconf monit

    I receive -bash: omv-mkconf: command not found

    And after reboot I get the same two e-mails:

    Monitoring restart -- Does not exist rrdcached

    Monitoring alert -- Exists rrdcached

    NAS: OMV 5➕kernel Linux 5.4.44-2-pve
    (Intel i5 4570❄Gigabyte GA-H97N-WIFI❄8GB DDR3❄SSD EVO850➕WD Red 3TB➕WD Red 6TB➕WD Gold 8TB)
    Gigabit Internet➕Mikrotik hAP ac²

  • This thread is not relevant to you: you are running OMV5 and this thread is for OMV4, hence the omv-mkconf: command not found.

    Didn't know such details, thanks!

    On OMV5 you need to run:

    Code
    omv-salt stage run prepare
    omv-salt deploy run monit

    Thanks! Why didn't I see this in some FAQ :(

