OMV 5 - System monitoring

  • Hi,


    I was wondering, is there a way to monitor the system? What I mean is something like Netdata, but that also sends me emails when a drive is going to fail or has failed.


    Thanks,

    Normally if its in red it's bad!!!


    Machine 1 - Dell OptiPlex 790 - Core i5-2400 3.10GHz - 16GB RAM - OMV5

    Machine 2 - Raspberry PI4 - ARMv7 - 2GB - OMV5

  • Look in Notification. Enable Filesystems and S.M.A.R.T. under the Notifications tab, and configure the rest in the Settings tab.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Is there any way of knowing when the last SMART self-test was executed, i.e. the time and date? I have scheduled jobs for this, but I suspect they are not running as they should be. Is there a way to see when they last ran?
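One workable approach: `smartctl -l selftest` logs each test with the drive's power-on hours ("LifeTime(hours)") at the time it ran, and subtracting that from the current Power_On_Hours attribute gives an approximate time and date. A minimal sketch of the arithmetic, with illustrative values (on a real system you would take them from the smartctl commands named in the comments):

```shell
# "smartctl -l selftest" records the drive's power-on hours for each
# test. Subtract that from the current Power_On_Hours attribute to
# estimate how long ago the test ran. Example values below; on a real
# system read them from:
#   sudo smartctl -l selftest /dev/sda
#   sudo smartctl -A /dev/sda | grep -i power_on_hours
current_hours=1200    # drive's Power_On_Hours right now (example)
last_test_hours=1176  # LifeTime(hours) of the newest self-test entry (example)
hours_ago=$(( current_hours - last_test_hours ))
echo "last self-test ran about $hours_ago hours ago"
```

From there, `date -d "$hours_ago hours ago"` (GNU coreutils) turns the offset into an approximate calendar time.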

  • Yes, true, I have been seeing that. Unfortunately the lifetime is just a number, so it's not very handy: I have 4 drives with different lifetimes, and it's not possible for me to remember each one.


    My exact problem: I am seeing a different number of tests in the Self-Test Logs (two drives show 4, one shows only 1 and another only 2). The system has been running continuously for the last 1.5 months (with a scheduled reboot every week), and scheduled tests run every week on all drives. So I was expecting more tests and, more importantly, the same number of tests across all drives.


    What am I missing?
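One way to cross-check is to count the completed entries in each drive's self-test log (the ATA self-test log only keeps the most recent 21 results, and on USB-attached drives smartctl often needs `-d sat` to reach the SMART data at all). A sketch of the counting, using a made-up sample log so the logic is reproducible:

```shell
# Count completed self-tests in a log. Entries start with "# N".
# The sample below is illustrative; on a real system feed it from:
#   sudo smartctl -d sat -l selftest /dev/sdX   (-d sat helps with USB bridges)
sample_log='# 1  Short offline    Completed without error  00%  1176  -
# 2  Short offline    Completed without error  00%  1008  -'
count=$(printf '%s\n' "$sample_log" | grep -c '^# ')
echo "logged self-tests: $count"
```

Running that per drive (e.g. in a loop over /dev/sd?) makes it easy to spot which disks are missing scheduled runs.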

  • I will need to monitor it closely. Due to the restart, there was nothing in the SMART log. I was wondering if it would take much to include a timestamp in the Self-Test Logs page.

  • Okay, I took a look at the SMART log this morning, after the scheduled tests were supposed to have run:


    1. 3 out of the 4 devices are not being opened for monitoring; the open fails according to the smartd log (attached).


    2. I have an LVM setup with 4 Seagate USB hard disks forming 2 logical volumes. This causes the LVs to be listed under /dev/disk/by-id/ as LVM volumes. Somehow the /etc/smartd.conf file is picking up one of these instead of the actual physical disks, which appear as ata-*.


    Any ideas how to resolve this?
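For what it's worth, the two kinds of entries can be told apart by their /dev/disk/by-id/ prefixes: SATA disks appear as ata-* (USB bridges often as usb-*), while LVM volumes show up as dm-name-* or lvm-pv-uuid-* entries. A small sketch filtering a sample listing (the device names here are made up; on a real system the input would be `ls /dev/disk/by-id/`):

```shell
# Separate physical-disk entries from LVM entries in /dev/disk/by-id.
# Sample names are hypothetical; on a real system generate the list with:
#   ls /dev/disk/by-id/
sample_ids='ata-ST4000DM004_EXAMPLE1
ata-ST4000DM004_EXAMPLE2
dm-name-vg0-lv0
lvm-pv-uuid-EXAMPLE'
physical=$(printf '%s\n' "$sample_ids" | grep '^ata-')
echo "$physical"
```

Only the ata-* names should end up in smartd.conf; the dm-* and lvm-pv-uuid-* entries have no SMART data of their own.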

  • Okay, I just manually restarted the smartd service and it was able to open all my drives for monitoring. So I assume the problem is that the smartd service starts up too early, before the disks have been initialized under /dev/disk.


    How do I delay the start of smartd service ?

  • How do I delay the start of smartd service ?

    Figured out the answer to this: I ran sudo systemctl edit smartd.service and added the following:

    Code
    [Service]
    ExecStartPre=/bin/sleep 60
    StartLimitInterval=300s
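For reference, `systemctl edit` stores that snippet as a drop-in under /etc/systemd/system/smartd.service.d/, so it survives package upgrades. Note that on newer systemd versions the rate-limit key is spelled StartLimitIntervalSec and belongs in the [Unit] section rather than [Service]; an equivalent drop-in might look like:

```
[Unit]
StartLimitIntervalSec=300

[Service]
ExecStartPre=/bin/sleep 60
```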


    But my /etc/smartd.conf still shows one of the drives as an LVM device and does not show any scheduled test for that drive (shown below). How do I override this permanently? I am assuming any change I make directly to this file will not survive an upgrade.



    Any suggestions would be appreciated. TIA!
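Regarding a permanent override: your assumption is right in that OMV regenerates /etc/smartd.conf from its own configuration database, so direct edits get overwritten. The durable route should be to toggle monitoring per device in the web UI (Storage → S.M.A.R.T. → Devices) so that only the physical ata-* disks are monitored. For reference, a hand-written smartd.conf entry for one physical disk looks like the following (the device id is a placeholder and the schedule is just the man page's daily-short-test example):

```
# /etc/smartd.conf entry (OMV-generated; manual edits here are overwritten)
# -a: monitor all attributes; -s (S/../.././02): short self-test daily at 02:00
/dev/disk/by-id/ata-EXAMPLE-DISK -a -s (S/../.././02)
```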

  • Did you ever manage to get an answer to this?

