Posts by Jormaster2k

    I think adding "if failed host 127.0.0.1 port 3493 type TCP then restart" to /etc/monit/conf.d/openmediavault-nut.conf will do it. Can you check that?



    Code
    check process nut-server with matching upsd
    group nut
    start program = "/bin/systemctl start nut-server"
    stop program = "/bin/systemctl stop nut-server"
    mode active
    if failed host 127.0.0.1 port 3493 type TCP then restart

    After that you need to run systemctl restart monit.
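    Before relying on monit, it can help to probe upsd's TCP port by hand the same way the check above does. A minimal sketch (it assumes netcat is installed; a missing nc simply reads as "down"):

```shell
#!/bin/sh
# Probe upsd's TCP port the same way monit's "if failed host ... port 3493" check will.
# Prints "up" or "down"; "down" also covers nc not being installed at all.
if nc -z -w 2 127.0.0.1 3493 2>/dev/null; then
    STATE=up
else
    STATE=down
fi
echo "upsd port 3493: $STATE"
```

    If this says "down" while nut-server is running, monit will keep restarting the service, so fix the upsd side first.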

    Hi Volker,
    I am back from training today.
    I ran an OMV update and got openmediavault-nut 5.0.3 today.
    Then I saw the comment above and added the line.
    I ran "systemctl restart monit" as directed.


    Now time will tell: even with my workaround checking every 5 seconds, I still got random stop/recover cycles every 3-4 hours or so, so only 6-7 extra notifications per day instead of 300, which is good.
    That was also after updating to openmediavault-nut 5.0.2. By the way, very elegant use of monit: I would have done that myself after your suggestion, had you not already done it right away in 5.0.2 :) .
    I noticed that after the openmediavault-nut 5.0.2/3 updates I am occasionally getting a different error message.


    Quick question: should I stop my workaround script to see if your fix worked, or should I leave them working in tandem? I think right now it is a tad overkill running your 5.0.3 fix + the code fix above + my workaround script all at the same time. Which should I stop, if any?


    PS: thank you so much for taking care of this so quickly and with official updates: I am really honored.
    Thanks again!

    Hi wonderful people,
    I recently upgraded my UPS to a Cyberpower CP1500PFCLCD, one of the most popular UPS models this year, as it is affordable and outputs a pure sine wave (I won't debate the pros/cons of sine wave here).
    I use it only for my Openmediavault baby, so I connected it directly via USB. It works, I am happy-ish.
    Let me preface this by saying I am sharing it both to
    a) temporarily help those who, like me, are going bananas and NUTs (pun intended) over this issue (see below) and
    b) start a thread that may hopefully lead to a more stable/permanent solution


    So... I installed the official Openmediavault UPS NUT plugin, configured it properly as per various threads here (see settings picture below) and everything seems to be working fine... except...
    ISSUES:
    - I cannot monitor the status remotely


    - The plugin simply fails to shut down the server on loss of power (explained below), making it moot to have it running at all.


    The server loses connection randomly for no apparent reason, consistently anywhere between 5 and 75 minutes after I manually stop/restart the service, because of the dreaded (and apparently well-known) "Data stale" issue.
    - DRAMA: I get email notifications about this every 5 minutes, so if I am not at home to restart the service manually, I spam myself with HUNDREDS of emails. I need/want all the other notifications, and for most of them I can pick and choose, but I cannot disable the UPS ones (not in the list yet (hint, hint)).
    - Of course, with the UPS disconnected, there is no point running the plugin or the monitoring at all, as the server will neither receive nor issue the "graceful shutdown" command.
    I searched everywhere, finding only outdated solutions for OMV2 and OMV3 that no longer apply, and I attempted every trick in the book (e.g. changing the upsd.conf values for maxstartdelay and maxretry, both of which are ignored), but nada...
    - Monitoring: no software for Windows or Linux is able to establish a connection to the OMV server from the local network on its local address and port 3493 (or any other port). I checked with the FW port open and with no FW at all, TCP and UDP.
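    A side note on the remote-monitoring failure (an educated guess on my part, not something I have verified on this box): by default upsd listens only on 127.0.0.1, so remote clients are refused unless upsd.conf has a LISTEN line for the LAN address. A quick way to see what is actually bound to the NUT port:

```shell
#!/bin/sh
# List what is bound to the NUT port 3493, and on which address.
# (ss is part of iproute2, installed by default on Debian.)
LISTENERS=$(ss -tln 2>/dev/null | grep ':3493')
if [ -n "$LISTENERS" ]; then
    echo "$LISTENERS"
else
    echo "nothing listening on port 3493"
fi
```

    If the output shows only 127.0.0.1:3493, remote tools will never connect regardless of firewall settings.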


    TEMPORARY SOLUTION:
    I finally had enough NUTs, so I created a shell script, added to cron (at reboot), that runs continuously, checks the UPS link status, and restarts the service if the link is down. This way I still get an email notification every time it disconnects and reconnects, but not one every five minutes for hours: 20-30 notices a day instead of 300.
    I also noticed that since I started running this script I get almost no failure notices at all, possibly because of the harassing nature of the script itself.
    Basically this script, launched at reboot, runs in the background and checks every 10 seconds whether the UPS link is up, using the "upsc xyz" command (upsc comes pre-installed with OMV).
    upsc exits with status 0 (link up) or 1 (link down), visible with echo $?, hence the idea of using it to trigger a driver/daemon restart via "upsdrvctl start".
    To launch it silently, without output, I use a second shell script, which is also what I call from the OMV task scheduler at reboot.
    I created two shell script files in root home:
    - touch /root/checkups.sh
    - touch /root/upsfix.sh
    made them executable by root user and group only.
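    Code: checkups.sh (a minimal sketch of the loop described above; "xyz" is a placeholder for the UPS name shown by "upsc -l", and the guard around the loop just avoids hammering a machine where NUT is not installed)

```shell
#!/bin/bash
# checkups.sh - minimal sketch of the watchdog loop described above.
# "xyz" is a placeholder: use the UPS name configured in the NUT plugin
# (check yours with: upsc -l).

UPS_NAME="xyz"
INTERVAL=10   # seconds between checks

# One probe: upsc exits 0 when the UPS link is up, non-zero when it is
# down (e.g. the dreaded "Data stale").
check_link() {
    upsc "$UPS_NAME" >/dev/null 2>&1
}

# If NUT is not installed at all there is nothing to watch; exit quietly
# instead of looping on a guaranteed failure.
if command -v upsc >/dev/null 2>&1; then
    while true; do
        if ! check_link; then
            # Link lost: restart the driver so monitoring recovers.
            upsdrvctl start
        fi
        sleep "$INTERVAL"
    done
fi
```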


    _______________________________________________

    Code: upsfix.sh
    #!/bin/bash
    # To launch it without output
    /root/checkups.sh > /dev/null 2>&1


    Then in the OMV control panel for the System \ Scheduled Jobs (cron), I created a task,
    - At reboot, as Root, execute "sleep 300 && /root/upsfix.sh"


    One can change the check loop interval from 10 seconds to whatever, though IMHO I wouldn't go lower than 5 seconds or higher than 300 (5 minutes).
    5 minutes (sleep 300) is also how long I chose to have the launcher wait after reboot before starting the script, just to make sure everything is loaded and up and running first.


    SO: with this I fixed the SPAM, temporarily, and now everything seems to be working fine. I pulled the power to the UPS to test while it was reporting the UPS link disconnected; 10 seconds later the link reconnected and the plugin issued the shutdown.


    Question to you wonderful people: can we do any better than this workaround?


    Please feel free to kill, mock and denigrate my code skills; I'd actually really love it if you could do better and show me how.
    It is always a good time to learn new things !


    You guys have a wonderful week !

    Hi wonderful people,


    I am having a never-seen-before issue that started right after last week's OMV (Debian) Linux kernel updates and the simultaneous Usul upgrades.
    So my current setup is Openmediavault 5.2.4-1 (Usul) running on Linux 5.4.0-0.bpo.2-amd64 #1 SMP Debian 5.4.8-1~bpo10+1 (2020-01-07) x86_64.


    For context: I religiously apply updates and then reboot the server as soon as I get each cron alert, within 48 hours, which is standard best practice in my lab.
    Since that update (1/10/2020) I have been getting this error during the boot-up sequence. I noticed it only because I was in front of the console during the upgrades and reboots; otherwise I would not even have seen it.


    "ipmi_si dmi-ipmi-si.0: IRQ index 0 not found
    ipmi_si IPI0001:00: IRQ index 0 not found"


    That's it, no other error message ?( , and they show up right at the end of the boot sequence, just before the main login prompt.
    The IPMI interface on my motherboard (Supermicro) works just fine: I can access it, launch reboot/shutdown commands and the Java KVM remotely.
    I scoured the net for this error and found only one similar reference, for the recent Clear-linux-native-5.4.2-875 distribution update, on its official GitHub bug tracker.
    They consider it a false positive and marked it as a bug. 8|
    Could it also be a false positive for this Usul / SMP Debian version?


    I admit I have no knowledge of IPMI module commands/tools/debugging, hence I do not know how to query any internal tool for further detail: suggestions are welcome !!
    The reason I am not going nuts about this is that everything is working perfectly, IPMI interface included. My only real concern would be if Usul were actually using IPMI on my motherboard for triggers, power monitoring or other functions, and now is not.


    Alas, I would not even know whether those functions are working, as I cannot find any ipmi_si or dmi-ipmi-si module/reference in syslog to query.
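    For anyone who wants to poke at the same thing: the modules and the boot message are visible outside syslog (generic Linux commands, nothing OMV-specific; dmesg may need root):

```shell
#!/bin/sh
# Count loaded IPMI kernel modules; the boot message itself lives in the
# kernel ring buffer (dmesg), not in syslog.
IPMI_MODULES=$(lsmod 2>/dev/null | grep -c '^ipmi')
echo "IPMI modules loaded: $IPMI_MODULES"
dmesg 2>/dev/null | grep -i 'ipmi_si' || echo "no ipmi_si messages visible (or dmesg restricted)"
```

    If the modules are loaded and the interface works, the "IRQ index 0 not found" line is likely just the driver falling back to polled mode, as in the Clear Linux report above.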


    Any takers? Suggestions? Shall I just ignore / "snooze-button-on-worry" for now?


    Yours truly,
    peace and love.
    Jor

    This is going to be just about impossible to fix because it only happens when you change the mounting options of the pool's underlying filesystems and they need to be remounted.

    Actually I investigated a bit more and found the issue and a solution, which is in fact a reuse of an existing one !!!


    The issue is persistent: every time omv-salt is run, either by me directly or by the OMV GUI to apply webUI changes (e.g. adding/removing a share), the issue comes back: a forced unmount of the unionFS, and at reboot an assertion error for all the shared folders related to that unionFS mount (see picture above).
    I suspect that when the omv-salt command runs, it reorganizes the order in which the OS mounts the filesystems and the other shared folders, via fstab and then the systemd mount units.


    So, the issue is that every time an OMV reconfigure happens, the unionFS gets unmounted :!: , and at reboot OMV attempts to start the systemd mount units (e.g. /etc/systemd/system/sharedfolders-media.mount) BEFORE their fstab dependency - the related unionFS filesystem - is done mounting :!::!: .
    I am not sure whether this happens because unionFS takes longer to mount, or whether it is a bug, but it is repeatable and persistent. If I were to fix this the clean webUI way, I would have to redo all my shares every time it happens.


    Alas, hurricanehrndz to the rescue !! (See his post in this forum : "MergerFS folders not mounted in /sharedfolders"). :thumbsup:

    His elegant workaround was to create a wait service and then a systemd override for every shared-folder mount that depends on the unionFS filesystem, so that each one waits for the unionFS to be mounted first.
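    His post has the complete version; the core of the override is roughly this (a sketch on my part, not his exact file; the mount unit name and the unionFS path are the ones from my own setup, yours will differ):

    Code: /etc/systemd/system/sharedfolders-media.mount.d/waitfs.conf

```ini
[Unit]
# Make this shared-folder mount wait until the unionFS filesystem is
# mounted first; RequiresMountsFor implies both Requires= and After=.
RequiresMountsFor=/srv/84b16c30-bb58-46d1-bff3-799961a6197b
```

    Followed by a "systemctl daemon-reload" to pick it up.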


    Voila', everything now works perfectly again! No need to destroy and rebuild my shares in the WebUI every single time omv-salt is launched to apply config changes: at the next reboot everything runs smoothly.


    I hope this is a bug that can be fixed in a future update: OMV should adopt a wait method for unionFS-based shared folders and/or make them explicitly depend on the unionFS being mounted first.


    I hope this also helps others with a situation like mine.


    Thank you again for your help !!

    omv-mkconf is now omv-salt deploy run in most cases. So, it is omv-salt deploy run fstab. Setting the default should still work as well, but you need to run the omv-salt command.
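    Putting that together, the end-to-end procedure looks roughly like this (a sketch, not official docs; step 1 writes the variable, while steps 2-3 are left as comments since they touch the live system; on a real box point OMV_DEFAULTS at /etc/default/openmediavault):

```shell
#!/bin/sh
# Sketch: persistently drop "noexec" from the EXT4 mount defaults on OMV5.
# On a real box OMV_DEFAULTS is /etc/default/openmediavault; it defaults
# to a scratch file here so the sketch is safe to dry-run.
OMV_DEFAULTS="${OMV_DEFAULTS:-/tmp/omv-defaults.sketch}"

# 1) Override the default EXT4 mount options (gderf's list, keeping exec):
cat >> "$OMV_DEFAULTS" <<'EOF'
OMV_FSTAB_MNTOPS_EXT4="defaults,nofail,user_xattr,exec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0"
EOF

# 2) Regenerate fstab (this is what replaced the old `omv-mkconf fstab`):
#        omv-salt deploy run fstab
# 3) Remount the filesystems (or reboot) so the new options take effect.

grep 'OMV_FSTAB_MNTOPS_EXT4' "$OMV_DEFAULTS" >/dev/null && echo "option line written"
```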

    Hi Aaron,


    It worked like a charm for the standard EXT4 filesystems: fstab was edited and all filesystems are now mounted without noexec. Thank you !!


    So that's resolved, BUT...


    Unfortunately I have to report two bugs, one potential and one immediate, in the hope that you can direct this to whoever may be interested, in the appropriate dev areas... or the forum bug area for OMV5.
    1) While launching the salt command, I got a 'DeprecationWarning' (see the extract below). It may need some Python code remediation at some point: who/where should I report this to?
    2) While fixing fstab, the omv-salt command unfortunately also MESSED with the unionFS mountpoint I have (created with the WebUI plugin, of course), forcing an unmount. I am not sure this was a big issue though, because I rebooted immediately to test whether fstab mounted correctly, and the reboot did in fact restore the unionFS mount itself at /srv/84b16c30-bb58-46d1-bff3-799961a6197b.



    Yet during that reboot I got all-new error messages, and all the SMB shares associated with the unionFS filesystem were broken.
    The error captured at reboot was [ASSERT] Assertion failed for Mount shared folder xxxxx to /Sharedfolders/xxxxx (where xxxxx is each of my shares): the unionFS was mounted correctly but the shared folders were broken.
    To fix that I had to delete all the SMB shares in the WebUI and re-create them.
    Again, it's a bug, and I fixed my own personal issue, but this might be of interest to the devs.


    So, below is an extract of the omv-salt deploy run fstab command output: first the initial output with the deprecation warning, then a whole bunch of correct mounts for my filesystems (except the unionFS), then at the bottom the log of the forced unmount.
    ---------- #Issue 1

    /usr/lib/python3/dist-packages/salt/modules/file.py:32: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    from collections import Iterable, Mapping
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
    *salt.utils.args.get_function_argspec(original_function)


    ---------- #Issue 2 (also see attached picture of boot errors)
    ID: mount_no_bind_mountpoint_dc7a7b80-8828-47a3-a4c6-e2ebe6dccd31
    Function: mount.mounted
    Name: /srv/84b16c30-bb58-46d1-bff3-799961a6197b ## This is the generated name of my unionFS filesystem (made of 6 of my 8 disks)
    Result: True
    Comment: Target was successfully mounted
    Started: 22:30:04.614104
    Duration: 162.709 ms
    Changes:
    ----------
    mount:
    True
    umount:
    Forced unmount because devices don't match. Wanted: 84b16c30-bb58-46d1-bff3-799961a6197b, current: 1of8:2of8:3of8:4of8:5of8:6of8, /etc/1of8:2of8:3of8:4of8:5of8:6of8

    Summary for debian
    -------------
    Succeeded: 21 (changed=1)
    Failed: 0
    Total states run: 21
    Total run time: 612.031 ms

    -------------


    I resolved by deleting the shared folders using the WebUI and re-making them again.
    Still, why would this happen?


    I hope this helps somehow.
    Thanks much again !!

    Files

    • problem.png

      (298.09 kB)

    Hi masters, I hope you are doing well.


    I need to remove "noexec" from the EXT4-mounted drives in my system (you know, the usual Docker issue).
    I could do it the "dirty way" by modifying fstab manually and remounting, but I would like a persistent way to do so, and the WebUI does not go into those details yet, not even in OMV5 (would be a nice feature though.. hint hint).


    ANYWAY, I tried the following:
    - the Techno-DAD way (he took it from openmediavault.readthedocs.io/en/latest/various/fs_env_vars.html), modifying the attributes in the openmediavault config.xml file. Did not work because omv-mkconf no longer exists in OMV5, so I cannot launch the necessary omv-mkconf fstab command (I looked everywhere). FAIL
    - the gderf way (master moderator), setting the environment variable OMV_FSTAB_MNTOPS_EXT4 in /etc/default/openmediavault. Basically a) set the variable, b) restart the daemon, c) unmount/remount the drives using the WebUI:
    OMV_FSTAB_MNTOPS_EXT4="defaults,nofail,user_xattr,exec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0"
    and then restart the daemon (I even rebooted, actually). Nope: the variable is ignored, which suggests it is deprecated/no longer valid. UTTER FAIL


    So, the question is: with omv-mkconf gone from the new release and the mount-option variables apparently deprecated, how can one persistently and CLEANLY remove noexec from EXT4 mounts ?


    Thanks in advance !