Posts by nrand

    Hi, OK, I see what is going on:

    After a suspend your network interface is stating it is down; the relevant bit from the log:

    DEBUG: '_check_networkconfig(): Network interfaces status: down'

    The script will skip any network interface that is not in one of the states: unknown, dormant, or up.

    I suspect the script is being invoked before the network interface has fully come back from the suspend. The next thing to establish is whether the network interface ever returns to one of the supported states. To know this I need one last piece of information from you both. Once the system has recovered from a suspend (i.e. you can log into the OMV box via ssh or similar), what is the output of:

    cat /sys/class/net/<interface name>/operstate


    <interface name> = enp3s0, or relevant network interface

    If you can get the output of the above after a suspend then I will rework the code to fix this.
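    If it helps, here is a quick convenience loop (nothing autoshutdown-specific, just plain sysfs) to dump the operstate of every interface in one go after a resume:

    ```shell
    # Print "name: operstate" for every network interface on the system
    for iface in /sys/class/net/*; do
        printf '%s: %s\n' "$(basename "$iface")" "$(cat "$iface/operstate")"
    done
    ```

    That way you can also see whether any other interface (not just enp3s0) is stuck in a state the script rejects.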

    FYI: this is the change that is affecting you:…c3064ab36dd41392c496d0dcb

    For the odd state seen when you set FORCE_NIC="enp2s0, eth0, eth1", there are a few issues at play:

    1. The above is invalid as it is set to <value><comma><space><value>; the space is incorrect and needs to be removed.

    2. There is currently no validation in the script for FORCE_NIC (I will fix this).

    3. The regex filter for finding network interfaces is also a bit too loose and adds to the odd behaviour (I will tighten this up).
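    For points 1 and 2, a minimal sketch of what FORCE_NIC validation could look like (the function name validate_force_nic is hypothetical and not part of the current script; real interface names may also contain '-' or '.', which a proper fix would allow):

    ```shell
    # Hypothetical validator: FORCE_NIC must be a comma-separated list of
    # interface names with no spaces, e.g. "enp2s0,eth0,eth1"
    validate_force_nic() {
        [[ "$1" =~ ^[[:alnum:]]+(,[[:alnum:]]+)*$ ]]
    }

    validate_force_nic "enp2s0,eth0,eth1" && echo "valid"
    validate_force_nic "enp2s0, eth0, eth1" || echo "invalid: spaces are not allowed"
    ```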

    Hopefully, as soon as you post the output of the cat command I will be able to work out a fix for you both.

    I need the script in DEBUG mode; putting the script into "verbose mode" will turn on DEBUG and show how it is scanning the network interfaces. Without this I cannot tell why it is not working. I am also surprised the above fix works, so I am missing something. It may also help to have a DEBUG log with the fix and one without it.

    To turn on debug in the GUI, go to "Syslog Configuration" and enable "Verbose Mode"; without this I cannot fix the real issue.

    Hi, can you set your logging to VERBOSE in the GUI for autoshutdown and post the part of the log just before the error occurs? It should look similar to the below:

    DEBUG: '_check_networkconfig(): Network interfaces: enp2s0'

    DEBUG: '_check_networkconfig(): Network interfaces status: up'

    DEBUG: '_check_networkconfig(): Network interfaces IPv4 address: XXX.XXX.XXX.XXX'

    DEBUG: '_check_networkconfig(): Network interfaces address valid: true'

    INFO: '_check_networkconfig(): 'enp2s0' has IPv4 address: XXX.XXX.XXX.XXX'

    I need to know your network interface status when the error occurs, as I suspect I have missed a state change; this bit of the script was made stricter in the last few changes.

    Hi, bridge interfaces are not well supported at present, but that does not mean they should not be. Currently the script allows interfaces starting with en, eth, wlan, bond or usb. I suspect your bridge starts with brXXX. There are a couple of ways to fix this.

    The script has an expert setting, FORCE_NIC, which you can set so the script is forced to use the bridge interface (see /etc/autoshutdown.default for more details). However, a more robust solution, assuming you are using a bridge interface starting with 'br', is to amend the script, which is super easy. You want to look at line 1130 of /usr/sbin/autoshutdown:

    local -r net_ifaces="${FORCE_NIC:-"en,eth,wlan,bond,usb"}"

    Change it to the below (assuming you are using a standard bridge name):

    local -r net_ifaces="${FORCE_NIC:-"en,eth,wlan,bond,usb,br"}"

    The above change should make the script check the bridge interface and validate it. If the above script change works, please raise a bug at:…enmediavault-autoshutdown with the details and I will add it to the next release.

    If this does not work, please post details of your bridge set-up and I will debug further.
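    To double-check what your bridge is actually called before adding its prefix, iproute2 can list bridge interfaces directly:

    ```shell
    # List bridge interfaces by name (typically br0, br-lan, etc.)
    if command -v ip >/dev/null; then
        ip -o link show type bridge | awk -F': ' '{print $2}'
    fi
    ```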

    Can you try moving /usr/lib/systemd/system-sleep/autoshutdown-restart to /lib/systemd/system-sleep/autoshutdown-restart? In the 5.1.7 update we moved to systemd start-up rather than pm-utils. If this works, you should see the script (the autoshutdown-restart above) write a log entry to /var/log/autoshutdown.log on the restart; if this is not logged then the script is still failing for some reason. Also, can you verify you are using the 'systemctl suspend' command?
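    A sketch of the move with a safety check first (paths are the ones from this thread; adjust if your package placed the file elsewhere, and run with root privileges):

    ```shell
    src=/usr/lib/systemd/system-sleep/autoshutdown-restart
    dst=/lib/systemd/system-sleep/autoshutdown-restart

    # Only move the hook if it is actually at the old location
    if [ -f "$src" ]; then
        mv "$src" "$dst" && echo "moved hook to $dst"
    else
        echo "nothing to move: $src not found (perhaps already moved)"
    fi

    # Then suspend via systemd and, after resume, check the log:
    #   systemctl suspend
    #   tail -n 20 /var/log/autoshutdown.log
    ```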

    Please leave feedback if the above works and raise a bug in:…enmediavault-autoshutdown and I will jump on a fix ASAP.

    I looked in all the logs on the system and can see nothing that looks like an error relating to this. I checked that my drive is OK, is running, and is not full. Oddly, there is little to no logging from OMV itself.

    I am running out of things to check!

    I tried to reconfigure the web control panel; no error showed up, and still no login is presented in the web interface. Are there any logs to look at that may help?

    dpkg -l | grep php - Output:

    apt-cache policy php - output:

    Installed: (none)
    Candidate: 2:7.3+69
    Version table:
    2:7.3+69 500
    500 buster/main amd64 Packages

    cat /etc/apt/sources.list - Output:

    deb buster main
    deb-src buster main
    deb buster/updates main contrib non-free
    deb-src buster/updates main contrib non-free
    # buster-updates, previously known as 'volatile'
    deb buster-updates main contrib non-free
    deb-src buster-updates main contrib non-free


    I have hit a problem with OMV. If I go to the login screen I am no longer presented with a login box; I just get the OMV background.

    I have tried clearing all caches on both the browser and OMV (using omv-firstaid), but I am still not presented with a login. I have also used a separate system to check it is not a local problem. This seemed to start when Debian released an upgraded PHP package (I think for security fixes). I cannot see any error in any logs or journals on my OMV system.

    I am running OMV (5.5.4-1).

    Any help as to where the problem may be, or how to debug it, would be appreciated.
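    For anyone hitting the same blank-login issue: on a stock OMV 5 / Debian Buster install, likely starting points are the nginx error log for the web UI and the php-fpm journal. The log path and unit name below are assumptions about a default setup, not confirmed from this thread:

    ```shell
    # nginx serves the OMV web UI; PHP failures often land in its error log
    # (path is an assumption based on a default OMV 5 install)
    log=/var/log/nginx/openmediavault-webgui_error.log
    if [ -f "$log" ]; then
        tail -n 50 "$log"
    else
        echo "no nginx error log at $log"
    fi

    # php-fpm errors on a systemd system (Buster ships PHP 7.3):
    #   journalctl -u php7.3-fpm --since "1 hour ago"
    ```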



    Is anyone able to review:…ault-autoshutdown/pull/66

    This pull request adds support for ipv4-mapped ipv6 addresses, as the 'ss -n' command will display an ipv4-mapped address as '[::ffff:<ipv4 address>]:<port>' rather than just an ipv4 address. In such an instance the connection addresses will be similarly formatted and must be adjusted too.

    This pull request additionally cleans up the connection output so that all of the ipv4 connection addresses are presented for a port.

    The change allows Docker containers with host networking to be correctly detected by the auto-shutdown plugin. This is required because Docker sets up ipv4-mapped addresses so that it can easily support both ipv6 and ipv4 port bindings. If 'netstat' is used, it reports an ipv4-mapped port in standard ipv4 format; however, 'ss -n' detects the ipv4-mapped port and reports it in ipv6 format, hence the above pull request.
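    The gist of the normalisation can be sketched like this (illustrative only; the actual code in the pull request differs): strip the '[::ffff:…]' wrapper so a mapped address compares equal to its plain ipv4 form.

    ```shell
    # 'ss -tn' can print an ipv4-mapped address such as [::ffff:192.168.1.50]:445.
    # Rewrite it to plain ipv4 so existing ipv4 matching keeps working:
    normalize_v4mapped() {
        sed -E 's/\[::ffff:([0-9.]+)\]/\1/g'
    }

    echo '[::ffff:192.168.1.50]:445' | normalize_v4mapped
    # -> 192.168.1.50:445
    ```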

    To test the pull request: if you have autoshutdown installed, you can simply replace your '/usr/sbin/ ' with the one in the pull request and restart the autoshutdown service (make sure you back up your old '' file first so it can be restored after testing). The script should work as before, but will now handle ipv4-mapped addresses and tell you all the IPs connected to the configured ports autoshutdown is watching.

    Thanks in advance

    Happy testing.

    Hi, I have upgraded from OMV4 to OMV5 and this went OK. However, I needed to regenerate the fstab file, which I did as follows:

    omv-salt deploy run fstab

    This produced a non-working fstab; the relevant section is below:

    # >>> [openmediavault]
    b616b60c-896a-4ffb-93a7-ffea1c206951 /media/b616b60c-896a-4ffb-93a7-ffea1c206951 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,,jqfmt=vfsv0,acl 0 2
    b5bc14e4-183a-449f-a21d-8744e0bd3627 /media/b5bc14e4-183a-449f-a21d-8744e0bd3627 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]

    If the fstab is inspected, the leading 'UUID=' is missing from the auto-generated part, so these entries will not mount. Even more oddly, '/etc/resolv.conf' was removed. The fstab was generated from the following model in the config.xml. (If you need the network config I can supply this as well; the advanced section of the networking config has a DNS server entry, and it is using the new systemd networking.)

    I can fix both files easily by adding the 'UUID=' to the fstab and re-setting-up '/etc/resolv.conf'. However, what did I do incorrectly: is the config.xml wrong, and why is this affecting '/etc/resolv.conf'?

    Any help as to why this is happening is most welcome.