RemotePlugin: Has "Service Restart=on-failure" with "StartLimitIntervalSec and StartLimitBurst" ever been suggested/requested?

  • @ryecoaaron


    Remote mounts will fail and remain in a failed state if a remote source is unavailable at OMV boot time, or the remote system goes off-line. Manual intervention is required to restart any failed remote mount. Has the use of "Service Restart=on-failure" with "StartLimitIntervalSec and StartLimitBurst" in the associated systemd mount unit ever been suggested/requested? This sort of thing: https://ma.ttias.be/auto-restart-crashed-service-systemd/


    Are there any practical reasons why this is a bad idea?
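    For reference, the pattern the linked article describes applies to service units, roughly like the sketch below (values are illustrative; note that on current systemd the start-limit settings live in the [Unit] section):

    Code
    [Unit]
    StartLimitIntervalSec=300
    StartLimitBurst=5

    [Service]
    Restart=on-failure
    RestartSec=10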

    • Official post

    Has the use of "Service Restart=on-failure" with "StartLimitIntervalSec and StartLimitBurst" in the associated systemd mount unit ever been suggested/requested? Are there any practical reasons why this is a bad idea?

    Those parameters aren't valid on mount units as far as I know. The plugin used to use automounts to perform similar functions, but OMV doesn't support autofs. Even if autofs were supported, it would be a challenge to implement, since the filesystems are reported as autofs, not nfs, cifs, etc.


    Due to OMV's design, filesystems that aren't always available will be a problem, just like a disk being taken out. OMV just always expects them to be available. I'm open to suggestions, but I have found little else to make this better.

    omv 7.7.10-1 sandworm | 64 bit | 6.11 proxmox kernel

    plugins :: omvextrasorg 7.0.2 | kvm 7.1.8 | compose 7.6.10 | cterm 7.8.7 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.3.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • @ryecoaaron You're right, they are only valid for service units. After finding and skimming this thread (Retry option for mount units) I realise now my question was extremely naive. Poettering makes his view plain in that thread.


    Someone decided to make the mount a service instead, since kernel mounts for NFS and CIFS have no retry mechanism of their own, e.g.:
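    Something along these lines (my sketch, not that thread's exact unit; server, export, and mount point are hypothetical, and it assumes a systemd recent enough to allow Restart=on-failure on Type=oneshot units):

    Code
    [Unit]
    Description=Mount nas.local:/export with retry
    Wants=network-online.target
    After=network-online.target
    StartLimitIntervalSec=0

    [Service]
    Type=oneshot
    ExecStart=/bin/mount -t nfs nas.local:/export /srv/remote
    Restart=on-failure
    RestartSec=30

    [Install]
    WantedBy=multi-user.target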



    Other suggested workarounds were more complex, taking the network state into account.



    I shouldn't think you'd want to take that route. After all, isn't the retry thing just an edge case as far as OMV is concerned?

    • Official post

    Other suggested workarounds were more complex, taking the network state into account.

    I shouldn't think you'd want to take that route. After all, isn't the retry thing just an edge case as far as OMV is concerned?

    I don't think using a service is a horrible idea. I would be curious whether it really helps the problem. It might be an edge case, but if it worked for all cases in OMV, that is all that matters.


    Another approach might be a service that acts as a mount "manager". The problem is ordering, and deciding what to do when something a service depends on isn't mounted. Example:


    A system using ext4 on top of LUKS on top of mdadm RAID, with docker, nfs, and samba using that ext4 filesystem. If the mount manager found that the ext4 filesystem was not mounted because LUKS was not unlocked, it would want to mount it. But if it did mount the ext4 filesystem, would it then restart docker, nfs, and/or samba?


    Another example would be a remotemount being used in a docker container. If the remote mount went down, would the mount manager keep retrying until the remote server was back up? And once it was remounted, would it restart the docker container?
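    For what it's worth, the polling core of such a manager could be tiny; this hypothetical sketch deliberately leaves the hard decisions above as comments:

    Code
    #!/bin/sh
    # Hypothetical mount manager: remount anything from the list that has dropped.
    MOUNTS="/srv/remotemount /srv/data"
    while true; do
        for m in $MOUNTS; do
            if ! mountpoint -q "$m"; then
                # Assumes an fstab entry exists for $m; retried next round on failure.
                mount "$m" || continue
                # Open question from above: restart docker/nfs/samba that use $m?
            fi
        done
        sleep 60
    done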


  • @ryecoaaron The idea of a general "mount manager" service sounds like a viper's nest to me, but I suppose it depends on how "general" it needs to be. What did people do before the days of systemd to implement some kind of remote-mount retry? Does the idea of using noauto in remote mounts and then creating a service to restart the remote mounts have any legs?

    • Official post

    The idea of a general "mount manager" service sounds like a viper's nest to me

    Oh it is. Not even sure I want to go down that road. Just throwing ideas out there.


    What did people do before the days of systemd to implement some kind of remote-mount retry?

    autofs. It does a very good job, but it is problematic for OMV, docker, and other services that don't like filesystems being remounted.
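    For anyone unfamiliar, the classic autofs setup looked roughly like this (share and paths are hypothetical); the share is mounted on first access and unmounted after the idle timeout, which is exactly the remount behaviour those services dislike:

    Code
    # /etc/auto.master
    /srv/remote /etc/auto.remote --timeout=60

    # /etc/auto.remote
    media -fstype=nfs,rw nas.local:/export/media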


    Does the idea of using noauto in remote mounts and then creating a service to restart the remote mounts have any legs?

    People should be able to put that in the remotemount options now and use the mount button to mount it. I didn't test that, though, and it still might have the issues I mentioned above.


  • @ryecoaaron How about polling for "systemctl -t mount --state=failed" and sending the equivalent of the WebUI's mount-button command? Sounds horrible, I know. You can see why I'm only an end user. As the current arrangement is simple and straightforward and my systemd knowledge is limited, I give in.

    • Official post

    How about polling for "systemctl -t mount --state=failed" and sending the equivalent of the WebUI's mount-button command?

    Someone could just set up a cron job that runs this command every minute to do that:


    Code
    systemctl --type=mount --plain --quiet --no-pager --failed | awk '{ print $1 }' | grep -E 'srv-mergerfs|srv-remotemount' | xargs -r systemctl restart
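    Dropped into a system crontab, e.g. a hypothetical /etc/cron.d/restart-failed-mounts, that would be:

    Code
    * * * * * root systemctl --type=mount --plain --quiet --no-pager --failed | awk '{ print $1 }' | grep -E 'srv-mergerfs|srv-remotemount' | xargs -r systemctl restart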


  • Someone could just set up a cron job that runs this command every minute to do that:


    systemctl --type=mount --plain --quiet --no-pager --failed | awk '{ print $1 }' | grep -E 'srv-mergerfs|srv-remotemount' | xargs -r systemctl restart

    That command would work for my setup, but I don't use mergerfs. Would this trimmed version work?

    Code
    systemctl --type=mount --plain --quiet --no-pager --failed | awk '{ print $1 }' | grep srv-remotemount | xargs -r systemctl restart
    • Official post

    That command would work for my setup, but I don't use mergerfs. Would this trimmed version work?

    It shouldn't be needed anymore since remotemount and mergerfs are using monit to automatically restart the pool/mount if not mounted.
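    Conceptually, that monit check is along these lines (an illustrative sketch, not the plugins' actual generated config):

    Code
    check filesystem srv-remotemount with path /srv/remotemount
        if does not exist then exec "/bin/systemctl restart srv-remotemount.mount"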

