Remote Mount not re-mounting after power failure / restart

  • Good day,


    I am using Remote Mount to mount shared folders from my Linux Mint Desktop.

    The shares are configured as SMB/CIFS shares (NFS was too tricky for me at this stage of learning Linux).

    I can successfully mount the shares using Remote Mount and everything works as expected.

    Living in South Africa, we have a lot of loadshedding (http://tinyurl.com/y23u5ehf), which means my server and desktop tend to go down quite often.

    After loadshedding, when everything starts up again, OMV won't auto-remount the shares. The shares are available to be mounted; I confirmed this by starting the Linux Mint desktop first, making sure I can access the shares, and then starting up the OMV server.

    Using the command mount -a does not mount the Remote Mounts. I have to manually select each one and click the mount button.


    Server info:

    Version 6.9.11-4 (Shaitan)

    Processor AMD Turion(tm) II Neo N54L Dual-Core Processor

    Kernel Linux 6.1.0-0.deb11.13-amd64


    Remote Mount config:

    options:

    iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,auto


    Any assistance will be greatly appreciated.


    Regards,

    Marlin

  • chente

    Approved the thread.
  • Marlin The "remote mounts" do not have a corresponding entry in /"etc/fstab," so "mount -a" has no effect. AFAIK, there isn't an "auto" option for a SMB/CIFS mount. Check man "mount.cifs". You can see this on OMV using "mount | grep CIFS", for example:


    Code
    root@om7vm:~# mount | grep cifs
    
    //192.168.0.91/md-folder on /srv/remotemount/remote1 type cifs (rw,relatime,vers=3.0,cache=strict,username=chris,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.0.91,file_mode=0755,dir_mode=0755,iocharset=utf8,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
    
    root@om7vm:~#



    The mount options I entered on the WebUI were: "iocharset=utf8,vers=3.0,nofail,auto", but the "auto" has been dropped, or at least has no effect.


    In OMV the Remote Mount plugin makes use of systemd mounts, not entries in "/etc/fstab". Systemd mounts can have associated systemd automounts, but AFAIK the OMV plugin does not use them. Hence the need to manually click the mount button in your case. (The "auto" part in automount does not refer to the boot process: automount units define mount points that are mounted on demand, i.e. only when they are accessed. It doesn't make sense to use this on the OMV server.)
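    For illustration only, a systemd automount unit looks something like this (the plugin does not create these; the names and paths are examples):

    Code
    # srv-remotemount-remote1.automount -- pairs with a srv-remotemount-remote1.mount unit
    [Unit]
    Description=Automount /srv/remotemount/remote1 on first access

    [Automount]
    Where=/srv/remotemount/remote1

    [Install]
    WantedBy=multi-user.target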


    But rather than getting bogged down in the detail, you seem to be doing things in the reverse of what I'd expect. Setting the shares up on your OMV NAS and connecting to them from your Linux desktop is the normal way to do things. The NAS provides services to other devices on the same network, the most basic being serving files to other devices.


    What you are doing not only seems back to front, but you're also going to run into permissions problems using those CIFS remote mounts.


    It's ironic that although NFS is the more natural choice for using network shares from a Linux server on a Linux client, NFS is not an "out of the box" experience on typical Linux desktops. Get this working first and then move on to NFS.


    • Official Post

    But rather than getting bogged down in the detail, you seem to be doing things in the reverse of what I'd expect. Setting the shares up on your OMV NAS and connecting to them from your Linux desktop is the normal way to do things. The NAS provides services to other devices on the same network, the most basic being serving files to other devices.

    Exactly. I would move those drives from the desktop PC to the NAS.

  • Hi Krisbee and chente


    Thanks for the detailed response.


    The reason I have it this way (reversed) is that my OMV (NAS) can only accommodate 4 drives (excluding the OS drive). I have a total of 9 drives ranging from 4 TB to 10 TB. My Emby server runs in Docker on the OMV server, so I need to supply the drives to the OMV server for Emby to use. I have the permissions pretty much sorted; everything reads and writes as it should. In fact, everything is 100% as I want it, apart from these drives that won't automatically mount via Remote Mount. I read about other solutions (autofs, etc.) but wanted to only make use of native OMV functionality.

    I could build another Emby server on the Linux Mint desktop, but I prefer (personal choice) to have everything on one Emby server.


    What alternative(s) do you recommend for mounting the drives (autofs, the fstab route, etc.) instead of Remote Mount?


    I ran the mount | grep CIFS command from my OMV server and it returned no results.

    I also tried the man "mount.cifs" command and got no result either.

    Is that normal or weird?

    Code
    root@omv:/# mount | grep CIFS
    root@omv:/# 
    Code
    marlin@omv:/$ man "mount.cifs".
    -bash: man: command not found
    marlin@omv:/$ man
    -bash: man: command not found
    marlin@omv:/$ sudo man "mount.cifs".
    sudo: man: command not found

    Regards,

    Marlin

  • Marlin Run "man mount.cifs" on your desktop if man pages not installed on OMV. Use lower case for CIFS. So apart from the loadshedding, all is as you want it. In your use case you're effectively turning the OMV system designed as a server into a client of a service running in your desktop. It's not surprising native OMV functionality for this is limited. It is possible to modify the OMV etc/fstab if your careful, maybe some creative scripting.. Search the forum for that. Otherwise, invest in UPS?

  • Thanks Krisbee


    OMV is still my server and used as a server. All my Docker containers (Radarr, Sonarr, SABnzbd, Nextcloud, etc.) run on it. I just don't have enough ports to mount all my hard drives. Synology, QNAP and other branded NAS units (10-12 bay) and new 16-20 TB hard drives are a bit expensive for me. New, bigger hard drives would solve most of my problems. I have a UPS, but it only lasts about 30 minutes; at the moment I am using it for its purpose, to gracefully shut everything down. Loadshedding lasts up to 2 hours. Inverters with batteries are also too expensive.


    I will have a look at fstab and autofs.

  • I am experimenting with Docker volumes. Will see how it goes.

    Volumes — docs.docker.com: learn how to create, manage, and use volumes instead of bind mounts for persisting data generated and used by Docker.

    Some feedback on using Docker volumes: lots of permissions issues. My Linux skills are not yet at a level where I can resolve them, despite reading through tons of documentation on how to configure this; other users report similar issues.

    So, I am back on Remote Mounts as it works flawlessly (apart from not auto mounting on restart).

  • Marlin I'm not much of a docker user, but did you declare the volume something like this?


    Code
    volumes:
      data:
        driver: local
        driver_opts:
          type: cifs
          device: "//192.168.1.17/share"
          o: "username=XXX,password=YYY,uid=1000,gid=100"

    Not that you really want your password in plain text. But don't the uid/gid have to line up across the OMV host, the container, and your Mint desktop, along with user XXX?


    You're just mimicking the appropriate kernel mount.cifs command.

  • So, I am back on Remote Mounts as it works flawlessly (apart from not auto mounting on restart).


    Clicking the mount option in the WebUI for remote mounts equates to a systemctl restart of the associated systemd mount unit. One simple way to avoid having to individually select each of your remote mounts is to create a simple bash script that restarts each of those systemd mount units. Add that script as a scheduled task which you'll only run on demand. A bit of work to save three clicks, but this way it doesn't matter if you reboot OMV before your desktop.
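    Something along these lines would do. The unit names below are only examples; list your actual ones with "systemctl list-units --type=mount | grep remotemount":

    Code
    #!/bin/bash
    # Restart each Remote Mount systemd unit so any failed mounts are retried.
    # Unit names are examples -- substitute your own.
    systemctl restart srv-remotemount-media01.mount
    systemctl restart srv-remotemount-media05.mount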

    Marlin I'm not much of a docker user, but did you declare the volume something like this?


    Code
    volumes:
      data:
        driver: local
        driver_opts:
          type: cifs
          device: "//192.168.1.17/share"
          o: "username=XXX,password=YYY,uid=1000,gid=100"

    Not that you really want your password in plain text. But don't the uid/gid have to line up across the OMV host, the container, and your Mint desktop, along with user XXX?


    You're just mimicking the appropriate kernel mount.cifs command.

    I had it configured exactly like that, but it did not seem to work.

  • Clicking the mount option in the WebUI for remote mounts equates to a systemctl restart of the associated systemd mount unit. One simple way to avoid having to individually select each of your remote mounts is to create a simple bash script that restarts each of those systemd mount units. Add that script as a scheduled task which you'll only run on demand. A bit of work to save three clicks, but this way it doesn't matter if you reboot OMV before your desktop.

    So, it took me a while, but I finally managed to find out how to get the logs for why this is failing. Probably not how anyone else would have done it, but I have them.


    So it seems (after a reboot) it's the network that is unreachable.


    Code
    Jan 19 05:21:59 omv mount[959]: mount error(101): Network is unreachable
    Jan 19 05:21:59 omv mount[959]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
    Jan 19 05:21:59 omv systemd[1]: srv-remotemount-media05.mount: Mount process exited, code=exited, status=32/n/a
    Jan 19 05:21:59 omv mount[966]: mount error(101): Network is unreachable
    Jan 19 05:21:59 omv mount[966]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)

    I added this according to the link below.

    Code
    [Unit]
    After=local-fs.target

    Delay docker startup until all shared folders is mounted
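    For anyone following along: I believe the usual way to apply this is a drop-in override on the docker unit, which is what I did (assuming I read the linked post correctly):

    Code
    # Opens an editor and saves the override to /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl edit docker.service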


    Will test with my next reboot.

  • Marlin You've lost me now. Are you talking about failure of the docker cifs mount, rather than the results of loadshedding? By the log date it seems to be the latter. If so, that fix looks to be addressing a different problem to me.

  • Krisbee this is back to using Remote Mount in OMV. I ditched the Docker volumes for now; permissions are too much of a headache.

    But needless to say, it didn't resolve the issue. I was trying to delay container startup until all the drives had mounted, but I don't think the Remote Mount shares count as local filesystems for local-fs.target.

    I found another post advising the use of _netdev (iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,_netdev), but it seems to only be applicable to /etc/fstab mounts.
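    From what I read, an /etc/fstab entry would look something like this (server address, share name, mount point and credentials path are just placeholders):

    Code
    # Hypothetical fstab line -- adjust the address, share, mount point and credentials file
    //192.168.0.91/media05  /srv/remotemount/media05  cifs  credentials=/root/.smbcredentials,iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,_netdev  0  0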


  • Marlin


    1. The Loadshedding Problem


    So, if one or other, or both, of OMV and Mint go offline, you can end up with the remote mounts in a failed state. The simplest mechanism to make them active again is just to restart the offending systemd mount unit. That's what happens in the background when you click the mount option in the WebUI. See this interchange I had yesterday: RE: RemotePlugin: Has "Service Restart=on-failure" with "StartLimitIntervalSec and StartLimitBurst" ever been suggested/requested?


    If you read the thread I linked to, you'll see there's no simple systemd mechanism to trigger a systemd mount unit restart. The one-liner given by ryecoaaron will attempt to restart any remote mounts that are in a failed state. Using that in a cron job is the way to go.
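    I won't reproduce it verbatim here (see the linked thread for the exact command), but the gist is along these lines:

    Code
    # Sketch of the idea: restart any systemd mount units currently in a failed state.
    systemctl list-units --type=mount --state=failed --plain --no-legend | awk '{print $1}' | xargs -r systemctl restart

    Run something like that from a cron job every few minutes and failed remote mounts will retry on their own once the network is back.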


    2. Remote Mounts in Docker


    A quick test on my part points to the kind of perms problem you may have run into. If your docker volumes are created under "/var/lib/docker/volumes", you can end up with everything having root:root perms. Also, using docker compose down doesn't seem to umount the remote share, which confuses things if you're editing the compose file between docker compose ups and downs.
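    Easy enough to check with something like:

    Code
    # Show ownership of each docker volume's data directory
    ls -ld /var/lib/docker/volumes/*/_data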


    Lastly, reverting to remote mounts on the OMV host, which are then bound to docker containers, is regarded as better practice wrt security, as I was reminded looking at this thread: RE: Remote mounts, reboots, and Docker...








  • Hi Krisbee


    Just want to say a big thank you. That seems to have resolved my issue. The drives now mount in time, every time, after a power outage.


    I'm not going to fiddle with it any more, as it is working.


    Regards,

    Marlin
