Posts by Marlin

    Krisbee I'm back to using Remote Mount in OMV. I ditched the Docker volumes for now; permissions are too much of a headache.

    But needless to say, it didn't resolve the issue. I was trying to delay container startup until all the drives were mounted, but I don't think Remote Mount shares count as local mounts.

    I found another post advising the use of _netdev (iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,_netdev), but it seems to apply only to /etc/fstab mounts.
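    For reference, an fstab entry with those options would look something like this (the share path, mount point and credentials are placeholders, not my actual values):

```
# /etc/fstab -- _netdev marks the mount as needing the network before mounting
//192.168.1.17/share  /srv/remotemount/media05  cifs  username=XXX,password=YYY,iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,_netdev  0  0
```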


    Clicking the mount option in the WebUI for a remote mount equates to a systemctl restart of the associated systemd mount unit. One simple way to avoid having to individually select each of your remote mounts is to create a simple bash script with a line restarting each of those systemd mount units. Add that script as a scheduled task which you'll only run on demand. A bit of work to save three clicks, but this way it doesn't matter if you reboot OMV before your desktop.
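    As a sketch, such a script could simply print the restart commands (pipe its output to sh to actually run them). The unit names below are examples only; list your real ones with systemctl list-units --type=mount:

```shell
#!/bin/sh
# Print a "systemctl restart" command for each Remote Mount systemd unit.
# Usage: ./remount.sh | sh
# The unit names below are examples -- substitute your own.
for unit in srv-remotemount-media05.mount srv-remotemount-media06.mount; do
    printf 'systemctl restart %s\n' "$unit"
done
```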

    So, it took me a while, but I finally managed to find out how to get the logs showing why this is failing. Probably not how anyone else would have done it, but I have it.


    So it seems (after a reboot) it's the network that is unreachable.


    Code
    Jan 19 05:21:59 omv mount[959]: mount error(101): Network is unreachable
    Jan 19 05:21:59 omv mount[959]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
    Jan 19 05:21:59 omv systemd[1]: srv-remotemount-media05.mount: Mount process exited, code=exited, status=32/n/a
    Jan 19 05:21:59 omv mount[966]: mount error(101): Network is unreachable
    Jan 19 05:21:59 omv mount[966]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)

    I added this according to link below.

    Code
    [Unit]
    After=local-fs.target

    Delay docker startup until all shared folders is mounted


    Will test with my next reboot.
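    Since the error in the log is "Network is unreachable", I may also try ordering Docker after the network instead. A sketch of such a drop-in (created with systemctl edit docker.service), untested on my side so far:

```
[Unit]
Wants=network-online.target
After=network-online.target remote-fs.target
```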

    Marlin I'm not much of a Docker user, but did you declare the volume something like this?


    Code
    volumes:
      data:
        driver: local
        driver_opts:
          type: cifs
          device: "//192.168.1.17/share"
          o: "username=XXX,password=YYY,uid=1000,gid=100"

    Not that you really want your password in plain text. But don't the uid/gid have to line up on the OMV host, in the container, and on your Mint desktop, along with user XXX?


    You're just mimicking the appropriate kernel mount.cifs command.
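    In other words, those driver_opts translate to a mount.cifs invocation roughly like the one below (same placeholder share and credentials as in the compose example; it is echoed rather than executed, since running it needs root and a live share):

```shell
# Build the option string the "cifs" volume driver would pass to mount.cifs
# (placeholder values from the compose example above).
opts="username=XXX,password=YYY,uid=1000,gid=100"
echo mount -t cifs //192.168.1.17/share /mnt/data -o "$opts"
```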

    I had it configured exactly like that, but it did not seem to work.

    Some feedback on using Docker volumes: lots of permissions issues. My Linux skills are not yet at a level where I can resolve them, despite reading through tons of documentation on how to configure it, and other users report similar issues.

    So, I am back on Remote Mounts, as it works flawlessly (apart from not auto-mounting on restart).

    Thanks Krisbee


    OMV is still my server and used as a server. All my Dockers (Radarr, Sonarr, SABnzbd, Nextcloud, etc.) run on it. I just don't have enough ports to mount all my hard drives. Synology, QNAP and other branded NAS units (10 - 12 bay) and new 16 - 20 TB hard drives are a bit expensive for me. New, bigger hard drives would solve most of my problems. I have a UPS, but it only lasts about 30 minutes; at the moment I am using it for its purpose, to gracefully shut everything down. Loadshedding lasts up to 2 hours. Inverters with batteries are also too expensive.


    I will have a look at fstab and autofs.

    Hi Krisbee and chente


    Thanks for the detailed response.


    The reason I have it this way (reversed) is that my OMV (NAS) box can only accommodate 4 drives (excluding the OS drive). I have a total of 9 drives ranging from 4 TB to 10 TB. My Emby server runs in Docker on the OMV server. I need to supply the drives to the OMV server for Emby to use. I have the permissions pretty much sorted; everything reads and writes as it should. In fact, everything is 100% as I want it, apart from these drives that won't automatically mount via Remote Mount. I read about other solutions (autofs, etc.) but wanted to use only native OMV functionality.

    I could build another Emby server on the Linux Mint desktop, but I prefer (personal choice) to have everything on one Emby server.


    What alternative(s) do you recommend for mounting the drives (autofs, the fstab route, etc.) instead of Remote Mounts?


    I ran the mount | grep CIFS command on my OMV server and it returned no results.

    I also tried the man mount.cifs command, also with no result.

    Is that normal or weird?

    Code
    root@omv:/# mount | grep CIFS
    root@omv:/# 
    Code
    marlin@omv:/$ man "mount.cifs".
    -bash: man: command not found
    marlin@omv:/$ man
    -bash: man: command not found
    marlin@omv:/$ sudo man "mount.cifs".
    sudo: man: command not found
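    A side note on the empty grep, in case it helps: mount prints the filesystem type in lower case ("type cifs"), so a case-sensitive grep for CIFS matches nothing even when shares are mounted. A quick illustration with a fabricated sample line:

```shell
# A made-up example of what mount prints for a mounted CIFS share:
line='//192.168.1.17/share on /srv/remotemount/media05 type cifs (rw,relatime)'
echo "$line" | grep CIFS || echo 'no match (grep is case-sensitive)'
echo "$line" | grep -i cifs   # matches
```

    So mount | grep -i cifs (or mount -t cifs) sidesteps the case issue. As for man: command not found, minimal Debian installs often lack the man-db package; apt install man-db should restore it.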

    Regards,

    Marlin

    Good day,


    I am using Remote Mount to mount shared folders from my Linux Mint Desktop.

    The shares are configured as smb / cifs shares (NFS was too tricky for me at this stage of learning Linux).

    I can successfully mount the shares using Remote Mount and everything works as expected.

    Living in South Africa we have a lot of Loadshedding (http://tinyurl.com/y23u5ehf) which means my server and desktop tend to go down quite often.

    After loadshedding, when everything starts up again, OMV won't auto-mount / remount the shares. The shares are available to be mounted; I confirmed this by starting the Linux Mint desktop first, making sure I can access the shares, and then starting up the OMV server.

    Using the command mount -a does not mount the Remote Mounts. I have to manually select each one and click the mount button.


    Server info:

    Version 6.9.11-4 (Shaitan)

    Processor AMD Turion(tm) II Neo N54L Dual-Core Processor

    Kernel Linux 6.1.0-0.deb11.13-amd64


    Remote Mount config




    options:

    iocharset=utf8,vers=2.1,nofail,file_mode=0777,dir_mode=0777,auto


    Any assistance will be greatly appreciated.


    Regards,

    Marlin

    Hi,


    I'm new to Docker and used it for the first time yesterday. I referenced Techno Dad Life's YouTube videos for instructions.
    I installed Transmission, Jackett, Sonarr, Glances, Headphones and Radarr. Transmission, Jackett and Glances work fine. The others all give DB errors similar to the error below.




    NzbDrone.Core.Datastore.CorruptDatabaseException: Database file: /config/nzbdrone.db is corrupt, restore from backup if available. See: https://github.com/Radarr/Rada…e-disk-image-is-malformed ---> System.Data.SQLite.SQLiteException: disk I/O error


    I looked at the GitHub article, but it doesn't really address the issue I have.


    Thanks in advance,
    Marlin