What would be the best practice to give access to only a defined samba share remotely?

  • Hi, I was wondering: what would be the best/easiest/most secure practice to share a defined samba share over the internet with external collaborators?

    I'm using OMV primarily as a NAS on a LAN. Ultimately I will need to give collaborators outside of my LAN access to files on a share, but maybe the way I'm thinking about it is not the best way, so I would like to hear some suggestions.

    My first thought was to create a tailnet with my server in it and give the external users access to the tailnet machine (OMV), so I could avoid setting up a bunch of services like nginx or similar. But then I realised I cannot find a way to give access to just that samba folder as a resource; I thought Tailscale ACLs were more granular, or I'm missing how to do it properly.

    From what I understood, using Tailscale on the OMV server will expose all the services on the openmediavault server, if I'm not mistaken. Instead I would like to keep SSH and all the other stuff private and give WAN access only to that particular share.
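    For what it's worth, Tailscale ACLs can be narrowed down to a single port on a single host, so the "exposes everything" worry is avoidable. A minimal sketch of a tailnet policy file (HuJSON), assuming a hypothetical collaborator account and a placeholder tailnet IP for the OMV node — this limits them to the SMB port, while samba's own user permissions still control which shares they can open:

    ```json
    {
      "acls": [
        {
          // Hypothetical collaborator; replace with the real tailnet user.
          // A custom policy is deny-by-default: anything not listed is blocked.
          "action": "accept",
          "src": ["collaborator@example.com"],
          // Placeholder tailnet IP of the OMV node; port 445 is SMB.
          "dst": ["100.64.0.10:445"]
        }
      ]
    }
    ```

    This only restricts network reachability, so you would still be relying on samba authentication to keep the collaborator out of the other shares on the same host.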


    How can I achieve this in the cleanest, easiest and most secure way? Any suggestions?

    I didn't go down the simple "wireguard" route because, as I said, that would expose my whole LAN to the external person connecting through the WireGuard client.

    Instead I loved the idea of tailnet control over the connected machines, as I want to be able to easily remove the external collaborators' access when the necessity arises.

    Main focus:

    • Let the external users access only that particular samba share
    • Be an easy setup, so as to minimise any security risk / attack vector
    • Grant or revoke access to the resource through a web GUI, and be able to cut a user off when access is no longer required


    Any help/idea is much appreciated, as I may be overthinking it.

    • Official Post

    share a defined samba share over the internet for external collaborators

    Nextcloud in docker. https://wiki.omv-extras.org/doku.php?id=omv7:docker_in_omv

  • chente Hmmm, doesn't this also need to be coupled with nginx and Let's Encrypt to make it more secure? Also, how would Nextcloud help me share just some part of the samba share and not the whole thing? I don't think Nextcloud has user permissions etc., does it?

    Also, it lacks the "protection" of sharing files over an encrypted channel like a VPN (Tailscale, WireGuard) etc., or am I mistaken?

    Kind regards, and thanks for the reply :)

    • Official Post

    doesn't this need to be also coupled with nginx and let's encrypt? to make it more secure?

    Yes, in fact it is essential. At the end of that document you have the way to do it.

    how nextcloud would help me to have just some part of the samba shared and not the whole samba? I don't think nextcloud has user permissions etc. does it?

    Nextcloud allows you to share the folder you want with the user you want and the permissions you want. At any time you want, you can remove access from the user you want. Nextcloud is designed to do precisely all that.

    Also it lacks the "protection" of sharing encrypted files like through a vpn (tailascale,wireguard) etc. or am I mistaken?

    The fundamental difference between Nextcloud and Wireguard is that if you give a person access through Wireguard you are giving them access to your local network but with Nextcloud you only give them access to the folder you want. Additionally, to give someone access through Wireguard, it is necessary to install Wireguard on the client and configure access. The advantage of Nextcloud is that access is done through any browser, with a username and password. You can set additional security measures for the users you configure, such as two-step access, etc. Do a little research into what Nextcloud does. https://nextcloud.com/

    I use Wireguard to access my local network remotely. I use Nextcloud to access my shares and to share them with other people.

  • chente I think you convinced me about the pros and cons. I will definitely look into this; the setup seems a much easier way to manage external and new users.


    Especially because, as an admin of the server, I want to stay out of the business of creating new users and explaining to them how to use VPNs/WireGuard/Tailscale and so forth, and let the less techy "admins" of the servers do these things on their own, without risking them compromising everything.


    It sounds good :)

    Also, I was afraid, security-wise, about giving access in the open, especially concerning weak passwords chosen by users, but 2FA would solve this issue.


    Thank you for the heads-up. I tried back then to set up nginx for this kind of stuff, but the Let's Encrypt/Cloudflare/domain name setup gave me a headache; hopefully it's becoming more straightforward.

    Thank you a lot

  • chente I was following the documentation you gave me, especially the Nextcloud part, but I'm a bit stuck as the nginx + nextcloud-aio containers are not working properly.

    I'm looking into it to understand what I'm doing wrong.
    I have a couple of concerns, though, since this is not really explained well in the guide; could you give me any hints?

    1. I tried to launch the nginx-proxy-manager container with an ad-hoc nginx user as PUID and PGID, so as not to run it with root privileges. Could that be a problem? Is it better to leave it alone and run it as root?


    2. In the nginx configuration itself, to make it work with nextcloud-aio, it says to configure it this way:

    Code
    Adjust localhost or 127.0.0.1 to point to the Nextcloud server IP or domain depending on where the reverse proxy is running. See the following options.
    
    On the same server in a Docker container
    For this setup, you can use as target host.docker.internal:$APACHE_PORT instead of localhost:$APACHE_PORT. ⚠️ Important: In order to make this work on Docker for Linux, you need to add --add-host=host.docker.internal:host-gateway to the docker run command of your reverse proxy container or extra_hosts: ["host.docker.internal:host-gateway"] in docker compose (it works on Docker Desktop by default).
    Another option and actually the recommended way in this case is to use --network host option (or network_mode: host for docker-compose) as setting for the reverse proxy container to connect it to the host network. If you are using a firewall on the server, you need to open ports 80 and 443 for the reverse proxy manually. By doing so, the default sample configurations that point at localhost:$APACHE_PORT should work without having to modify them.

    I think this is a big roadblock for me, as I tried to put in what it says, but apparently it gives me an error unless I just put localhost. But then, when I tried to connect to the server over the internet, it gave me the 502 error page.

    So I'm definitely doing something wrong.

    3. Does the router need to have ports 80 and 443 open for TCP only, or UDP as well?

    4. For the Nextcloud container, on the other hand, I don't see options to make it run unprivileged. Does it need to be run as root as well? I don't see any PUID/PGID variables for that container.

    • Official Post

    Everything you need to know is in the Nextcloud AIO documentation. The guide in the omv-extras wiki is not intended to be a Nextcloud guide but rather a guide on using Docker in OMV, which is why you have links to the Nextcloud AIO documentation that you should consult. There is no point in duplicating all that information.


    Using PUID and PGID with NPM should not be a problem. In my case I configured it that way a long time ago and it works without problems. Although reading the Nextcloud AIO documentation it is possible that that has changed. I would follow the instructions specified here. https://github.com/nextcloud/a…xy.md#nginx-proxy-manager
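    To illustrate the `extra_hosts` option quoted from the AIO reverse proxy docs, here is a minimal compose sketch for the NPM container (the image tag, volume paths and the default AIO Apache port 11000 are assumptions; check the AIO documentation for your setup):

    ```yaml
    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - "80:80"    # HTTP, also used for Let's Encrypt challenges
          - "443:443"  # HTTPS
          - "81:81"    # NPM admin GUI
        # On Docker for Linux this makes host.docker.internal resolve to
        # the host, where the AIO Apache container is listening:
        extra_hosts:
          - "host.docker.internal:host-gateway"
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
    ```

    With this in place, the proxy host in the NPM GUI would point at `host.docker.internal` on `$APACHE_PORT` (11000 by default) instead of `localhost`, which is the case where a plain `localhost` target fails.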


    The ports that you must open are explained here. https://github.com/nextcloud/a…en-in-your-firewallrouter


    Nextcloud AIO is a container that configures other containers just like Portainer does, so it runs as root. I haven't looked for it but I'm sure it's also explained in the Nextcloud AIO documentation.

  • Thank you very much chente. I guess I was doing it the wrong way by just following the OMV guide; I'll take a deeper look at the Nextcloud documentation itself.

    Edit.

    I managed to make it work; indeed it was a misconfiguration on the Nextcloud side :) Once I went through the AIO documentation again, everything was much clearer. Thank you chente :)

  • Wek

    Added the Label resolved
  • chente OK, I've hit a roadblock with this setup. I have tried so many things so far, ending up with Cloudflare tunnels or a reverse proxy through nginx and Cloudflare DNS, but both of these scenarios have a HUGE problem: Cloudflare's 100 MB upload limit, especially through the Nextcloud desktop app. I think the web interface has the same issue.

    So how did you solve it? I think just using port forwarding and exposing nginx-proxy's port 443 to the public with a real IP is not that great from a security standpoint.

    But the 100 MB limitation of the Cloudflare tunnel or DNS proxy makes Nextcloud useless... is there any other way?

    • Official Post

    I use an Nginx Proxy Manager container and access the server through a domain on port 443. I like the simplicity of the NPM GUI and it supports Nextcloud AIO. https://nginxproxymanager.com/

    I rely on Nextcloud's security measures (which are many and configurable) to keep unauthorized access at bay.

    In reality I only have two containers open to the internet, Nextcloud and Jellyfin, the rest of the accesses are always through Wireguard.

    On top of that I use a router that also helps with its own security measures.

    NPM can add another layer of security using a second password (I don't use it).

  • Yep, I was using NPM as well, then I tried the Cloudflare tunnel, and it sucks, so I'm back to the idea of using NPM and opening ports. I'm not that fond of putting it out publicly, though, since I saw Nextcloud's GitHub issues and the team seems really slow to patch bugs; I don't really know if it is worth it.

    Is there a way to mask the IP, instead of putting a service on the open internet just like that, with an open port on the router?
    I was looking at people using a VPS etc.; it seems a mess, though, and accessing it through Tailscale or something like that is not really doable for the average user.

    So I guess the only option here is to put the real IP on the internet, unless I'm missing some other way to do it properly.

    • Official Post

    I don't know any other way to mask the IP besides the ones you already mentioned, but I don't like them either.

    If only you or your family are going to access this service, you have the option of accessing through a VPN; in my opinion it is the safest way to access server resources remotely. Once everything is configured, simply close ports 80 and 443 and configure WireGuard for remote access. You will have to configure a WireGuard client on each device, of course. You will only need to open ports 443 and 80 on your router once every three months to renew the Let's Encrypt certificate. If you want third parties to have access, that option is not viable.
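    As a side note on the "exposes the whole LAN" concern from earlier in the thread: a WireGuard client can be limited to a single host by narrowing `AllowedIPs`. A sketch of a client config, with placeholder keys, endpoint and addresses:

    ```ini
    # Client-side wg0.conf -- all keys and addresses are placeholders
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/32

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # Only the OMV host is routed through the tunnel; the rest of
    # the LAN stays unreachable for this client:
    AllowedIPs = 192.168.1.10/32
    PersistentKeepalive = 25
    ```

    This limits routing, not authorization, so the samba/SSH services on that host still need their own access control; and since the client controls its own config, it is a convenience boundary rather than a server-enforced one.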

  • Yep, unfortunately I have to give access to external collaborators, and they are not techy enough to set up a VPN on their own; this would make it painful :D Moreover, I don't want most of them to access the whole network; that's the only reason I went down the Nextcloud route, otherwise I would already have done it through a VPN.

    Well, thanks anyway; at least now I know all my options. At this point I have to consider whether it's better to use something proprietary like Google Drive or OneDrive, share just that stuff in the cloud, and bypass the issue altogether :)

    I will think through the risks and benefits of a proprietary cloud vs. exposing my LAN to the internet.

    Thank you very much you have been really helpful
