Permission issue with share user on docker config folders created by containers

  • Hi,

    I currently have OMV 6.3.12-1 running on a Pine64 (2 GB) with Armbian (kernel Linux 5.15.93-sunxi64).
    I have Docker and Portainer installed through omv-extras.
    I have a share, with a separate user, to the folder where my containers store their persistent data (config), added to the SMB service.
    This folder is on an NTFS-formatted USB drive.

    This all works as expected: I can access and edit config files from my Windows machine via the SMB share using the specified user.


    Now I have OMV 6.6.0-1 on a Raspberry Pi 4 (8 GB) with Raspbian (kernel Linux 6.1.21-v8+).
    I enabled the Docker repo in omv-extras.
    I installed openmediavault-compose from the plugins.
    I added Portainer from the example compose file.
    I also have a share with a separate user to the folder where my containers store their persistent data (config), added to the SMB service.

    This folder is on an EXT4-formatted USB drive.
    Any folder created by a container is inaccessible from my Windows machine via the SMB share using the specified user.


    I tried changing the user/group in the compose file settings, and also the permissions of the directories and files.
    I also tried adding the share user to the docker/adm/root groups.
    I tried adding user: "puid:pgid" of the share user to the compose files.
    So far I have only been able to browse sabnzbd's data after adding PUID and PGID to the environment section of its compose file, but this does not work for other containers, or not consistently.
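
    For reference, this is roughly what I mean; only a sketch, where the IDs (1000:1000) and the host path are placeholders for my actual share user and appdata folder, and not every image honors the PUID/PGID variables:

        services:
          sabnzbd:
            image: lscr.io/linuxserver/sabnzbd:latest
            # linuxserver.io images: the init scripts chown /config and run
            # the app with these IDs
            environment:
              - PUID=1000
              - PGID=1000
            # Alternative for images without PUID/PGID support: force the
            # whole container to run as the share user (don't combine the two)
            # user: "1000:1000"
            volumes:
              - /srv/appdata/sabnzbd:/config   # placeholder host path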


    Is this a bug? Did this only work before because the drive is NTFS and permissions are handled differently? Or is there a setting I am missing? I would like to migrate to the RPi4, but I often access the config data from Windows and I don't want to log in to update permissions every time.


    Regards,
    Mattie

  • KM0201

    Approved the thread.
  • I don't think this is a bug but I have had a similar issue that I was only able to resolve using ACL permissions. The good news is that I fixed it once and have not had the issue again (assuming the folder is not deleted / recreated).


    Permissions for NTFS (Windows) vs. ext4 or BTRFS (Linux/OMV) are certainly different, and it is much better to use a native Linux file system on your OMV system. I moved from ext4 to BTRFS and I am very happy with it. With BTRFS you can combine multiple disks, hold metadata on both/all disks, create snapshots, and spot issues such as bitrot, so it is worth looking into for sure.
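
    For illustration, a minimal sketch of what I mean (device names and paths are placeholders, not my actual layout):

        # Pool two disks: data stored once, metadata mirrored on both
        sudo mkfs.btrfs -d single -m raid1 /dev/sdX /dev/sdY
        # Read-only snapshot of a subvolume, e.g. before changing a container
        sudo btrfs subvolume snapshot -r /srv/pool/appdata /srv/pool/.snapshots/appdata-backup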


    To fix the permissions issue for the docker config data, this is what I did (a rough command-line equivalent follows the steps):


    1. Create a dedicated account for Docker, e.g. appuser. Note its group membership and IDs, as you'll need these in your compose files.

    2. Use the appuser details in your compose files; I think you know how to do this. Test that the containers start, etc.

    3. Make sure both the appuser and the regular user account you want to access the folders have read/write permissions (Shared Folders > Permissions).

    4. Update ACLs so that the regular user can access the folders created by the Docker containers (Shared Folders > ACL). I do this at the top-level shared folder. Make sure the 'users' group has read/write permissions, and select 'Recursive' to apply the permissions to all of the subfolders. Click Save to update.

    5. Log in to the shared folder from Windows and check it is working. :)
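
    If you prefer the command line, this is roughly the equivalent of steps 1 and 4; a sketch only, where the account name, group and path are placeholders for your own setup (the OMV GUI ends up doing much the same):

        # Step 1: dedicated account for the containers, no login shell needed
        sudo useradd --no-create-home --shell /usr/sbin/nologin appuser
        id appuser   # note the uid/gid for your compose files

        # Step 4: give the 'users' group read/write on existing content, recursively
        # ('X' grants execute/traverse on directories only)
        sudo setfacl -R -m g:users:rwX /srv/dev-disk-by-label-data/appdata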


    Hope this is helpful!



    • Official Post

    In my opinion the persistent data folder of the containers should not be shared with a network service. It's a system folder, so I think it's just as bad an idea as sharing rootfs.

    Based on that premise, the only way I access that folder from Windows is via WinSCP logging in as root, so I don't have any permissions issues when I need to modify anything.

    As for the use of NTFS, it could be said that it is supported only for compatibility purposes, to make large data transfers over USB or some similar case. Using a non-native Linux file system permanently on a Linux server is a mistake. It will end up causing problems sooner or later.

    Generally agree, chente, but I find for certain Docker containers, such as Home Assistant, where you need to make lots of changes to the configuration.yml etc., it is very helpful to have SMB access to make these changes (or are these not considered 'persistent' data folders?).


    The other container that forced me to take this approach is an OpenVPN/Transmission container where the completed-files folder could not be accessed, and it was a hassle to use WinSCP or Cyberduck (Mac) just to move a file to the correct folder.


    If I misunderstood the OP, then my apologies :)


    Also keen to understand if there is a way to achieve what I need using a different approach that is less risky...

    • Official Post

    Generally agree, chente, but I find for certain Docker containers, such as Home Assistant, where you need to make lots of changes to the configuration.yml etc., it is very helpful to have SMB access to make these changes (or are these not considered 'persistent' data folders?).

    The Docker YAML is still a system file in my opinion; many of them contain sensitive information. In any case, if you use the openmediavault-compose GUI, changes to the file are made from the OMV GUI. In fact, it doesn't allow you to modify the YAML file directly. So, problem solved.

    You can also modify that file in Portainer from its GUI.

    The other container that forced me to take this approach is an OpenVPN/Transmission container where the completed-files folder could not be accessed, and it was a hassle to use WinSCP or Cyberduck (Mac) just to move a file to the correct folder.

    I don't know that specific case, since I don't use that container. But I suspect that it is only a permissions configuration problem and there should be a more direct resolution path.

    Then we are misunderstanding each other. I do not allow network access to the docker folder (where the compose files live) or the Docker install folder, but I do share my appdata folder, as I need access to the configuration files (I think this must not be 'persistent' data?).


    Also, I have accidentally taken this thread off-topic.

    • Official Post

    but I do share my appdata folder, as I need access to the configuration files (I think this must not be 'persistent' data?)

    The appdata folder contains application volumes moved to the host. Generally these directories are transferred to the host so you can interact with them, since they contain application configuration files. Therefore they are system files; in my opinion they are in the same category as rootfs and should not be shared on the network.

    It is persistent data: you need it to keep your settings intact inside the container, and that is why it is passed to the host.
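
    In compose terms, this is the pattern we are talking about; a sketch, with the service and host path purely illustrative:

        services:
          homeassistant:
            image: ghcr.io/home-assistant/home-assistant:stable
            volumes:
              # Bind mount: the container's /config lives on the host, so the
              # settings survive when the container is recreated
              - /srv/appdata/homeassistant:/config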

    • Official Post

    Then I must accept the risk vs. convenience. 😂

    This dilemma is very common in Linux ^^

    • Official Post

    An extreme case of comfort would be applying 777 permissions to / with -R, though I don't know how much you would break (DON'T DO IT) ^^

  • All good. I’m not changing anything right now.


    Everything is working really well, plus I have a good backup strategy for all this stuff.


    I’m the only user on my home network anyway. Wife and kids have no idea. They are consumers who expect everything (e.g. Plex) to ‘just work’.

    • Official Post

    I’m the only user on my home network anyway.

    Breaking into a Wi-Fi network can be relatively easy. I hope you don't have a hacker as a neighbor ;)

    Wife and kids have no idea. They are consumers who expect everything (e.g. Plex) to ‘just work’.

    It's my case too, except for my eldest son, who is starting to study computer science; a potential mini-hacker.

  • I don't think this is a bug but I have had a similar issue that I was only able to resolve using ACL permissions. The good news is that I fixed it once and have not had the issue again (assuming the folder is not deleted / recreated).

    Yes, I have been using ACLs as well, but every new app has the same issue again.
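
    One thing I still mean to try is default ACLs, which newly created files and folders inherit automatically; a sketch, with the group and path placeholders for my actual setup:

        # -m g:...   fixes the existing content
        # -m d:g:... adds a default entry that new subfolders created by
        #            containers will inherit
        sudo setfacl -R -m g:users:rwX -m d:g:users:rwX /srv/appdata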


    In my opinion the persistent data folder of the containers should not be shared with a network service. It's a system folder, so I think it's just as bad an idea as sharing rootfs.

    Based on that premise, the only way I access that folder from Windows is via WinSCP logging in as root, so I don't have any permissions issues when I need to modify anything.

    I partly agree, but there are some config files that are easier to edit through the share, and also, like jata1 mentioned, some applications have output folders, and it is easier to access them like this as well.


    As for the use of NTFS, it could be said that it is supported only for compatibility purposes, to make large data transfers over USB or some similar case. Using a non-native Linux file system permanently on a Linux server is a mistake. It will end up causing problems sooner or later.

    Agreed; however, the disk on the first server came from a Windows machine, and I do not have enough room to move the data around for a conversion to a Linux file system at the moment.

    Moving my container data from the NTFS HDD to an EXT4 SSD was a step in the right direction for me ;)


    Thanks everyone so far. I will see how this works out; perhaps I can live without sharing the Docker data folders.

    • Official Post

    but there are some config files that are easier to edit through the share

    Actually they are all easier to edit from a shared folder. That falls into the comfort/security category discussed above.

    some applications have output folders, and it is easier to access them like this as well

    If a container has an output folder, I take it to be providing data for the user to use. In that case, that folder stops being a system folder and becomes a user data folder. As such it can (and arguably should) be shared on the network.

    Agreed; however, the disk on the first server came from a Windows machine, and I do not have enough room to move the data around for a conversion to a Linux file system at the moment.

    You should always have a backup. Otherwise something important is failing in your system.

  • You should always have a backup. Otherwise something important is failing in your system

    Important data is backed up, but as always there is a cost/benefit trade-off; not all data is worth backing up, but that does not mean I would like to delete it just to convert the hard drive ;)

    • Official Post

    Important data is backed up, but as always there is a cost/benefit trade-off; not all data is worth backing up

    I back up everything. If you don't mind losing data, then go ahead. Good luck with that. :)
