Posts by Sorion

    Hello guys,


I am a lazy person, and I have to say that doing things via the GUI can be a little tedious at times, because I need to do them one at a time, wait, and so on

(examples: changing the VPN country means I have to manually down the VPN container and all containers that route through it one by one, then up them again one by one;

same for pulling new versions: every single container needs to be pulled manually and then upped manually - that's a lot of steps)


So at first compose was simply easier, but by now I've understood that I can also just install new containers via SSH and the command line. So technically I could write myself a short python script (or even just a shell one) to down all my containers, pull new versions and up them again in one step, and have that run as a regular scheduled task so I don't even have to think about it myself anymore.
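Something like this shell sketch is what I have in mind (assuming every compose project lives in its own subfolder of /SSD/compose/ - that path is just my layout):

Code
#!/bin/bash
# for every compose project folder: pull new images, then
# "up" again - up recreates any container whose image changed
for dir in /SSD/compose/*/; do
    (cd "$dir" && docker compose pull && docker compose up -d)
done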


Here comes the question: will the GUI be able to follow that process, or will it lose track? As in, will it notice that containers have been downed by bypassing the GUI and later been upped again?

I'm honestly too scared to just try it, because I don't wanna break things out of a lack of understanding.


    I'm also very open to an alternative solution to my lazy problem if there is any that's already implemented that I just haven't seen yet.


    Greetings and thanks!

    Issue solved.


Tried putting UMASK=002 into the yml another time, and this time it worked. I have no idea what caused issues when I tried it before, but since my default UMASK is set to 022, it lines up with my issue, so that apparently was it.
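For reference, this is the kind of environment entry I mean (shortened to the relevant lines):

Code
    environment:
      - PUID=1002
      - PGID=100
      - UMASK=002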

Gonna look into changing the system default to 002, since I don't see any logic in having it at 022 for my use case, but I'll do that another time.
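For anyone else confused by umask: as far as I understand it, it masks out permission bits on newly created files and folders, so for directories 022 gives 755 and 002 gives 775 (group-writable). A quick way to see it in a shell:

Code
# subshells so the umask change doesn't stick to the current shell
$ (umask 022; mkdir demo022; ls -ld demo022)    # drwxr-xr-x  (755)
$ (umask 002; mkdir demo002; ls -ld demo002)    # drwxrwxr-x  (775)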


Anyway, marking the thread as solved. Thanks for the input and enjoy your Sunday :)

    Okay this is confusing af to me.

I decided to just see what happens when I change the folder structure, only handing over new subdirectories in the downloads category.

    So I updated the volumes of qbit to look like this:


    Code
volumes:
  - /SSD/appdata/qbittorrent/config:/config
  - /MEDIA/downloads/unfinished:/unfinished
  - /MEDIA/downloads/finished:/finished

The /MEDIA/downloads/ directory was pre-created and had the correct permissions set:

    Code
    drwxrwsr-x 1 root users   0 May 14 08:25 downloads


So after upping the container I got this:


    Code
    dh_user@dh-sv:/$ ls -al /MEDIA/downloads/
    total 0
    drwxrwsr-x 1 root users 36 May 14 08:28 .
    drwxrwsr-x 1 root users 60 May 14 08:25 ..
    drwxr-sr-x 1 root root   0 May 14 08:28 finished
    drwxr-sr-x 1 root root   0 May 14 08:28 unfinished


And now I am totally lost as to how the frick the owner and group are set to root when I have defined:


    Code
          - PUID=1002
          - PGID=100


    With appuser being:

    Code
    dh_user@dh-sv:/$ id appuser
    uid=1002(appuser) gid=100(users) groups=100(users),992(docker)

    I think gderf uses bittorrent. Maybe he can help you.

    Well said :)

It's not an exclusive qbittorrent issue at this point - the very same problem occurs when using deluge too.

So it shouldn't be a container issue. Both containers doing that seems weird to me.

    Me setting things up wrong seems way more likely.


But for simplicity's sake, let me just post my yml files for people to look through:


qbittorrent:



    And deluge:

    So, sadly this is still relevant.
I also tried using Deluge instead, but I still have the same issue. Every folder that gets created within /MEDIA/downloads ends up as drwxr-sr-x (i.e. 2755 - no group write).


I removed the downloads folder completely and had it recreated by the containers, to no avail.


I found out about umask in the qbittorrent documentation and (while honestly not exactly understanding what it does) tried adding UMASK=000 and UMASK=022 to the yml file. That seemed to put the proper permissions on the folder according to "ls -al", but weirdly I still wasn't able to move the folder, because I'd then get an error prompt saying the file was in use - no idea by what, though.


At this point I am at a loss for ideas. Of course I could just access my SMB folders with the appdata user and everything would be fine, but I kind of refuse to let the system dictate how I structure my setup :s

Sorry, I don't use that container. You can do a search on the forum; I think I've read about that problem before. You will probably find solutions for that bittorrent behavior.

    Since I have no good reason to use qbittorrent other than just .. well ...

Mind sharing your way? I'm always curious for input, and it's probably very clear that I just started setting up :D


    All in all, thanks for all the help so far from you and everyone :D


aaaand you edited in an answer to the problem .. you guys are great :s thanks

    Creating the folder ahead of time shouldn't be a problem, as long as you create that folder with access permissions for the user defined in the container.

One way to ensure that is to create the folder with read and write permissions for the users group, given that the container's user is in the users group. There are other ways to do it.
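A minimal sketch of that, using the user and path from this thread (adjust to taste):

Code
sudo mkdir -p /MEDIA/downloads
sudo chown appuser:users /MEDIA/downloads
# 2775 = rwxrwsr-x: group-writable; setgid keeps group "users" on new subfolders
sudo chmod 2775 /MEDIA/downloads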

    Okay. That I can do easily.


But I just noticed that my original issue is not fixed.


As is the nature of a downloads folder, the container will frequently create subfolders in there.

And as I just noticed, those, just like the top folder, are read-only for my SMB user.

I can (and have) changed the permissions of the /MEDIA/downloads parent folder, but it seems like a big hassle to do this manually for every subfolder that gets created there.


    So how can I shut that behaviour down?

    Not knowing which container did this, I assume that container created that folder, and by default it will do so with read-only permissions.

    If you want to make sure that doesn't happen you can create the folder before launching the container.

To fix it, simply change the permissions of that folder. The container will continue to function without issue.
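For example, something along these lines (path taken from this thread; the capital X sets execute only on directories and already-executable files):

Code
sudo chmod -R g+rwX /MEDIA/downloads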

Was a bittorrent container. I assumed that wasn't relevant.


If I create the folders beforehand, they will be owned by my SSH account - could that pose a problem for any containers? I've tried avoiding this since I'm uncertain about the effects.

    Hey,


    this is probably a very simple issue but I don't get how it came to be.

I set up a container that contained the following volume:


    - /MEDIA/downloads:/downloads


That worked fine, the volume got created and the container has no issue writing to the folder.

The issue is that my SMB account now has only read access to the downloads folder and its subdirectories.

    This is the output for

ls -al /MEDIA/


    Code
    $ ls -al /MEDIA/
    total 60
    drwxrwsr-x  1 root    users   118 May 13 11:50 .
    drwxr-xr-x  1 root    root     18 May  6 10:14 ..
    drwxr-sr-x  1 appuser users   104 May 13 11:48 downloads
    drwxrwsr-x  1 root    users    36 May  9 22:07 entertainment
    $

As you can see, the owner of downloads is appuser as intended, but write access for the users group seems to be missing. I can figure out how to re-add that permission myself, but I don't get why it was set up that way to begin with.


    What did I screw up?


    Greetings.

    Might I pick your mind a little more on this one?


It's surely possible that I just can't find it right now.

I want to do a monthly backup. That works fine. My rsync command line looks like this now:


    Code
    rsync -av /SSD/compose/ /srv/remotemount/backup_compose/

What I'd love to do though is add a step beforehand that creates a folder named after the date of execution, and then have rsync write into that folder instead of just dumping everything in the top layer - so the folder name would be different every time the scheduled task executes, effectively creating an archive of backups that I would then manually delete after some time.


    So I'd want my structure on my backup pc to look like this:


/srv/remotemount/backup_compose/
    /2023-05-11/
        /all the folders copied over from /SSD/compose/


Would that be possible in the scheduled task's command?
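Something like this is what I'm picturing for the command, if that's even valid there (the date format just matches the structure above):

Code
DEST=/srv/remotemount/backup_compose/$(date +%F)
mkdir -p "$DEST" && rsync -av /SSD/compose/ "$DEST/"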

    Just to conclude.


This worked flawlessly on the first attempt.

Thank you for pointing me in that direction; with that, the matter is resolved :D

    Thank you. I will look into all of that.


But in case anyone ever stumbles over this thread:

The executing user of the OMV GUI seems to simply be root, since adding the Windows host to root's known_hosts list changed the error message I'm getting in rsync :D
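For anyone finding this later, one way to add that entry without the interactive prompt (hostname is a placeholder for the target machine):

Code
# append the target's host key to root's known_hosts (-H stores the name hashed)
sudo ssh-keyscan -H <hostname> | sudo tee -a /root/.ssh/known_hosts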

    This here is what I am talking about:



What I did here is open a cmd on my Windows machine and SSH into the server from my Windows user, authenticating as dh_user on my OMV server.

Once there, I established another SSH connection back towards my Windows PC.

So the SSH connection is trying to do:


    dh_user@server -> <removed for privacy>@windows_pc


but as you can see, dh_user gets prompted on the server to add the Windows PC as a trusted host first.


    And as you can see, once I answer that question with 'no' I get the same error message that I get when trying to execute the rsync job.


    That same thing must happen to the rsync too - must it not?


I am not, and never have been, trying to establish the rsync using an account that is native to my server. I get it - to establish SSH I need login data that the target host recognizes. But the connecting side needs to okay the connection first, and I think that is not happening, which is why my rsync is failing.


I skimmed through your guide; it doesn't seem to touch on what's bothering me.

But I will read it thoroughly later when I've got more time. Thanks.

I haven't read the whole thread; this is just based on the last question.

The user that runs a remote rsync task is the user that has permission to read the remote share - that is, the one created on the remote server, authenticated with their password.

    That can't be correct. Or am I misunderstanding?


From my understanding, what the rsync as I set it up does is set up an SSH connection.

But that SSH connection has to be set up by some user on my server's OS - root, admin, you name it.

From what I understand, my rsync is failing at establishing the SSH connection because the user trying to establish it doesn't have the target host listed under known_hosts.


I'm having a hard time putting into words what I think is happening.


edit: as in, if I type "ssh <user>@<target server>" in the console - I am logged in as some user before I do that, am I not? And for that user to be able to establish an SSH connection, the <target server> has to be listed under known_hosts - which brings me back to: how do I add it :D

So, I've made some progress, I assume.


I needed the OpenSSH Authentication Agent and OpenSSH Server active on the Windows machine - makes sense so far.


Now I'm just running into the next issue, which is:


    Code
    Host key verification failed.
    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: unexplained error (code 255) at io.c(228) [sender=3.2.3]
    ERROR: The synchronisation failed.


From what I could find, this means there is an incorrect host key associated with the target in the SSH client's known_hosts file, which is stored in the executing user's home directory and which I'd need to remove using:


    Code
    ssh-keygen -R <hostname>


So here comes my problem: what is the executing user for the rsync task? And how would I modify that user's known_hosts file specifically, since the command in the aforementioned form would just touch the file of the user I'm logged in as (which obviously doesn't even have one, since I don't use that user to SSH anywhere)?


Maybe someone has an idea on that. I'll keep looking and will update if I solve it myself.


edit: I am now thinking that the issue is less a wrong entry in known_hosts and more simply that an entry needs to be added - which, when using the console, you get prompted for and can confirm. But I assume the GUI will not do that on its own, so I need to add the key manually. The question remains similar though: which user is responsible for executing the rsync task, and how do I add the key manually (or, if necessary, delete an incorrect one) for that specific user?
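edit2: for reference, both ssh-keygen and ssh-keyscan can be pointed at a specific known_hosts file, so once the executing user is known it should look something like this (hostname and path are placeholders):

Code
# remove a stale entry from that specific user's file
ssh-keygen -R <hostname> -f /home/<user>/.ssh/known_hosts
# and/or add the current host key
ssh-keyscan -H <hostname> >> /home/<user>/.ssh/known_hosts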

    As I reread this, a possible explanation comes to mind. It is possible that the old yaml file ended up badly formatted during the first edit. Copying and pasting this file again might have resolved it.

    If no one contributes, I'll just keep this idea in my head.

Unlikely. I completely deleted and remade the file a few times in between. After all, I had to reinstall Docker, which required me to throw out compose too.

Pretty sure I copied the one I linked here back in, but I did also once go to the wiki, copy everything from there, and re-enter the inputs.

    Hey everyone,


I wanted to use rsync to back up a few config files from the OMV server to my local Win 10 machine.


Setting up the remote push worked fine so far, except that when I run it, it says it copies files, but the sent and received byte counts don't match, and on the receiving side no actual files are created:



It works fine as a local rsync but falls flat on remote.

The account verification is definitely correct,

and the folder backup_compose is available on my network, and the user I'm using to access it has write permissions - I verified that with a different PC already.


    Any ideas?



Edit: I did also try a different source folder into a different destination folder. I haven't tried a different destination PC yet, though.

Edit2: Tried it with a different destination PC - also Win 10 - same outcome:



    Edit:

So I found where those rsyncs were executed. A bunch of folders got created in /,

all named after the "[blub@]192.168.178.28:test" part - so the example given for the destination folder in the OMV GUI is not to be taken literally, and the [] around user@ can't be part of it. Got that by now, but I guess there's no reason not to point it out here.
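In other words, the brackets just mark the user part as optional; written out, the destination has to look like this:

Code
blub@192.168.178.28:test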

    I did as asked. And good news first: It worked.


The check ran as it should. I assumed you had just forgotten to also ask for "up"ing the file, since without that I couldn't possibly connect. Weirdly, on the first attempt at "up"ing I got an error. I tried it again, and this time it properly pulled all the jellyfin data and started the container as intended.


Access via a different PC worked fine, all media is there - I am currently in the process of redoing the rest.


So in short - thanks to everyone involved, I'd say the matter is resolved.


But if you don't mind going into detail, could you explain why this worked now? What is the ":ro" ending doing? Why did we have to add the published server IP when that apparently isn't usually necessary?

Why the split into all 3 categories instead of just referring to the parent folder?



And lastly, a semi-related question:

I assume going forward I should set up all docker containers with compose - if I see a use in adding portainer as a GUI to the mix in the future, would you advise me not to do that for compatibility or security(?) concerns, or should that be of no relevance?


Again, thanks a lot everyone!


Alright - I'm back.

I read everything I missed so far and checked what I could; nothing came up negative.


Here are all the inputs again that should verify that, even though I'm not sure about the permissions. I think it's okay?



sorion is just the user account for SMB, but that shouldn't matter, if I'm not being stupid?


I have no idea how to check for port usage though.
Port forwarding is not used in my network as of now, so to my understanding the port must be in use by the server itself. I wouldn't know of anything using it, but I don't know how to verify that.
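(From a quick search, something like this should show what's listening on a given port - assuming ss is available; not sure I'm reading the output right yet:)

Code
# list listening TCP/UDP sockets with their owning process, filter by port
sudo ss -tulpn | grep <port>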

The only thing is that I had a portainer installation active when all this started, which had a functioning jellyfin container using port 8888 - but as of now, to my knowledge, everything related to that has been removed from the server.


I assume you're making a joke about the CPU fan? If not, I'm not following.


Unrelated to what you suggested, I did also try redoing the symlinks and leaving them out, i.e. using the mount paths directly in the compose - that didn't work either.



    Edit:

Since that was mentioned as well, I did check the timezone too:


    Code
    $ timedatectl
                   Local time: Tue 2023-05-09 21:31:34 CEST
               Universal time: Tue 2023-05-09 19:31:34 UTC
                     RTC time: Tue 2023-05-09 19:31:34
                    Time zone: Europe/Berlin (CEST, +0200)
    System clock synchronized: yes
                  NTP service: active
              RTC in local TZ: no


Even though it wouldn't make sense, since the symlink is working, I went and reconfirmed that both directories are reachable under the non-shortened paths that I took from the symlink GUI page:


    Code
    $ cd /srv/dev-disk-by-uuid-4692ac63-fd6e-4884-b875-118de220d9a8/media/entertainment
    $ pwd
    /srv/dev-disk-by-uuid-4692ac63-fd6e-4884-b875-118de220d9a8/media/entertainment
    
    
    $ cd /srv/dev-disk-by-uuid-f42bb887-736b-4ae1-87b4-3a71cff64bec
    $ pwd
    /srv/dev-disk-by-uuid-f42bb887-736b-4ae1-87b4-3a71cff64bec
    $ ls
    appdata  aquota.group  aquota.user  backup  backup_os  compose  docker  lost+found