Posts by tophee

    I am running rclone on my OMV box to serve cloud storage via SFTP, and for months I have been struggling with rclone repeatedly being killed; I have been trying to troubleshoot this over on the rclone forum. We figured it had to do with memory usage, but since I have 24 GB of RAM in my OMV server, there isn't really any shortage of memory. So why does it get killed?


    It looks like we've finally identified the reason:


    rclone is bumping into the max memory size. Even root has this limitation of around 2.9 GB.


    So I'm wondering: why is OMV limiting memory usage in this way? And, more importantly: how do I change it?

    Here is what I found in /var/log/syslog:



    Code
    Feb 28 18:49:07 server systemd[1275]: rclone@pcloud.service: start operation timed out. Terminating.
    Feb 28 18:49:07 server systemd[1275]: rclone@pcloud.service: Failed with result 'timeout'.
    Feb 28 18:49:07 server systemd[1275]: Failed to start rclone: make sure pcloud is served via sftp.
    Feb 28 18:49:09 server monit[1187]: 'server' mem usage of 94.6% matches resource limit [mem usage > 90.0%]
    Feb 28 18:49:10 server smbd[6312]: [2023/02/28 18:49:10.009073,  2] ../../source3/smbd/dosmode.c:137(unix_mode)
    Feb 28 18:49:10 server smbd[6312]:   unix_mode(.) inherit mode 42770
    Feb 28 18:49:13 server systemd[1275]: rclone@pcloud.service: Scheduled restart job, restart counter is at 578.
    Feb 28 18:49:13 server systemd[1275]: Stopped rclone: make sure pcloud is served via sftp.
    Feb 28 18:49:13 server systemd[1275]: Starting rclone: make sure pcloud is served via sftp...
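
    Since the question is how to change that limit: a way to check which limit the process is actually hitting, and to raise it if the service turns out to be a systemd unit, is sketched below. The unit name rclone@pcloud.service is taken from the syslog above, the commands assume a single running rclone process, and the values in the drop-in are only examples.

    Code
    # Inspect the limits the running rclone process actually has
    cat /proc/$(pgrep -f 'rclone serve sftp')/limits

    # The log lines come from a per-user systemd instance (systemd[1275]), so if
    # the restarts are driven by a user unit, a drop-in can raise the limit and
    # the start timeout (example values, not recommendations)
    systemctl --user edit rclone@pcloud.service
    #   [Service]
    #   LimitAS=infinity
    #   TimeoutStartSec=300
    systemctl --user restart rclone@pcloud.service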

    Another thing that puzzles me is that the command shown by ps aux is not identical to the one I entered in the Scheduled Tasks UI, which is:

    export GOGC=50 && rclone serve sftp pcloud:Backup/ --addr :2022 --user ******* --pass *********** --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode writes --rc &


    Notably, the password section is missing. Maybe ps just truncates long commands, I don't know, but even so, a whole new option has been added: --config=/zfs/NAS/config/homedirs/christoph/.config/rclone/rclone.conf, which is weird. I have no idea where that comes from.
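
    If it is just ps shortening the line, the untruncated command line can be read straight from /proc (this assumes a single matching rclone process):

    Code
    # Print the full command line of the running rclone process; /proc/<pid>/cmdline
    # is NUL-separated, so translate the separators into spaces
    tr '\0' ' ' < /proc/$(pgrep -f 'rclone serve sftp')/cmdline; echo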

    I have not created any cron job manually, but I have manually started the scheduled task shown in the OP. Does manually running a scheduled task have any particular side effects (such as the task becoming unstoppable)?


    I tried killing cron/anacron but rclone keeps coming back:


    Code
    $ sudo killall anacron
    anacron: no process found
    $ sudo killall cron
    $ ps aux | grep rclone
    christo+  186545  0.2  0.1 761996 33252 ?        Ssl  17:55   0:00 /usr/bin/rclone serve sftp pcloud:Backup/ --config=/zfs/NAS/config/homedirs/christoph/.config/rclone/rclone.conf --addr :2022 --vfs-cache-mode minimal --log-level INFO --log-file /zfs/NAS/config/rclone/rclone-pcloud.log --user christoph
    $ killall rclone
    $ ps aux | grep rclone
    christo+  187792  1.0  0.0 759820 21756 ?        Dsl  17:56   0:00 /usr/bin/rclone serve sftp pcloud:Backup/ --config=/zfs/NAS/config/homedirs/christoph/.config/rclone/rclone.conf --addr :2022 --vfs-cache-mode minimal --log-level INFO --log-file /zfs/NAS/config/rclone/rclone-pcloud.log --user christoph
    christo+  187815  0.0  0.0   6216   624 pts/2    S+   17:56   0:00 grep rclone


    Given that there is no anacron process, I assume it can't be responsible for restarting rclone, right? And since killing cron didn't stop the madness, cron isn't responsible either. So what is?
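
    One way to narrow this down, assuming the syslog above is pointing in the right direction: those lines are written by a per-user systemd instance (systemd[1275]), so the restarts may come from a user unit rather than from cron. The unit name rclone@pcloud.service is taken from that log; the commands would need to be run as the user owning the process.

    Code
    # Show the ancestry of the respawned rclone process (what spawned it?)
    pstree -ps $(pgrep -f 'rclone serve sftp')

    # Look for a matching unit in the user's systemd instance and stop it
    systemctl --user list-units 'rclone*'
    systemctl --user stop rclone@pcloud.service
    systemctl --user disable rclone@pcloud.service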

    I have scheduled rclone to run at reboot via the OMV UI like this:



    Now I'm trying to stop the process and it is impossible. Whenever I do killall rclone I can see via ps aux | grep rclone that the process is gone, but a second or two later rclone is back. I don't understand what is going on.


    I saw in the documentation that the UI doesn't write directly to the crontab, but I would still expect that adding a command to the scheduler does essentially the same thing as adding it to the crontab, i.e. if it is scheduled to be executed at reboot, it will execute at reboot and never again until the next reboot.


    OMV is clearly not doing that and I fail to understand what it is doing or how I can prevent it from doing so. I have even disabled the scheduled task in the UI but rclone is still being restarted.


    Could anyone explain?

    Yesterday I upgraded to openmediavault 6.3.1-1. No problem.


    Today, from one second to the next, the server lost connectivity, so I eventually had to restart it. Once it was back, I found this in `/var/log/messages`:



    From what I can tell (which is almost nothing), OMV started to update something (smartmontools?) which somehow caused the network connection to break down.


    Edit: sorry, I realized that the update happened hours before the network issue. So I have no idea what caused the bridge to break down...


    What happened? And what should I do about it?

    A few weeks or so ago I installed ZeroTier on my OMV server via the CLI (I tried running it in a Docker container first, but couldn't get it to work). I installed it manually because my impression was that OMV doesn't support ZeroTier natively. But now I see a ZeroTier update in my OMV updates:



    So I realized I must be misunderstanding something, because to me this looks like OMV does somehow support ZeroTier. But then it occurred to me that OMV might simply be showing all system updates, regardless of whether something was installed via OMV or not. Is that correct?


    In that case, I just wonder whether there is anything wrong with installing something via the CLI and then updating it via the UI.
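
    If the update page really is just surfacing whatever apt reports, a quick way to sanity-check that (nothing OMV-specific assumed here) is to compare it with apt's own list; a package installed from the CLI, such as zerotier-one, should then appear in both places:

    Code
    # What apt itself considers upgradable
    sudo apt update
    apt list --upgradable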


    Related to that: I don't remember installing Portainer and Yacht via OMV (though it's possible I did; I've been running OMV for some years now), but OMV does see the Portainer and Yacht instances in the UI. Does that mean they will be kept up to date via the OMV update management? I'm asking because Portainer has for some time been telling me in its UI that there is an update, but none is showing up in the OMV update management. Perhaps it just takes some extra time? Or do I still have to go through the somewhat tricky process of updating Portainer manually (because Portainer can't update itself)?

    Excellent! Thanks for explaining. No, my VMs are not similar, and I do have 24 GB of RAM, so whatever savings KSM produces, I can probably afford to do without them. I will try disabling KSM and hopefully forget about it.
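
    For reference, this is roughly what "stop and disable the service" would look like; the service names ksmtuned/ksm are an assumption based on what the kvm plugin typically pulls in, so check the actual unit names first:

    Code
    # Find the KSM-related units on this system, then stop and disable them
    systemctl list-units '*ksm*'
    sudo systemctl stop ksmtuned.service ksm.service
    sudo systemctl disable ksmtuned.service ksm.service

    # Confirm KSM is no longer scanning (0 means off); write 0 if it is still 1
    cat /sys/kernel/mm/ksm/run
    echo 0 | sudo tee /sys/kernel/mm/ksm/run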

    But all the kvm plugin does is install it as a dependency. If you want to disable it, just stop and disable the service.

    The problem is, I don't know if I want to disable it. What does it do? I have two KVM virtual machines running. Do these need the ksm service?

    A couple of weeks ago, some OMV system process started to use a lot of CPU:



    I started looking into this today and it seems quite evident that the perpetrator is ksmd:


    What can I do about this?


    I found this: https://serverfault.com/a/1064801/399289 but I am somewhat reluctant to tweak these settings as I assume that OMV is using reasonable defaults...
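
    Before touching any of the knobs from that kind of answer, it may be worth checking how hard ksmd is actually working; these sysfs files are standard KSM counters and tunables, not OMV-specific settings:

    Code
    # How much KSM is currently sharing/scanning
    grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
    # The two tunables usually adjusted to reduce ksmd CPU usage
    grep . /sys/kernel/mm/ksm/sleep_millisecs /sys/kernel/mm/ksm/pages_to_scan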

    Patience :)

    LOL. Yes, that's probably true in general, but in this case I was unsure what was going on and wanted to understand it. So thanks for clarifying. I now realize that releases don't seem to be published on GitHub at all, since https://github.com/openmediavault/openmediavault/releases is empty...


    Anyway: one question regarding the latest release:


    Quote

    Adapted Samba vfs_fruit settings according to the wiki to better work with Mac OS X. The following environment variables have been introduced:

    OMV_SAMBA_SHARE_FRUIT_VETOAPPLEDOUBLE (defaults to ‘no’)

    OMV_SAMBA_SHARE_FRUIT_NFSACES (defaults to ‘no’)

    OMV_SAMBA_SHARE_FRUIT_WIPEINTENTIONALLYLEFTBLANKRFORK (defaults to ‘yes’)

    OMV_SAMBA_SHARE_FRUIT_DELETEEMPTYADFILES (defaults to ‘yes’)

    I wonder whether there is anything I need to do with my SMB settings, which currently look like this:



    I would assume that I can (should?) remove those settings that became redundant in the new release?
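
    For reference, these are the vfs_fruit options that the new environment variables map to (option names from the vfs_fruit manual page); whether they actually appear in the share's extra options shown above is an assumption, but if they do, they would now merely repeat the new defaults:

    Code
    fruit:veto_appledouble = no
    fruit:nfs_aces = no
    fruit:wipe_intentionally_left_blank_rfork = yes
    fruit:delete_empty_adfiles = yes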

    OK, I see. It's good to know that there are still use cases for AFP. I have had both AFP and SMB running side by side for some time now, but it caused some confusion because the shares had identical names (since they share identical directories on the server), so I figured I should either rename them or at least only mount the AFP shares on my Macs. Then I read about SMB being faster etc., so I thought perhaps I should get rid of AFP altogether. Hence my question.


    I haven't gotten Spotlight to reliably index those SMB shares, which might actually have been the reason why I added AFP at some point. But since indexing still seems to fail, I guess it didn't help...

    Look for the kernel you are running. If you don't reboot, the running kernel is the same as before upgrading.

    Yes, that's what the second screenshot above is about. I'm running 5.4.174-2-pve, which, I believe, is a Proxmox kernel. So the funny thing is that I was running a Proxmox kernel on OMV6 without having the kernel plugin installed...
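
    A quick way to check this, with nothing plugin-specific assumed: the running kernel and the kernels installed on disk can differ until the next reboot.

    Code
    uname -r                                    # kernel currently running (here: 5.4.174-2-pve)
    dpkg -l | grep -E 'pve-kernel|linux-image'  # kernel packages installed via apt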

    You should restart; if there are problems to solve, the sooner the better.

    You have a point.


    But the most pressing argument is this one:


    The kernel thing is a bit confusing, though. I believe I was running a Proxmox kernel before the upgrade, but after the upgrade the kernel option wasn't even available anymore; I had to (re-?)install the kernel plugin. But if using a new kernel requires a reboot, then I must still be running the (old) Proxmox kernel (despite the option having disappeared after the upgrade)... Indeed:


    Funny... I'm almost afraid to reboot now...