Posts by flvinny521

    I have posted about this issue on the relevant GitHub page, but I think the issue may not be specific to that project, so I figured I would solicit some help here as well.


    I am running the auanasgheps snapraid-aio-script in place of the built-in script to run a SnapRAID sync each night at midnight. One feature of the script is that it uses Apprise to send notifications when the job is triggered and a summary once the job has completed (how many new files added, how many removed, etc.). I created the scheduled task directly in the OMV interface and it runs as root (full settings below).


    When I open the task and manually start it for the first time, the dependency pipx is installed, and I am asked to run the script again. Upon doing so, the script runs flawlessly: pipx installs Apprise, the sync and scrub commands run as configured, and I eventually receive the job completion notification. This is true every subsequent time I go into the OMV interface and trigger the job manually.


    However, when I allow the job to be triggered at midnight, it attempts to download pipx again (which is already installed and can be verified with apt as seen below), and fails, causing the script to end prematurely.


    Any suggestions on what is causing this issue?


    (screenshot attached)
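    Since the only difference is how the job gets launched, my working theory is that the scheduled run sees a different environment (PATH, HOME, etc.) than a manual run from the OMV interface. As a rough sketch of how I plan to compare the two (the log path is just a placeholder), I'll temporarily add something like this to the top of the script:


    Code
    # dump the environment and the pipx location for this run (placeholder log path)
    {
      date
      echo "PATH=$PATH"
      echo "HOME=$HOME"
      command -v pipx || echo "pipx not found in PATH"
    } >> /tmp/snapraid-env-debug.log 2>&1


    Running the task once manually and once on the midnight schedule should then show whether the two runs see the same PATH and find pipx in the same place.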


    I am not seeing this issue with speed, but I have seen quite a few issues with DNS not resolving inside my Docker containers. This has been happening for a couple of months and I couldn't work out what had changed.


    It turned out that the issue was related to AdGuard's DNS rate limit together with some change in Docker's DNS handling. I fixed the issue by setting AdGuard to have no DNS rate limit.

    That's very interesting. I could see how downloading through Usenet could bump into that limit very easily, as there are multiple connections to multiple download servers all running concurrently. I just made that change and will keep my fingers crossed, thanks for the suggestion.
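    In the meantime, here's a rough way I can check whether lookups from the OMV box are actually being throttled by AdGuard (a sketch; 192.168.1.2 stands in for my AdGuard address, and it assumes dig is installed):


    Code
    # fire a burst of lookups at the AdGuard resolver and flag any that fail
    for i in $(seq 1 100); do
      dig +short +time=2 +tries=1 @192.168.1.2 google.com > /dev/null || echo "query $i failed"
    done


    If failures only start appearing partway through the burst, that points at rate limiting rather than a general DNS outage.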

    OK, nothing interesting in there, unfortunately. It's just full of virtual interfaces going up and down, which is usually due to Docker containers starting and stopping. From what I can see it's a general Linux issue rather than anything OMV-specific, so you might be better off asking in a general Linux subreddit, for example.


    I was thinking that maybe it is something Docker-specific, because the last few times my speeds came to a halt, I could still ping, so DNS seemed unaffected. Since my download utility is running in a container, and I haven't verified download speeds any other way, maybe it's only affecting Docker (or even that one container specifically).
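    Next time it happens, I'll try to confirm that by running the same download test on the host and inside the container (a sketch; the test URL is only an example, the container name matches my compose file, and it assumes curl is present in the image):


    Code
    # measured download speed in bytes/sec from the host
    curl -o /dev/null -s -w 'host: %{speed_download}\n' https://speed.hetzner.de/100MB.bin
    # the same test from inside the SABnzbd container
    docker exec sabnzbd2 curl -o /dev/null -s -w 'container: %{speed_download}\n' https://speed.hetzner.de/100MB.bin


    If the host number stays normal while the container number collapses, that narrows it down to Docker (or that one container's networking) rather than the connection itself.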

    On my OMV server, I have SABnzbd installed via Docker to download from Usenet. Randomly, I will notice that my download speed drops from the normal range (around 75-90 MB/s) to a much lower speed (often <5 MB/s). It is usually accompanied by messages that a handshake to one of my configured Usenet servers has timed out (screenshot below). When this happens, it is sometimes, but not always, accompanied by DNS failures (trying to ping google.com will fail, for example).


    My network interface was previously set up as a bridge (needed for a Docker container I no longer use), but today I used omv-firstaid to set it back to a regular Ethernet interface, and the problem has persisted. The details are below. It is set to use my UniFi Dream Machine for DNS, which in turn points to my self-hosted AdGuard instance (on a separate device, not the same physical machine as OMV). I have previously changed this to use Google or Cloudflare DNS, but the problem was not resolved, so I changed it back again.


    When this occurs, rebooting OMV immediately resolves the problem: the same files begin downloading at normal speeds again and the handshake errors stop.


    Any idea where to begin troubleshooting?
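    My rough plan for the next occurrence is to at least separate name resolution from general connectivity before rebooting, since the DNS failures only show up some of the time (a quick sketch):


    Code
    ping -c 3 8.8.8.8        # raw connectivity, no DNS involved
    ping -c 3 google.com     # only fails if name resolution is the problem
    cat /etc/resolv.conf     # confirm which resolver the host is actually pointed at


    That should at least tell me whether resolution or the connection itself is falling over when the handshakes start timing out.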


    Read the wiki - https://wiki.omv-extras.org/do…v7_plugins:docker_compose - specifically the backup section that talks about the SKIP_BACKUP flag.

    Perfect, so this is intended behavior and I should manually specify the /share directory to be skipped. Can I assume that the partial backup that I interrupted was non-destructive and all the files rsynced still exist at /share? I spot-checked a few specific files and everything seems to be intact at the source directory, but doing this from my phone makes it a bit trickier to verify.


    If the backup doesn't alter or delete the source files, I'll just delete the backup and move forward from there.

    EDIT - I noticed that the SABnzbd backup process was actually still running and appears to have been moving the entire contents of /share to the container backup. I killed the rsync process and disabled the backup until I get my head around this.
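    Once I'm back at a real keyboard, my plan for confirming that nothing actually left /share is a dry-run compare of the partial backup against the source (a sketch; the backup path is a guess and needs to be swapped for wherever the compose plugin wrote it):


    Code
    # itemize anything present in the partial backup that is missing or different at the source
    # (dry run only -- nothing is copied or deleted)
    rsync -rin /path/to/compose-backup/sabnzbd2/share/ /share/ | head -n 50


    No output would mean the source still contains everything the backup does, which is what I'd expect if it was a plain rsync copy rather than a move.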



    I have just begun playing with the Compose plug-in as a way to manage the Docker containers I had created manually through Portainer or docker compose. I was interested in simplifying backups and restorations of containers and OMV itself, and it seems like this is the way to go.


    My Compose settings are below. Config sits on my SSD system drive and is where I've been storing all my compose files and the persistent configuration for each container. Pool is a large mergerfs pool consisting of 20-ish drives, and "share" is the single top-level folder that contains all the data. This is split into "storage" and "media," with each of those being further split, and so on.



    I began by migrating a few less-used containers, MeTube and Bonob, which seemed to go well, so I then created one for my primary media downloading service, SABnzbd. The compose file looks like this:


    services:
      sabnzbd:
        image: lscr.io/linuxserver/sabnzbd:latest
        container_name: sabnzbd2
        environment:
          - PUID=1000
          - PGID=100
          - TZ=America/New_York
        volumes:
          - /config/sabnzbd:/config
          - /share:/share
        ports:
          - 38092:8080
        restart: unless-stopped


    I also have backups enabled every Tuesday at 11PM.



    At midnight each day, a SnapRAID sync and scrub runs, which usually takes between 6 and 8 hours. When I noticed that it was still running after about 11 hours, I looked at the output and found that the backup of the SABnzbd container appears to have included the entire contents of the /share directory that was mapped inside the container. I assume these are hardlinks, as there's no way I could store duplicates of all the data in that folder. However, what I want to back up is only the compose file itself and the persistent configuration, not the entire contents of every mapped directory in each container.
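    Before leaning on that assumption, I'd like to check whether the copies in the backup really share inodes with the originals; matching device and inode numbers would mean hardlinks, anything else means full copies (a sketch with made-up file and backup paths):


    Code
    # same device and inode number = hardlink, otherwise a separate full copy
    stat -c '%d:%i %n' /share/media/example.mkv
    stat -c '%d:%i %n' /path/to/compose-backup/sabnzbd2/share/media/example.mkv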


    I assume I've made an error with my configuration somewhere. Can anybody help me identify it? I'm trying to post an example of the line in the snapraid output that led me to these conclusions, but I am traveling and only have JuiceSSH on Android to access the system. I can't seem to figure out how to copy the text that extends past the width of the console window.

    I am pulling a data drive from my SnapRAID array but don't see a way to follow the official FAQ steps using the OMV plugin. I have already transferred all the data off this drive to other data drives in the array using rsync. Now, the official documentation says to follow these steps:


    Code
    How can I remove a data disk from an existing array?
    To remove a data disk from the array do:
    
    1. Change in the configuration file the related "disk" option to point to an empty directory
    2. Remove from the configuration file any "content" option pointing to such disk
    3. Run a "sync" command with the "-E, --force-empty" option:
       snapraid sync -E
       The "-E" option tells SnapRAID to proceed even when detecting an empty disk.
    4. When the "sync" command terminates, remove the "disk" option from the configuration file.
    Your array is now without any reference to the removed disk.
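    For reference, here is roughly what those steps would look like if I were editing the config by hand (a sketch with made-up paths, not my actual layout):


    Code
    # step 0: create an empty directory to stand in for the removed disk
    mkdir -p /srv/snapraid-empty
    # step 1 (snapraid.conf): point the a1 entry at it, e.g.
    #   disk a1 /srv/snapraid-empty/
    # step 2 (snapraid.conf): delete any "content" line stored on that disk
    # step 3: run a sync that accepts the now-empty disk
    snapraid sync -E
    # step 4: once the sync finishes, remove the "disk a1" line entirely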


    I know that I can't manually edit the config file since the plugin will overwrite it, so how do I point the drive ("a1" in my case) to an empty directory? This is my current config:


    Not sure I would reinstall OMV just because of this. If you can chroot into the install, you can install another kernel.


    I think OMV 7 is safe. I am running it on all of my systems.


    Well this was a daunting issue for me to stumble into this morning, but thanks to your suggestions, I'm sticking with 6.9.11 for a bit longer. I was able to chroot in and install a new kernel as you said. I'll post my process here on the off chance somebody else runs into the same issue (or, more likely, I somehow do it again in the future).


    On a side note, I greatly appreciate your dedication to this project and all the time you put into helping the community.


    Boot into a live Ubuntu ISO

    Mount the OMV system drive: sudo mount /dev/nvme0n1p2 /mnt

    Mount the EFI partition: sudo mount /dev/nvme0n1p1 /mnt/boot/efi

    Mount the additional filesystems: for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -B $i /mnt$i; done

    Enter the chroot: sudo chroot /mnt

    Install the new kernel: sudo apt install linux-image-x-amd64

    Exit the chroot (Ctrl+D)

    Reboot
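    If anyone follows this, one extra sanity check before the exit and reboot steps is to confirm the new kernel image and a matching GRUB entry actually made it onto the disk, something like:


    Code
    # still inside the chroot: list installed kernel images and the GRUB menu entries
    ls -1 /boot/vmlinuz-*
    grep '^menuentry' /boot/grub/grub.cfg
    # if the new kernel is missing from grub.cfg, regenerate it
    update-grub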

    I would put the Debian netinst ISO on a USB stick and boot from it. Then choose to repair GRUB.

    Just did this a couple of times after reading the documentation, as it had been a while since I'd done any of this. Ultimately, the boot still hangs at the same place.


    As a sanity check, here's what I did in the live iso:


    Mounted my system partition as root (/dev/nvme0n1p2)

    Accepted the prompt to mount the /boot/efi partition

    Selected to reinstall GRUB

    Entered /dev/nvme0 as the device on which to install GRUB

    Rebooted


    Is there something I can verify by launching a shell and viewing the bootloader files?
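    For what it's worth, here is what I plan to look at from a rescue shell unless someone has a better idea (a sketch; it assumes the system partition is mounted at /mnt as before and a standard Debian EFI layout):


    Code
    ls /mnt/boot/efi/EFI                                  # should contain a 'debian' folder with grubx64.efi
    ls -1 /mnt/boot/vmlinuz-* /mnt/boot/initrd.img-*      # kernel and initramfs the menu needs
    grep '^menuentry' /mnt/boot/grub/grub.cfg             # entries GRUB will actually offer
    efibootmgr -v                                         # firmware boot entries (needs efivarfs available)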