Posts by flvinny521

    Read the wiki - https://wiki.omv-extras.org/do…v7_plugins:docker_compose - specifically the backup section that talks about the SKIP_BACKUP flag.

    Perfect, so this is intended behavior and I should manually specify the /share directory to be skipped. Can I assume that the partial backup I interrupted was non-destructive and that all the files rsynced still exist at /share? I spot-checked a few specific files and everything seems to be intact in the source directory, but doing this from my phone makes it a bit trickier to verify.


    If the backup doesn't alter or delete the source files, I'll just delete the backup and move forward from there.

    EDIT - I noticed that the sabNZBD backup process was actually still running and appears to have been copying the entire contents of /share into the container backup. I killed the rsync process and disabled the backup until I can get my head around this.
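    For what it's worth, rsync does not modify or delete source files unless it is invoked with --remove-source-files (or a --delete flag pointed at the source by mistake), so an interrupted copy of this kind normally leaves /share untouched. A minimal demonstration using temporary directories rather than the real paths:

```shell
# Demo in temp dirs: after an rsync copy, the source files are intact.
# rsync only removes source files when --remove-source-files is passed.
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/file.txt"
rsync -a "$src/" "$dst/"
ls "$src"            # file.txt is still present in the source
```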



    I have just begun playing with the Compose plugin as a way to manage the docker containers I had previously created manually through Portainer or docker compose. I was interested in simplifying backups and restorations of both the containers and OMV itself, and it seems like this is the way to go.


    My Compose settings are below. Config sits on my SSD system drive and is where I've been storing all my compose files and the persistent configuration of each container. Pool is a large mergerfs pool consisting of 20-ish drives, and "share" is the single top-level folder that contains all the data. This is split into "storage" and "media," with each of those being further split, and so on.



    I began by migrating a few less-used containers, MeTube and Bonob, which seemed to go well, so I then created one for my primary media downloading service, sabNZBD. The compose file looks like this:


    services:
      sabnzbd:
        image: lscr.io/linuxserver/sabnzbd:latest
        container_name: sabnzbd2
        environment:
          - PUID=1000
          - PGID=100
          - TZ=America/New_York
        volumes:
          - /config/sabnzbd:/config
          - /share:/share
        ports:
          - 38092:8080
        restart: unless-stopped


    I also have backups enabled every Tuesday at 11PM.



    At midnight each day, a snapraid sync and scrub runs, which usually takes between 6 and 8 hours. When I noticed that it was still running after about 11 hours, I looked at the output and found that the backup of the sabNZBD container appears to have included the entire contents of the /share directory that was mapped inside the container. I assume these are hardlinks, as there's no way I could store duplicates of all the data in that folder. However, I want to back up only the compose file itself and the persistent configuration, not the entire contents of every mapped directory in each container.
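    One quick way to test the hardlink theory from a shell: compare inode numbers with stat. Hardlinked files share an inode; full copies do not. A demo with temporary files (the real check would compare a file under /share against its counterpart in the backup directory):

```shell
# Demo: hardlinks share an inode number; copies get a new one.
tmp=$(mktemp -d)
echo "data" > "$tmp/original"
ln "$tmp/original" "$tmp/hardlink"     # hardlink -> same inode
cp "$tmp/original" "$tmp/copy"         # copy -> different inode
stat -c '%i %n' "$tmp"/*               # print inode number and name
```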


    I assume I've made an error in my configuration somewhere. Can anybody help me identify it? I'm trying to post an example of the line in the snapraid output that led me to these conclusions, but I'm traveling and only have JuiceSSH on Android to access the system, and I can't figure out how to copy text that extends past the width of the console window.
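    A workaround for grabbing lines wider than a phone terminal: redirect the output to a file and wrap it there, so the whole line can be read or copied in chunks. A demo with a generated stand-in (the real source would be whatever log or command produced the snapraid output):

```shell
# Demo: write an over-long line to a file, then wrap it to console width.
line=$(printf 'x%.0s' $(seq 1 200))    # stand-in for a 200-char log line
echo "$line" > /tmp/longline.txt
fold -w 60 /tmp/longline.txt           # prints it as 60-char rows
```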

    I am pulling a data drive from my SnapRAID array but don't see a way to follow the official FAQ steps using the OMV plugin. I have first transferred all data off this drive to other data drives in the array using rsync. Now, the official documentation says to follow these steps:


    Code
    How can I remove a data disk from an existing array?
    To remove a data disk from the array do:
    
    Change in the configuration file the related "disk" option to point to an empty directory
    Remove from the configuration file any "content" option pointing to such disk
    Run a "sync" command with the "-E, --force-empty" option:
    snapraid sync -E
    The "-E" option tells SnapRAID to proceed even when detecting an empty disk.
    When the "sync" command terminates, remove the "disk" option from the configuration file.
    Your array is now without any reference to the removed disk.
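    Applied to a disk named "a1", the FAQ's first three steps amount to a config edit like this (a sketch with placeholder paths, setting aside for the moment that the plugin regenerates the file):

```
# snapraid.conf sketch (placeholder paths)
# before:
#   content /srv/dev-disk-by-uuid-XXXX/snapraid.content   <- delete this line
#   disk a1 /srv/dev-disk-by-uuid-XXXX/
# after, pointing the disk at an empty directory:
disk a1 /var/empty-a1/
# then run: snapraid sync -E
# and finally delete the "disk a1" line as well
```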


    I know that I can't manually edit the config file since the plugin will overwrite it, so how do I point the drive ("a1" in my case) to an empty directory? This is my current config:


    Not sure I would reinstall OMV just because of this. If you can chroot into the install, you can install another kernel.


    I think OMV 7 is safe. I am running it on all of my systems.


    Well this was a daunting issue for me to stumble into this morning, but thanks to your suggestions, I'm sticking with 6.9.11 for a bit longer. I was able to chroot in and install a new kernel as you said. I'll post my process here on the off chance somebody else runs into the same issue (or, more likely, I somehow do it again in the future).


    On a side note, I greatly appreciate your dedication to this project and all the time you put into helping the community.


    Boot into a live Ubuntu ISO

    Mount the OMV system drive: sudo mount /dev/nvme0n1p2 /mnt

    Mount the EFI partition: sudo mount /dev/nvme0n1p1 /mnt/boot/efi

    Mount the additional filesystems: for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -B $i /mnt$i; done

    Enter the chroot: sudo chroot /mnt

    Install the new kernel: sudo apt install linux-image-x-amd64

    Exit the chroot (Ctrl+D)

    Reboot
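    The steps above can be collected into one script. A sketch, assuming the same device names (run as root from the live ISO; it defaults to only printing the commands, and the Debian metapackage linux-image-amd64 stands in for the specific kernel version):

```shell
#!/bin/sh
# Sketch of the chroot kernel-reinstall steps above. Device names match
# the post (/dev/nvme0n1p1 = EFI, p2 = system); adjust for your hardware.
# Defaults to a dry run that only prints each command; DRY_RUN=0 executes.
run() { if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; else echo "$@"; fi; }

run mount /dev/nvme0n1p2 /mnt              # OMV system partition
run mount /dev/nvme0n1p1 /mnt/boot/efi     # EFI partition
for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do
  run mount -B "$i" "/mnt$i"               # bind mounts for the chroot
done
run chroot /mnt apt install -y linux-image-amd64
run reboot
```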

    I would put the Debian netinst ISO on a USB stick and boot from it, then choose to repair GRUB.

    Just did this a couple of times after reading the documentation, as it had been a while since I'd done any of this. Ultimately, the boot still hangs at the same place.


    As a sanity check, here's what I did in the live iso:


    Mounted my system partition as root (/dev/nvme0n1p2)

    Accepted the prompt to mount the /boot/efi partition

    Selected to reinstall GRUB

    Entered /dev/nvme0 as the device on which to install GRUB

    Rebooted


    Is there something I can verify by launching a shell and viewing the bootloader files?

    Is a Debian kernel still installed on your system? Can you get your system to boot after selecting a Debian kernel on the GRUB screen?

    It appears to be, as the kernel is listed (along with memtest and UEFI settings) as an option in the GRUB menu, but I am not sure how to confirm that.


    Can you choose the other kernel on GRUB?

    The only other kernel available is the recovery version of the same kernel, but selecting it also causes the system to hang the same way. In fact, even trying to launch memtest results in the same issue.

    Today I attempted to install the KVM plugin, but it appeared to fail due to some other out-of-date packages. I refreshed the available updates and installed them (in the UI) and received the "connection lost" error. I refreshed my browser with Ctrl+Shift+R, but some updates still remained. I waited a while and then rebooted the server. However, the web UI would not load. I connected a monitor and found that the system hangs at the "Loading Linux 5.15.131-2-pve" message after the GRUB menu. Does anybody have suggestions on how to fix the system?


    Do you have any updates on this issue? I am experiencing the same problem.

    On OMV6 I have the same problem. The /run/php folder is not present after rebooting the system.

    The only plugin I'm using is the remotemount plugin. When I remove it, everything works and /run/php is still present after rebooting.

    For now I'm using a cronjob to reinstall php7.4-fpm @reboot. Not nice, but working for now.

    Does anyone have an idea why the folder is being deleted?
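    A lighter alternative to reinstalling the package on every boot, assuming the missing /run/php directory is the only symptom: have systemd-tmpfiles recreate it at boot. A sketch (the owner, group, and mode here are assumptions; check what php-fpm expects on your system):

```
# /etc/tmpfiles.d/php-run.conf (sketch; owner/group/mode are assumptions)
# Recreates /run/php at every boot via systemd-tmpfiles.
d /run/php 0755 www-data www-data -
```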



    Any update on this? I'm having the same problem now, and I am also not using the flashmemory plugin (OMV6 installed on an NVME drive).

    It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem, and this happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync folder2ram does between tmpfs and the mounted filesystem at shutdown is very important, so bad things can happen if it never runs. But it is hard to say why your system is mounted read-only. You are using mergerfs as well; if a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read-only. kern.log or messages in /var/log/ might be good places to look.


    Thanks for chiming in. While I had the mergerfs plugin installed, I hadn't actually created a pool with it yet, as the filesystems that were going to be used in the pool couldn't be mounted without running into the issues discussed in this thread.


    Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...

    Well, my disk was giving some errors about the superblock having an invalid journal and a corrupt partition table, so I used GParted to wipe the OS drive and installed OMV6 once again. This time I did everything EXCEPT install the flashmemory plugin and have had no issues whatsoever. By process of elimination, I think it is the likely culprit. Thanks for spending so much time working through this with me.


    ryecoaaron, any idea how flashmemory would render my root drive read-only?

    (Edit - See below; this is not fixed as I had hoped.) Since I had some time to kill and nothing to lose, I did a fresh installation of OMV 6. I followed almost exactly the same process, but this time I was able to mount all my filesystems without issue. Either the whole thing was a fluke, or one of the following things caused the error (unlike the first time, when I experienced all the problems, I didn't do either of these before mounting the filesystems):


    1. Changing my kernel to Proxmox and removing the non-Proxmox kernel
    2. Installing omv-extras and the following plugins: flashmemory, mergerfs, resetperms, snapraid, and symlinks


    Edit - Well, now I am unable to access the GUI (Error 500 - Internal Server Error, Failed to Connect to Socket). This time I installed omv-extras and all the plugins listed above AFTER everything was mounted. I have no evidence to support this, but I feel like it may be flashmemory. I noticed that it was not running (red status on the dashboard) and realized I had never rebooted after installing, so I rebooted to see if the service would run. Immediately, I was faced with this new issue.


    I found this thread which sounded similar, and tried the command that was suggested there:


    Code
    dpkg --configure -a
    dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system


    And then, to test this, did the following:

    Code
    mkdir test
    mkdir: cannot create directory ‘test’: Read-only file system


    So, somehow my root filesystem has been turned read-only. Thoughts?
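    Some read-only-root triage that may help (a sketch; the remount needs root, and ext4's default errors=remount-ro behavior means a journal error in the kernel log is the usual trigger):

```shell
# Show the mount options on / -- look for "ro" vs "rw" in the output.
findmnt -no OPTIONS /
# Likely next steps (need root; shown as comments so this is safe to run):
#   dmesg | grep -iE 'ext4|remount|read-only'   # find the triggering error
#   mount -o remount,rw /                       # temporary, until fsck/reboot
```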