Posts by flvinny521

    I am pulling a data drive from my SnapRAID array, but I don't see a way to follow the official FAQ steps using the OMV plugin. I have already transferred all data off this drive to the other data drives in the array using rsync. Now, the official documentation says to follow these steps:


    Code
    How can I remove a data disk from an existing array?
    To remove a data disk from the array do:
    
    In the configuration file, change the related "disk" option to point to an empty directory
    Remove from the configuration file any "content" option pointing to that disk
    Run a "sync" command with the "-E, --force-empty" option:
    snapraid sync -E
    The "-E" option tells SnapRAID to proceed even when it detects an empty disk.
    When the "sync" command terminates, remove the "disk" option from the configuration file.
    Your array is now without any reference to the removed disk.
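
    For reference, my understanding is that step 1 amounts to editing the "disk" line for that drive in snapraid.conf, roughly like this (the paths below are only an illustration, not my real layout):

    Code
    # Before: data disk "a1" points at the mounted drive (path is hypothetical)
    #disk a1 /srv/dev-disk-by-uuid-1234/
    # After: point it at an empty directory instead (also hypothetical)
    disk a1 /srv/snapraid-empty/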


    I know that I can't manually edit the config file since the plugin will overwrite it, so how do I point the drive ("a1" in my case) to an empty directory? This is my current config:


    Not sure I would reinstall OMV just because of this. If you can chroot into the install, you can install another kernel.


    I think OMV 7 is safe. I am running it on all of my systems.


    Well, this was a daunting issue to stumble into this morning, but thanks to your suggestions, I'm sticking with 6.9.11 for a bit longer. I was able to chroot in and install a new kernel as you said. I'll post my process here on the off chance somebody else runs into the same issue (or, more likely, I somehow do it again in the future).


    On a side note, I greatly appreciate your dedication to this project and all the time you put into helping the community.


    Boot into a live Ubuntu ISO

    Mount the OMV system drive: sudo mount /dev/nvme0n1p2 /mnt

    Mount the EFI partition: sudo mount /dev/nvme0n1p1 /mnt/boot/efi

    Mount additional filesystems: for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -B $i /mnt$i; done

    chroot: sudo chroot /mnt

    Install the new kernel: sudo apt install linux-image-x-amd64

    Exit the chroot (Ctrl+D)

    Reboot
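
    For convenience, here is the same sequence collected into one block (device names match my layout above, so adjust them for your own system; "x" in the kernel package name is a placeholder for whichever version you install):

    Code
    # From the live Ubuntu environment
    sudo mount /dev/nvme0n1p2 /mnt             # OMV root filesystem
    sudo mount /dev/nvme0n1p1 /mnt/boot/efi    # EFI system partition
    for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do
        sudo mount -B $i /mnt$i                # bind mounts needed inside the chroot
    done
    sudo chroot /mnt
    apt install linux-image-x-amd64            # already root inside the chroot; "x" = kernel version
    # exit the chroot with Ctrl+D, then reboot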

    I would put the debian netinst iso on a usb stick and boot from it. Then choose to repair grub.
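
    Writing the iso to the stick is just something like this (the device name is a placeholder, double-check it with lsblk first):

    Code
    # /dev/sdX is a placeholder for the USB stick -- verify with lsblk before running this
    sudo dd if=debian-netinst.iso of=/dev/sdX bs=4M status=progress conv=fsync

    Boot from it, and the installer's rescue mode has an option to reinstall the GRUB boot loader.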

    Just did this a couple of times after reading the documentation, as it had been a while since I'd done any of this. Ultimately, the boot still hangs at the same place.


    As a sanity check, here's what I did in the live iso:


    Mounted my system partition as root (/dev/nvme0n1p2)

    Accepted the prompt to mount the /boot/efi partition

    Selected to reinstall GRUB

    Entered /dev/nvme0 as the device on which to install GRUB

    Rebooted


    Is there something I can verify by launching a shell and viewing the bootloader files?
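
    For example, from a shell in the rescue environment I could imagine checking things like this (these are guesses on my part, with the root filesystem mounted at /mnt):

    Code
    efibootmgr -v                            # do the UEFI boot entries point at the debian grubx64.efi?
    ls /mnt/boot/efi/EFI/debian/             # is grubx64.efi actually on the EFI partition?
    ls /mnt/boot/                            # are the vmlinuz/initrd files for the pve kernel present?
    grep menuentry /mnt/boot/grub/grub.cfg   # which kernels does grub.cfg actually list?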

    Is a debian kernel still installed on your system? Can you get your system to boot after selecting a debian kernel on the GRUB screen?

    It appears to be, as the kernel is listed (along with memtest and UEFI settings) as an option in the GRUB menu, but I am not sure how to confirm that.


    Can you choose the other kernel on GRUB?

    The only other kernel available is the recovery version of the same kernel, but selecting it also causes the system to hang the same way. In fact, even trying to launch memtest results in the same issue.

    Today I attempted to install the KVM plugin, but it appeared to fail due to some other out-of-date packages. I refreshed the available updates and installed them (in the UI) and received the "connection lost" error. I refreshed my browser using Ctrl+Shift+R, but some updates still remained. I waited a while and then rebooted the server. However, the web UI would not load. I connected a monitor and found that the system hangs at the "Loading Linux 5.15.131-2-pve" message after the GRUB menu. Does anybody have suggestions on how to fix the system?


    Do you have any updates on this issue? I am experiencing the same problem.

    On OMV6 I have the same problem. The /run/php folder is not present after rebooting the system.

    The only plugin I'm using is the remotemount plugin. When I remove it, everything works and /run/php is still present after rebooting.

    For now I'm using a cronjob to reinstall php7.4-fpm @reboot. Not nice, but it works for now.
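
    The crontab entry itself is nothing fancy, roughly this (the exact line on my system may differ slightly):

    Code
    # /etc/crontab -- reinstall php7.4-fpm on every boot so /run/php gets recreated
    @reboot root apt-get install --reinstall -y php7.4-fpm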

    Does anyone have an idea why the folder is deleted?



    Any update on this? I'm having the same problem now, and I am also not using the flashmemory plugin (OMV6 installed on an NVME drive).

    It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem. This happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync that folder2ram does between tmpfs and the mounted filesystem at shutdown is very important, so bad things can happen if that never happens. But it is hard to say why your system is mounted read-only. You are using mergerfs as well. If a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read-only. kern.log or messages in /var/log/ might be good places to look.
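
    Something along these lines would show whether the kernel remounted the root filesystem read-only and which filesystem errors triggered it (adjust the log file names to whatever actually exists on your system):

    Code
    grep -iE "remount|read-only|ext4.*error" /var/log/kern.log /var/log/messages
    dmesg | grep -iE "remount|read-only|ext4"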


    Thanks for chiming in. While I had the mergerfs plugin installed, I hadn't actually created a pool with it yet, since the filesystems that were going to be used in the pool couldn't be mounted without running into the issues discussed in this thread.


    Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...

    Well, my disk was giving some errors about the superblock having an invalid journal and a corrupt partition table, so I used GParted to wipe the OS drive and installed OMV6 once again. This time I did everything EXCEPT install the flashmemory plugin and have had no issues whatsoever. I think this is the likely culprit by process of elimination. Thanks for spending so much time working through this with me.


    ryecoaaron, any idea how flashmemory would render my root drive read-only?

    (Edit - See below, not fixed as I had hoped) Since I had some time to kill and nothing to lose, I did a fresh installation of OMV 6. I followed almost the exact same process, but this time, I was able to mount all my filesystems without issue. Either the whole thing was a fluke, or one of the following things is what caused the error (I didn't do any of these before mounting the filesystems, unlike the first time, when I experienced all the problems):


    1. Changing my kernel to Proxmox and removing the non-Proxmox kernel
    2. Installing omv-extras and the following plugins: flashmemory, mergerfs, resetperms, snapraid, and symlinks


    Edit - Well, now I am unable to gain access to the GUI (Error 500 - Internal Server Error, Failed to Connect to Socket). This time I installed omv-extras and all the plugins listed above AFTER everything was mounted. I have no evidence to support this, but I feel like it may be flashmemory. I noticed that it was not running (red status on the dashboard) and realized I had never rebooted after installing it, so I rebooted to see if the service would run. Immediately I was faced with this new issue.


    I found this thread which sounded similar, and tried the command that was suggested there:


    Code
    dpkg --configure -a
    dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system


    And then, to test this, did the following:

    Code
    mkdir test
    mkdir: cannot create directory ‘test’: Read-only file system


    So, somehow my root filesystem has been turned read-only. Thoughts?
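
    In case it helps, these are the kinds of checks I figure I can run from the console (I assume the remount is only a stopgap and won't survive a reboot):

    Code
    findmnt /                                    # shows whether / is currently mounted ro or rw
    dmesg | grep -iE "read-only|ext4|i/o error"  # look for why the kernel flipped it to read-only
    sudo mount -o remount,rw /                   # temporary: remount read-write until the next reboot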

    Thanks votdev, I checked the log, and these are all the entries from the time I rebooted until I mounted the filesystem in the GUI, before I accepted the changes. Nothing here stands out to my eyes: https://pastebin.com/6cyfDV4k.


    After clicking the check box and confirming the changes (resulting in the errors described earlier), a great deal of the end of the log is gone completely. The timestamp of the latest entries is a full 5 hours earlier than in the previous log: https://pastebin.com/1KwKLXrq.

    See the output below (shortened for sanity). Afterwards, I rebooted and tried to mount again, same issue.

    Code
    omv-salt stage run prepare
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 6 (changed=5)
    Failed:    0
    ------------
    Total states run:     6
    Total run time:  16.266 s


    Code
    omv-salt stage run deploy
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 2 (changed=2)
    Failed:    0
    ------------
    Total states run:     2
    Total run time:  31.393 s

    Thanks for the heads up that mount is only temporary and does not persist through a reboot. I am open to suggestions on where to go from here.


    One of the drives is brand new with no data on it, so I tried a few things since there was no risk of data loss. First, the drive itself was visible in the Drives section of the GUI, so I tried to create a new filesystem on it, but the drive didn't show up in the drop-down menu. I assume this is because the existing filesystem was being detected. I then wiped the drive and created a new filesystem directly in OMV6 (this was much faster than in OMV5 on a 14TB drive, by the way). This newly created filesystem could also NOT be mounted; I experienced the exact same issues as with all the others.