Posts by hoobatoo

    /etc/initramfs-tools/conf.d/resume should get the updated swap UUID as well.

    It seems I missed that the swap-disable instructions were removed for OMV 6; apparently I successfully disabled swap under OMV 5, and that carried over to 6 with my in-place upgrade.


    I completed all the other steps in the OP's instructions, and both swapon -s and cat /proc/swaps accurately reference the swapfile I used in place of /path/to/swapfile. However, it appears that the resume file doesn't currently exist on my system.


    Is this a significant issue? If so, would I create and populate that file with /path/to/swapfile, or would I reference the UUID that was returned by the mkswap command?


    If it is the latter, I assume it would need to follow the form found here: https://ubuntuforums.org/showt…2&p=13800490#post13800490


    Code
    RESUME=UUID=MY-SWAPFILE-UUID

    Would I need to run any update commands, or would I then just need to reboot my system?
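
    If it helps, this is the rough sequence I have in mind, assuming the file only needs that one RESUME= line and an initramfs refresh afterwards (both assumptions on my part, please correct me if they're wrong):

    Code
    # sketch only: the UUID placeholder is the value printed by mkswap for my swapfile
    echo "RESUME=UUID=MY-SWAPFILE-UUID" > /etc/initramfs-tools/conf.d/resume
    update-initramfs -u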


    Thanks!

    @olduser. Thanks. I knew about "Ctrl +/-" in Linux; that is basic knowledge there. I just did not realize that the same shortcut is available in the OMV GUI.

    It's a browser feature, not specific to the OMV GUI, except that you access the OMV GUI in a browser.

    Funnily enough, that's why I stopped using gluetun: it would cause problems with other containers connecting to it. I changed to binhex's qbittorrentvpn and his delugevpn, and if needed ran the other containers through one of those. It's worked well for quite some time; I must admit I'd rather use gluetun, but needs must.

    Gluetun stopping other containers when the VPN disconnects is actually a feature, not a bug. This is one reason I use it, and the easiest way to manage that is through a single YAML file with multiple containers. But from reading the thread, that doesn't seem to be the dev's preferred method, so it won't really be supported in the new plug-in.


    Which is fine; I can continue using Portainer. It just seems a little silly that everyone wants all the existing bells and whistles of Portainer in the compose plug-in instead of just using Portainer.
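
    For reference, the single-YAML setup I mean looks roughly like this; the images and service names are just illustrative placeholders, not a tested config:

    Code
    # minimal sketch: a second container sharing gluetun's network stack
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=custom   # provider settings omitted here
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        network_mode: "service:gluetun"   # routes all traffic through the VPN container
        depends_on:
          - gluetun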

    Ok, thanks, I didn't realize it could be done in the GUI. Just so I have my head on straight here: the error was related to the config indicating that there were more than 6 parity drives, which appears to exceed the allowable number of parity drives per the SnapRAID documentation, correct?

    Code
    drives.drive[7].paritynum: The value 7 is bigger than 6.

    I thought I would document the changes I made to update this in case anyone else runs into similar issues or if I need to do it again for whatever reason.


    Looking at the SnapRAID Drives tab in the GUI again, it looks like the parity number incremented with each drive.



    So to edit each data drive, I needed to uncheck the Data check box, check the Parity check box so the Parity Num field would display, enter the new parity number, then uncheck Parity and recheck Data.


    I originally guessed arbitrarily and chose "1" for data1-4 and parity1, and "2" for data5-7 and parity2, but after saving each data drive I found it actually saved Parity Num as 1 for all drives except parity2, as you can see in the screenshot below.



    After successfully applying the changes in the GUI, the Info button in the Arrays tab functions as expected and all is well.
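
    For anyone searching later: my understanding is that, in plain snapraid.conf terms, the two parity drives map to the parity and 2-parity directives, roughly like this (paths are placeholders, not copied from my generated config, and the plugin's exact output may differ):

    Code
    # rough sketch of a two-parity-level layout; paths are placeholders
    parity   /srv/dev-disk-by-uuid-PARITY1/snapraid.parity
    2-parity /srv/dev-disk-by-uuid-PARITY2/snapraid.2-parity
    data d1  /srv/dev-disk-by-uuid-DATA1
    data d2  /srv/dev-disk-by-uuid-DATA2
    # ...data d3 through d7 follow the same pattern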


    Thanks so much for the help and again for all the work you put into OMV, the plugins, and the community!

    I received the following error in the webGUI SnapRAID plugin tab:

    Code
    A config file for this array does not exist.
    
    OMV\Exception: A config file for this array does not exist. in /usr/share/openmediavault/engined/rpc/snapraid.inc:424
    Stack trace:
    #0 [internal function]: OMVRpcServiceSnapRaid->executeCommand(Array, Array)
    #1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('executeCommand', Array, Array)
    #3 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SnapRaid', 'executeCommand', Array, Array, 1)
    #4 {main}

    This came shortly after I successfully upgraded from OMV 5 to OMV 6. In order to do so I deleted and recreated my mergerfs pool. I am able to successfully mount the pool using mergerfs, and I get the expected output from the snapraid status and snapraid smart CLI commands. In the webGUI, when I select the array and press any of the options under Info, the terminal popup displays but only reads END OF LINE.
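
    In case it helps with diagnosing, this is the kind of check I can run; the config file path pattern is a guess on my part, since I'm not sure what the plugin names its per-array configs on OMV 6:

    Code
    # the CLI works, so some snapraid config exists; listing candidates (name pattern is a guess)
    ls -l /etc/snapraid*.conf
    snapraid status | head -n 5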


    Here is the output of omv-showkey snapraid

    It is attached as a text file, as the message was too long otherwise.

    snapraid_showkey.txt


    When I run omv-salt deploy run snapraid, I get the following errors.

    Thanks!

    Nope, it doesn't appear to be referenced anywhere in fstab.


    Code
    root@omvserver:~# cat /etc/fstab | grep "79684322-3eac-11ea-a974-63a080abab18"
    root@omvserver:~#

    It also isn't shown in the blkid output, whereas /dev/sdn2 is present.

    Code
    root@omvserver:~# blkid | grep "3754f4b8-beb6-4111-82ce-d19539d4241a"
    /dev/sdn2: UUID="3754f4b8-beb6-4111-82ce-d19539d4241a" TYPE="ext4" PARTUUID="ed4fd749-28e2-47a0-8a49-4f80a3819a4c"
    root@omvserver:~# blkid | grep "79684322-3eac-11ea-a974-63a080abab18"
    root@omvserver:~#

    I was able to upgrade from OMV 5 to OMV 6 after removing my mergerfs pool. After upgrading, I noticed that I had a missing file system that wasn't showing as mounted or referenced in the webGUI. I went to remove this file system (/dev/sdk2) and, as I was doing so, noticed at the last second that the mount point was "/". Immediately after removing /dev/sdk2 I lost access to SSH and was stuck in an "apply settings" 502 error loop in the webGUI. (Yes, I know this was dumb, haha.)


    I decided to use a backup clone of my USB thumb drive that was still from prior to the OMV 6 upgrade.


    Looking at /etc/openmediavault/config.xml and the output of omv-showkey mntent, it shows that fsname /dev/sdk2 is associated with dir /.

    This is also reflected in the webGUI under Storage -> Filesystems.



    However, the output of lsblk shows that the mounted root partition is actually /dev/sdn2 and that /dev/sdk2 no longer exists.

    Code
    root@omvserver:~# lsblk /dev/sdn
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sdn      8:208  1 29.8G  0 disk
    ├─sdn1   8:209  1  512M  0 part /boot/efi
    ├─sdn2   8:210  1 28.4G  0 part /
    └─sdn3   8:211  1  977M  0 part
    root@omvserver:~# lsblk /dev/sdk
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sdk      8:160  0  2.7T  0 disk
    └─sdk1   8:161  0  2.7T  0 part /srv/dev-disk-by-uuid-0cd91f30-b4f1-4395-95c8-c10963e0ee8c

    The output of blkid /dev/sdn2 matches the root mount found in fstab as well.
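
    For anyone checking the same thing, something like this shows the comparison directly (the device name is taken from the lsblk output above):

    Code
    # compare the running root filesystem against blkid and fstab
    findmnt -no SOURCE,UUID /
    blkid -o value -s UUID /dev/sdn2
    awk '$2 == "/"' /etc/fstab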

    Anyway, I am just wondering what is causing this to persist in the config.xml file and even seemingly carry the /dev/sdk2 reference over after the upgrade to OMV 6. What is the best way to remove it?


    Or does it even matter at all that there is an incorrect mntent entry and a missing filesystem in the webGUI Filesystems tab?


    Thanks!

    You can use character classes in square brackets. These are my drives:

    dev-disk-by-uuid-16578662-b8be-4ded-a43b-bed36de32f6b
    dev-disk-by-uuid-468c061d-4943-44db-9d6e-7efad6fcad8f
    dev-disk-by-uuid-b000e4b4-0947-4345-b0e3-010472ab7c5b
    dev-disk-by-uuid-b31200c5-2906-4a2e-b8ed-bc428ef76da2
    dev-disk-by-uuid-bc99bd8a-854c-426a-96e7-a6187ead074a
    dev-disk-by-uuid-e1695f6d-370d-4ac2-aa49-bb3f9ed5bd75

    If I only want to include the first four, I use this expression:

    /srv/dev-disk-by-uuid-[14b][603]*/

    This will include all drives whose UUIDs start with 1, 4 or b and have a 6, 0 or 3 in the second position, which matches the first four UUIDs, and only those.
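
    Assuming that pattern is being used as a shell-style glob, you can preview exactly which pool members it matches, for example:

    Code
    # preview which directories the bracket expression matches (shell glob)
    ls -d /srv/dev-disk-by-uuid-[14b][603]*/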


    You don't have to do anything. The upgrade process should install openmediavault-mergerfs, migrate your unionfilesystems pool (which is mergerfs too), and remove unionfilesystems.

    That is a core OMV plugin, not an omv-extras plugin. And it is available on OMV 6.

    I tried upgrading from 5 to 6 last August and had it fail, but I couldn't capture logs due to user error. I use mergerfs and snapraid and found this thread, which seemed to point to my issue. From following this thread, it appeared that the 255-character systemd pool mount limit had been solved. I just tried upgrading again, and the upgrade failed while trying to restart collectd.service.


    Maybe I just screwed my system up in special and unique ways though? lol


    Or is the solution proposed by CrowleyAJ above still the viable alternative?


    Thanks for all your great work on this!


    Here is a snippet of the upgrade logs where the failure happened; let me know if this is enough or if I need to upload everything.