Glad to hear it! I've been avoiding upgrading from 6 to 7 until I have a moment to clone my USB stick boot drive on my daily driver PC.
Posts by hoobatoo
-
/etc/initramfs-tools/conf.d/resume should get the updated swap UUID as well.
It seems I missed the removal of the swap-disable instructions in OMV 6; apparently I successfully disabled swap under OMV 5, which then carried over to 6 with my in-place upgrade.
I completed all the other steps in the OP's instructions, and both swapon -s and cat /proc/swaps accurately reference the swapfile I used in place of /path/to/swapfile. However, it appears that the resume file doesn't currently exist on my system.
Is this a significant issue? If so, would I create and populate that file with /path/to/swapfile, or would I reference the UUID that was returned by the mkswap command?
If it is the latter I assume it would need to follow the form found here: https://ubuntuforums.org/showt…2&p=13800490#post13800490
Would I need to run any update commands, or would I then just need to reboot my system?
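If it's the UUID route, I'm assuming the file would just be a single line like this (UUID is a placeholder from the mkswap output, and the update-initramfs step is my guess from the initramfs-tools docs):
Code
# /etc/initramfs-tools/conf.d/resume
RESUME=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# (or RESUME=none if hibernation isn't used at all?)

# then regenerate the initramfs so it picks up the new value
update-initramfs -u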
Thanks!
-
@olduser. Thanks. I knew about "Ctrl +/-" in Linux; that's basic knowledge there. I just didn't realize that the same shortcut is available in the OMV GUI.
It’s a browser feature, not specific to the OMV GUI, except that you access the OMV GUI in a browser.
-
Did you reduce the display by accident?
Click Ctrl and PLUS (or Ctrl and the mouse wheel) and see if it changes.
Glad this was the right answer I gave him last night on Discord lol
-
OMV GUI -> Services -> Compose -> Files -> Create
Make a Portainer Container
Log in via http://LAN_IP:9000
Do what you need on portainer as you did before.
No, you won't have a button in omv-extras as before.
So?!?
That’s what he did? Why so aggro?
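For reference, the compose file you create in that first step can be as small as this (a sketch; the image tag, volume name, and published port are my assumptions, adjust to taste):
Code
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"        # web UI, matches the http://LAN_IP:9000 step above
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # let Portainer manage Docker
      - portainer_data:/data                        # persistent Portainer state
volumes:
  portainer_data: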
-
Funnily enough that's why I stopped using gluetun; it would cause problems with other containers connecting to it. I changed to binhex's qbittorrentvpn and his delugevpn, and if needed ran the other containers through one of those. It's worked well for quite some time. I must admit I'd rather use gluetun, but needs must.
Gluetun stopping other containers when the VPN disconnects is actually a feature, not a bug. That's one reason I use it, and the easiest way to manage it is through a single yaml with multiple containers. But from reading the thread that doesn't seem to be the dev's preferred method, so it won't really be supported in the new plug-in.
Which is fine, I can continue using Portainer; it just seems a little silly that everyone wants all the existing bells and whistles of Portainer in the compose plug-in instead of just using Portainer.
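The single-yaml pattern I mean looks roughly like this (a sketch, not the plug-in's way of doing it; the images, provider variable, and port are assumptions, check gluetun's wiki for your VPN):
Code
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                          # gluetun needs this for the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=custom        # assumption: fill in per gluetun's docs
    ports:
      - "8080:8080"                        # torrent client's web UI, published via gluetun
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"        # all traffic rides the VPN container;
    depends_on:                            # if gluetun stops, this loses network too
      - gluetun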
-
Generally transcoding 4K to 1080p is going to be much worse quality and much more intensive than just using a 1080p file.
-
Ok, thanks, I didn't realize it could be done in the GUI. Just so I have my head on straight here: the error was related to the config indicating that there were >6 parity drives, which exceeds the maximum number of parity drives allowed per the SnapRAID documentation, correct?
I thought I would document the changes I made to update this in case anyone else runs into similar issues or if I need to do it again for whatever reason.
Looking at the snapraid drives tab again in the GUI it looks like the parity number incremented up with each drive.
So to edit each data drive I needed to uncheck the data check box, check the parity check box so the parity number field would display, enter the new parity number, then uncheck parity, and recheck data.
I originally guessed arbitrarily and chose "1" for data1-4 and parity1 and "2" for data5-7 and parity2, but after saving each data drive I found it actually saved Parity Num as 1 for all drives except parity2, as you can see in the screenshot below.
After successfully applying changes in the GUI the info button in the Arrays tab functions as expected and all is well.
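For anyone searching later: with the parity numbers fixed, the config OMV generates under /etc/snapraid/ (with the omv-snapraid- prefix, per the salt template in my earlier error) should boil down to standard SnapRAID syntax like this; the paths here are placeholders, not my real drives:
Code
# /etc/snapraid/omv-snapraid-<array-uuid>.conf (placeholder paths)
parity   /srv/dev-disk-by-uuid-<parity1>/snapraid.parity
2-parity /srv/dev-disk-by-uuid-<parity2>/snapraid.2-parity
content  /srv/dev-disk-by-uuid-<data1>/snapraid.content
data d1  /srv/dev-disk-by-uuid-<data1>
data d2  /srv/dev-disk-by-uuid-<data2>
# ...one data line per data drive; SnapRAID supports at most 6 parity
# levels (parity through 6-parity), hence the "value 7 is bigger than 6" error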
Thanks so much for the help and again for all the work you put into OMV, the plugins, and the community!
-
Ah of course, cheap used 3TB enterprise SAS drives at $6/TB are my downfall again. lol
That's fine, how do I go about editing the parity number? Would any of the snapraid documentation point me in the right direction?
-
I received the following error in the webGUI SnapRAID plugin tab:
Code
A config file for this array does not exist.
OMV\Exception: A config file for this array does not exist. in /usr/share/openmediavault/engined/rpc/snapraid.inc:424
Stack trace:
#0 [internal function]: OMVRpcServiceSnapRaid->executeCommand(Array, Array)
#1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#2 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('executeCommand', Array, Array)
#3 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SnapRaid', 'executeCommand', Array, Array, 1)
#4 {main}
This followed shortly after I successfully upgraded from OMV5 -> OMV6. In order to do so I deleted and recreated my mergerfs pool. I am able to successfully mount the pool using mergerfs, and I get expected output from the snapraid status and snapraid smart CLI commands. In the webGUI, when I select the array and press any of the options under Info, the terminal popup displays but only reads END OF LINE.
Here is the output of omv-showkey snapraid
Attached as a textfile as the message was too long otherwise.
When I run omv-salt deploy run snapraid I get the following errors.
Code
root@omvserver:~# omv-salt deploy run snapraid
debian:
    Data failed to compile:
----------
    Rendering SLS 'base:omv.deploy.snapraid.default' failed:
    Jinja error: drives.drive[7].paritynum: The value 7 is bigger than 6.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1090, in render
    self.environment.handle_exception()
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 832, in handle_exception
    reraise(*rewrite_traceback_stack(source=source))
  File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 28, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 18, in top-level template code
  File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 465, in call
    return __context.call(__obj, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader.py", line 1235, in __call__
    return self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader.py", line 2268, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader.py", line 2283, in _run_as
    return _func_or_method(*args, **kwargs)
  File "/var/cache/salt/minion/extmods/modules/omv_conf.py", line 41, in get
    objs = db.get(id_, identifier)
  File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 85, in get
    query.execute()
  File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 735, in execute
    self._response = self._elements_to_object(elements)
  File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 493, in _elements_to_object
    result.validate()
  File "/usr/lib/python3/dist-packages/openmediavault/config/object.py", line 236, in validate
    self.model.validate(self.get_dict())
  File "/usr/lib/python3/dist-packages/openmediavault/config/datamodel.py", line 202, in validate
    self.schema.validate(data)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 175, in validate
    self._validate_type(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 230, in _validate_type
    raise last_exception
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 201, in _validate_type
    self._validate_object(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 306, in _validate_object
    self._check_properties(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 521, in _check_properties
    self._validate_type(value[propk], propv, path)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 230, in _validate_type
    raise last_exception
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 201, in _validate_type
    self._validate_object(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 306, in _validate_object
    self._check_properties(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 521, in _check_properties
    self._validate_type(value[propk], propv, path)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 230, in _validate_type
    raise last_exception
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 195, in _validate_type
    self._validate_array(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 297, in _validate_array
    self._check_items(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 549, in _check_items
    self._validate_type(itemv, schema['items'], path)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 230, in _validate_type
    raise last_exception
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 201, in _validate_type
    self._validate_object(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 306, in _validate_object
    self._check_properties(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 521, in _check_properties
    self._validate_type(value[propk], propv, path)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 230, in _validate_type
    raise last_exception
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 204, in _validate_type
    self._validate_integer(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 254, in _validate_integer
    self._check_maximum(value, schema, name)
  File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 328, in _check_maximum
    raise SchemaValidationException(
openmediavault.json.schema.SchemaValidationException: drives.drive[7].paritynum: The value 7 is bigger than 6.
; line 18
---
[...]
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set config = salt['omv_conf.get']('conf.service.snapraid') %}    <======================
{% set confDir = '/etc/snapraid' %}
{% set confPrefix = 'omv-snapraid-' %}
configure_borg_envvar_dir:
  file.directory:
[...]
---
Thanks!
-
Thanks! That worked, it is no longer in the webGUI.
-
Ok, thanks. Do I need to run any CLI commands or just reboot after editing config.xml?
-
I was able to upgrade from OMV 5 -> OMV 6 after removing my mergerfs pool and then upgrading. After upgrading I noticed a missing file system that wasn't showing as mounted or referenced anywhere in the webGUI. I went to remove this file system (/dev/sdk2) and, as I was doing so, noticed at the last second that its mount point was "/". Immediately after removing /dev/sdk2 I lost access to SSH and was stuck in an apply-settings 502 error loop in the webGUI. (Yes, I know this was dumb haha)
I decided to restore a backup clone of my USB thumb drive from before the OMV 6 upgrade.
Looking at /etc/openmediavault/config.xml and the output of omv-showkey mntent, it shows that fsname /dev/sdk2 is associated with dir /.
Code
omv-showkey mntent
<mntent>
  <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
  <fsname>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx|/dev/xxx</fsname>
  <dir>/xxx/yyy/zzz</dir>
  <type>none|ext2|ext3|ext4|xfs|jfs|iso9660|udf|...</type>
  <opts></opts>
  <freq>0</freq>
  <passno>0|1|2</passno>
  <hidden>0|1</hidden>
</mntent>
<mntent>
  <uuid>79684322-3eac-11ea-a974-63a080abab18</uuid>
  <fsname>/dev/sdk2</fsname>
  <dir>/</dir>
  <type>ext4</type>
  <opts>noatime,nodiratime,errors=remount-ro</opts>
  <freq>0</freq>
  <passno>1</passno>
  <hidden>1</hidden>
</mntent>
This is also reflected in the webGUI under Storage -> Filesystems.
However, the output of lsblk shows that the mounted root partition is actually /dev/sdn2 and that /dev/sdk2 no longer exists.
Code
root@omvserver:~# lsblk /dev/sdn
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdn      8:208  1 29.8G  0 disk
├─sdn1   8:209  1  512M  0 part /boot/efi
├─sdn2   8:210  1 28.4G  0 part /
└─sdn3   8:211  1  977M  0 part
root@omvserver:~# lsblk /dev/sdk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdk      8:160  0  2.7T  0 disk
└─sdk1   8:161  0  2.7T  0 part /srv/dev-disk-by-uuid-0cd91f30-b4f1-4395-95c8-c10963e0ee8c
The output of blkid /dev/sdn2 matches the root mount found in fstab as well.
Code
root@omvserver:~# blkid /dev/sdn2
/dev/sdn2: UUID="3754f4b8-beb6-4111-82ce-d19539d4241a" TYPE="ext4" PARTUUID="ed4fd749-28e2-47a0-8a49-4f80a3819a4c"
root@omvserver:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdb2 during installation
UUID=3754f4b8-beb6-4111-82ce-d19539d4241a /  ext4  noatime,nodiratime,errors=remount-ro  0  1
Anyway, I am just wondering what is causing this stale /dev/sdk2 reference to persist in config.xml and seemingly even carry over through the upgrade to OMV 6? What is the best way to remove it?
Or does it even matter that there is an incorrect mntent entry and a missing filesystem in the webGUI Filesystems tab?
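If hand-editing config.xml is the way, my rough plan is below (backup first; the grep just locates the stale <mntent> block so I can delete it in an editor, then reboot):
Code
# back up the OMV config database first
cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
# locate the stale entry and its surrounding <mntent>...</mntent> block
grep -n -B3 -A7 '/dev/sdk2' /etc/openmediavault/config.xml
# then delete that whole block by hand
nano /etc/openmediavault/config.xml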
Thanks!
-
Ok, I will give it a try thanks!
-
You can use character classes in square brackets. These are my drives:
dev-disk-by-uuid-16578662-b8be-4ded-a43b-bed36de32f6b
dev-disk-by-uuid-468c061d-4943-44db-9d6e-7efad6fcad8f
dev-disk-by-uuid-b000e4b4-0947-4345-b0e3-010472ab7c5b
dev-disk-by-uuid-b31200c5-2906-4a2e-b8ed-bc428ef76da2
dev-disk-by-uuid-bc99bd8a-854c-426a-96e7-a6187ead074a
dev-disk-by-uuid-e1695f6d-370d-4ac2-aa49-bb3f9ed5bd75
If I only want to include the first four I use this expression:
/srv/dev-disk-by-uuid-[14b][603]*/
This will include all drives whose UUIDs start with 1, 4 or b and have a 6, 0 or 3 in the second position, which matches the first four UUIDs, and only those.
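A quick way to sanity-check which paths such a pattern actually matches is to let the shell expand it (assuming shell-glob syntax, as above):
Code
# lists only the directories the pattern matches
ls -d /srv/dev-disk-by-uuid-[14b][603]*/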
You don't have to do anything. The upgrade process should install openmediavault-mergerfs, migrate your unionfilesystems pool (which is mergerfs too), and remove unionfilesystems.
That is a core omv plugin not an omv-extras plugin. And it is available on omv6.
I tried upgrading from 5 to 6 last August and had it fail, but I couldn't capture logs due to user error. I use mergerfs and snapraid and found this thread, which seemed to point to my issue. From following this thread it appeared that the 255-char systemd pool mount limit had been solved. I just tried upgrading again and the upgrade failed while trying to restart collectd.service.
Maybe I just screwed my system up in special and unique ways though? lol
Or is the solution proposed by CrowleyAJ above still the viable alternative?
Thanks for all your great work on this!
Here is a snippet of the upgrade logs where the failure happened; let me know if this is enough or if I need to upload everything.
Code
Failed to restart collectd.service: Unit srv-e46d63a6\x2d0e74\x2d40ff\x2d8a9c\x2def155179c1cd.mount failed to load properly: File name too long.
See system logs and 'systemctl status collectd.service' for details.
invoke-rc.d: initscript collectd, action "restart" failed.
* collectd.service - Statistics collection and monitoring daemon
     Loaded: loaded (/lib/systemd/system/collectd.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/collectd.service.d
             `-openmediavault.conf
     Active: active (running) since Mon 2023-02-13 13:40:35 CST; 8min ago
       Docs: man:collectd(1)
             man:collectd.conf(5)
             https://collectd.org
   Main PID: 10358 (collectd)
      Tasks: 12 (limit: 4915)
     Memory: 4.7M
     CGroup: /system.slice/collectd.service
             `-10358 /usr/sbin/collectd
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "interface" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "load" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "memory" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "rrdcached" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "syslog" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "unixsock" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: plugin_load: plugin "uptime" successfully loaded.
Feb 13 13:40:35 omvserver collectd[10358]: Systemd detected, trying to signal readiness.
Feb 13 13:40:35 omvserver systemd[1]: Started Statistics collection and monitoring daemon.
Feb 13 13:40:35 omvserver collectd[10358]: Initialization complete, entering read-loop.
dpkg: error processing package collectd (--configure):
 installed collectd package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of openmediavault:
 openmediavault depends on collectd; however:
  Package collectd is not configured yet.
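For context on the "File name too long" part: systemd derives a mount unit's name by escaping the mount path (every "-" becomes \x2d), so a mergerfs pool mounted at a path built from several long drive names can exceed the 255-character unit-name limit. The escaping can be reproduced with systemd-escape; the path below is just the single-UUID unit from my log, for illustration:
Code
systemd-escape --path --suffix=mount /srv/e46d63a6-0e74-40ff-8a9c-ef155179c1cd
# -> srv-e46d63a6\x2d0e74\x2d40ff\x2d8a9c\x2def155179c1cd.mount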