Hi everyone,
After much effort trying to resolve this myself, and after consulting previous posts/threads on this forum and on Reddit, I'm calling on your help and knowledge (and posting this for future Google-search reference).
I periodically plug in an offsite backup drive (a USB hard drive) onto which I back up specific folders with rsync. I've done this for years, since the days of OMV4, and have never had any issue (my config has stayed pretty much the same).
When I mount the drive under Storage > Filesystems > + Mount, I now get the following error when applying the changes from the web GUI:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color collectd 2>&1' with exit code '1': debian:
Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.collectd.plugins.disk' failed: Jinja error: get_fs_parent_device_file() takes 1 positional argument but 2 were given
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl
output = template.render(**decoded_context)
File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "<template>", line 52, in top-level template code
File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 465, in call
return __context.call(__obj, *args, **kwargs)
File "/usr/lib/python3/dist-packages/salt/loader.py", line 1235, in __call__
return self.loader.run(run_func, *args, **kwargs)
File "/usr/lib/python3/dist-packages/salt/loader.py", line 2268, in run
return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
File "/usr/lib/python3/dist-packages/salt/loader.py", line 2283, in _run_as
return _func_or_method(*args, **kwargs)
TypeError: get_fs_parent_device_file() takes 1 positional argument but 2 were given
; line 52
---
[...]
# "fsname": "008530ff-a134-4264-898d-9ce30eeab927",
# }
{% if salt['mount.is_mounted'](mountpoint.dir) %}
# Get the canonical device file to extract the device name. The collectd disk
# plugin wants this format: https://collectd.org/wiki/index.php/Plugin:Disk
{% set parent_device_file = salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname, True) %} <======================
{% if parent_device_file | is_device_file %}
# Extract the device name from '/dev/xxx'.
{% set _ = disks.append(parent_device_file[5:]) %}
{% endif %}
{% endif %}
[...]
---
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color collectd 2>&1' with exit code '1': debian:
[...]
--- in /usr/share/php/openmediavault/system/process.inc:220
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(174): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(619): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusBm...', '/tmp/bgoutput5L...')
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#7 /usr/share/openmediavault/engined/rpc/config.inc(195): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
#8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
#11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
#12 {main}
I've found multiple threads with a similar issue (although not exactly the same error) and tried their solutions. The main one was to look for duplicate drive UUIDs in the two following config files, but I did not find any:
/etc/openmediavault/config.xml
/etc/monit/conf.d/openmediavault-filesystem.conf
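The duplicate-UUID check above boils down to counting how often each filesystem UUID appears in those files. Sketched as a small Python helper (hypothetical, not an OMV tool — I actually just searched the files by hand):

```python
import re
from collections import Counter

# Matches filesystem UUIDs like 187d7325-f60d-4d38-8762-54f6a30a20c4
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def duplicate_uuids(text: str) -> list[str]:
    """Return the UUIDs that occur more than once in the given config text."""
    counts = Counter(UUID_RE.findall(text))
    return sorted(u for u, n in counts.items() if n > 1)

# On a real system you would feed it the files listed above, e.g.:
# with open("/etc/openmediavault/config.xml") as f:
#     print(duplicate_uuids(f.read()))
```

In my case this kind of check returns an empty list for both files.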
The error message reports a specific function receiving 2 arguments instead of 1, and the following fsname is given: 008530ff-a134-4264-898d-9ce30eeab927
I've looked everywhere I could think of for this UUID, but it does not seem to be associated with any of my drives (it isn't in the two files listed above either).
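To make sure I read the traceback right: a Python function defined with one positional parameter produces exactly this message when it is called with two, which is what the SLS template does with `salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname, True)`. A minimal sketch (hypothetical, not OMV's real module) — to me this suggests the template and the installed `omv_utils` module are out of sync, but that's my guess:

```python
# The installed Salt module apparently accepts a single argument...
def get_fs_parent_device_file(fs):
    return "/dev/sdi"

# ...while the SLS template calls it with two, reproducing the error:
try:
    get_fs_parent_device_file("008530ff-a134-4264-898d-9ce30eeab927", True)
except TypeError as e:
    print(e)  # get_fs_parent_device_file() takes 1 positional argument but 2 were given
```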
Note that sdi1 is the problematic drive. The output of lsblk -f shows symptoms of the problem:
NAME   FSTYPE FSVER LABEL          UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sda
└─sda1 ext4   1.0   NASHDDRED401   09fdfccc-e399-4cb2-947e-77c060aa8d75    1.6T    54% /srv/dev-disk-by-uuid-09fdfccc-e3
sdb
└─sdb1 ext4   1.0   NASHDDRED402   7de043af-3dc1-4647-a57b-58c5f74c3b58      1T    71% /srv/dev-disk-by-uuid-7de043af-3d
sdc
└─sdc1 ext4   1.0   DKTPHDDBLUE201 1ea657f3-f829-4b2a-a2fe-a587a6b16891  540.9G    70% /srv/dev-disk-by-uuid-1ea657f3-f8
sdd
└─sdd1 ext4   1.0   NASHDDRED403   45f1e85e-9be2-456c-b00a-f952c53e506f    1.6T    54% /srv/dev-disk-by-uuid-45f1e85e-9b
sde
└─sde1 ext4   1.0   DKTPHDDBLUE401 ee31dd44-65f1-4720-a1f4-939678dde8de      1T    71% /srv/dev-disk-by-uuid-ee31dd44-65
sdf
├─sdf1 vfat   FAT32                419D-27EF                             510.8M     0% /boot/efi
├─sdf2 ext4   1.0                  e602aa98-6b64-4321-969a-fc2d9369d039  192.2G     4% /
└─sdf3 swap   1                    f8f1a1e4-9262-4e13-800b-d76a85c442bc                [SWAP]
sdg
└─sdg1 ext4   1.0                  bef1b06d-cbc9-41ee-b156-a2f16b1bb78b  572.8G    37% /srv/dev-disk-by-uuid-bef1b06d-cb
sdh
└─sdh1 ext4   1.0                  4548d2d0-1bd0-4ac4-9f6a-f8f49743fa26  572.8G    37% /srv/dev-disk-by-uuid-4548d2d0-1b
sdi
└─sdi1 ext4   1.0   RemoteBackup   187d7325-f60d-4d38-8762-54f6a30a20c4  978.5G    79% /srv/dev-disk-by-uuid-187d7325-f6rooroot@rroorooroot@sroroot@rorrrrootrroroorrrrrorootrorrrrroorrrrroot@root@sroororroorrororrooroorrorooroot@serroot@root@rrrootrororororoot@sroot@serrr
Important note: I can still access the disk manually at the actual mount point (/srv/dev-disk-by-uuid-187d7325-f60d-4d38-8762-54f6a30a20c4/) over SFTP, even though lsblk reports this garbled mount point.
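In other words, the path I can reach over SFTP is exactly what OMV's usual /srv/dev-disk-by-uuid-&lt;UUID&gt; naming convention predicts for the UUID lsblk shows on sdi1 (a trivial sketch, just to show the comparison I made):

```python
def expected_mountpoint(fs_uuid: str) -> str:
    # OMV mounts data filesystems under /srv/dev-disk-by-uuid-<UUID>
    return f"/srv/dev-disk-by-uuid-{fs_uuid}"

# UUID reported by lsblk for sdi1:
print(expected_mountpoint("187d7325-f60d-4d38-8762-54f6a30a20c4"))
# /srv/dev-disk-by-uuid-187d7325-f60d-4d38-8762-54f6a30a20c4
```

So the filesystem itself is mounted where it should be; only what lsblk prints for MOUNTPOINT is garbled.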
I would like to get to the bottom of this, since I suspect the problem comes from my config and not from the drive itself (wiping it and redoing the entire backup would probably work, but the problem could come back).
General info:
- OMV is up to date.
- OMV is installed directly on a PC; it is not running in a container and not using the Proxmox kernel.
Previously tried solutions that gave the same error:
- Unmounting and unplugging the drive, then replugging and remounting it.
- Unmounting and unplugging the drive, rebooting, then replugging and remounting it.
- Rebooting after applying the changes and getting the error, with the drive kept plugged in.
- Installing the drive in a different PC, where it runs fine.
Thanks for your help, you guys are awesome!
AK