Applied OMV update 5.0.11; now saving any config errors with: unable to resolve 'UUID=20005550-3d0c-4d82-9ec5-7e4abb396640' / debian: Data failed to compile

  • I am new to OMV and had no real issues until I applied the OMV 5.0.11 update. Since that update, I receive the following error any time I make a config change, mount a drive, add a NIC, etc.:



    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run collectd 2>&1' with exit code '1':
    findfs: unable to resolve 'UUID=20005550-3d0c-4d82-9ec5-7e4abb396640'
    debian: Data failed to compile:
    ----------
    Rendering SLS 'base:omv.deploy.collectd.plugins.disk' failed: Jinja error: Command '['findfs', 'UUID=20005550-3d0c-4d82-9ec5-7e4abb396640']' returned non-zero exit status 1.
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 393, in render_jinja_tmpl
        output = template.render(**decoded_context)
      File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render
        return original_render(self, *args, **kwargs)
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render
        return self.environment.handle_exception(exc_info, True)
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise
        raise value.with_traceback(tb)
      File "<template>", line 49, in top-level template code
      File "/var/cache/salt/minion/extmods/modules/omv_utils.py", line 165, in get_fs_parent_device_file
        return fs.get_parent_device_file()
      File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 163, in get_parent_device_file
        device = pyudev.Devices.from_device_file(context, self.device_file)
      File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 127, in device_file
        ['findfs', 'UUID={}'.format(self._id)]
      File "/usr/lib/python3/dist-packages/openmediavault/subprocess.py", line 63, in check_output
        return subprocess.check_output(*popenargs, **kwargs)
      File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
        **kwargs).stdout
      File "/usr/lib/python3.7/subprocess.py", line 487, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['findfs', 'UUID=20005550-3d0c-4d82-9ec5-7e4abb396640']' returned non-zero exit status 1.
    ; line 49
    ---
    [...]
    # "dir": "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
    # "freq": 0,
    # "fsname": "008530ff-a134-4264-898d-9ce30eeab927",
    # }
    {% if salt['mount.is_mounted'](mountpoint.dir) %}
    {% set disk = salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname) %}    <======================
    # Extract the device name from '/dev/xxx'.
    {% set _ = disks.append(disk[5:]) %}
    {% endif %}
    {% endfor %}
    # Append the root filesystem.
    [...]
    ---
    in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(60): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(164): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusKD...', '/tmp/bgoutputoI...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(186): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}





    I have tried several suggestions from the forums with no success; any and all assistance is greatly appreciated.
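
    For reference, the failing lookup can be reproduced outside of Salt. A minimal check, run as root, using the UUID from the error above:

    # Resolve the UUID to a block device (prints e.g. /dev/sda1 on success).
    findfs UUID=20005550-3d0c-4d82-9ec5-7e4abb396640
    echo $?   # a non-zero exit status here is exactly what the Jinja template trips over

    # List the filesystem UUIDs that actually exist on this system, for comparison.
    blkid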

  • Okay, here is the output from blkid (thank you for the assist)


    /dev/sdi1: UUID="99FA-43F2" TYPE="vfat" PARTUUID="54d25a3e-fa74-4e53-bc6f-a3e84a618941"
    /dev/sdi2: UUID="9d11ad09-087b-47a1-9a74-07feb7242639" TYPE="ext4" PARTUUID="d40b5223-73d3-4caa-bca5-68453582ed67"
    /dev/sdi3: UUID="da792d56-1a1d-41b8-a30c-8c7b7dd0b190" TYPE="swap" PARTUUID="70336956-3757-4adc-aba1-6b6c7a728a10"
    /dev/sdj1: LABEL="Mega" UUID="15cb5421-3585-4c14-8044-ff61479abf23" TYPE="ext4" PARTUUID="2c0ee77e-f5c2-41cd-a448-07004a3b76c0"
    /dev/sdk1: LABEL_FATBOOT="SP UFD U3" LABEL="SP UFD U3" UUID="F871-4FE0" TYPE="vfat" PARTUUID="d4dd1f34-01"
    /dev/sdg1: LABEL="Sea2" UUID="b6a08aa6-3ba8-4f45-87ac-895f65ce5e30" TYPE="ext4" PARTUUID="8bdb71e9-0c1e-4aef-8e90-9a29ad6a128d"
    /dev/sdf1: LABEL="Sea1" UUID="03058734-e115-481c-a487-149104a34b00" TYPE="ext4" PARTUUID="c5711bc0-ea37-4484-a96e-f7f36392f792"
    /dev/sda1: LABEL="Sea3" UUID="be4a4443-3ed2-4e7e-9562-aa888f4a5087" TYPE="ext4" PARTUUID="c8f00f98-fd7d-447f-8e79-84142013f7f1"
    /dev/sdh1: LABEL="SeaParity" UUID="b2415523-ecef-44ff-aa8b-1f481bfcb6f7" TYPE="ext4" PARTUUID="5b30e738-6593-40ce-b0fa-ac4666bf4ccd"
    /dev/sde1: LABEL="HitachiParity" UUID="5a290932-0b64-4a43-be08-b14b036366f2" TYPE="ext4" PARTUUID="c228ceb1-7e3f-4985-998b-c292749fae83"
    /dev/sdd1: LABEL="Hitachi3" UUID="1bb744e2-fba2-4c78-a320-e0abed60d2e2" TYPE="ext4" PARTUUID="90403866-513e-4c6b-ae5e-16e0bae7294c"
    /dev/sdb1: LABEL="Hitachi1" UUID="f84ffdb4-7700-4a65-8291-2ee595d85831" TYPE="ext4" PARTUUID="9eeec0c1-41b2-4218-a287-307e2f7460d3"
    /dev/sdc1: LABEL="Hitachi2" UUID="6ce4a721-4f0c-4cd9-a333-c3e679aaca23" TYPE="ext4" PARTUUID="7a591bff-30f9-476a-8fbf-4e5c5c07b340"

  • If I cat /etc/fstab, I see the following. The UUID that cannot be resolved above is referenced there in the fuse.mergerfs entry (the last line before the closing openmediavault marker):


    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdi2 during installation
    UUID=9d11ad09-087b-47a1-9a74-07feb7242639 / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1
    # /boot/efi was on /dev/sdi1 during installation
    UUID=99FA-43F2 /boot/efi vfat umask=0077 0 1
    /dev/sdi3 none swap sw 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/Mega /srv/dev-disk-by-label-Mega ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,discard,acl 0 2
    /dev/disk/by-label/Sea1 /srv/dev-disk-by-label-Sea1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Sea2 /srv/dev-disk-by-label-Sea2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Sea3 /srv/dev-disk-by-label-Sea3 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/SeaParity /srv/dev-disk-by-label-SeaParity ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Hitachi1 /srv/dev-disk-by-label-Hitachi1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Hitachi2 /srv/dev-disk-by-label-Hitachi2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Hitachi3 /srv/dev-disk-by-label-Hitachi3 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/HitachiParity /srv/dev-disk-by-label-HitachiParity ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /srv/dev-disk-by-label-Sea1:/srv/dev-disk-by-label-Sea2:/srv/dev-disk-by-label-Sea3:/srv/dev-disk-by-label-Hitachi1:/srv/dev-disk-by-label-Hitachi2:/srv/dev-disk-by-label-Hitachi3 /srv/20005550-3d0c-4d82-9ec5-7e4abb396640 fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=mfs,minfreespace=4G 0 0
    # <<< [openmediavault]
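
    Worth noting: in the fstab above, the UUID from the error appears only as the mergerfs pool's mount directory (/srv/<UUID>), not as a filesystem UUID, and a FUSE pool does not show up in blkid at all, which would explain why findfs cannot resolve it. A quick confirmation sketch, with paths taken from the fstab above:

    # The UUID occurs only in the fuse.mergerfs line of fstab ...
    grep 20005550 /etc/fstab

    # ... and no block device actually carries it as a filesystem UUID.
    blkid | grep 20005550 || echo "no block device with this UUID"

    # Show the pool's mount state, if any.
    findmnt /srv/20005550-3d0c-4d82-9ec5-7e4abb396640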

  • I applied changes to disk.sls (cat of the file below); the error changed, and mergerfs no longer mounts:


    {% set disks = [] %}
    # Get the configured mount points.
    {% set mountpoints = salt['omv_conf.get_by_filter'](
    'conf.system.filesystem.mountpoint',
    {'operator': 'and', 'arg0': {'operator': 'equals', 'arg0': 'hidden', 'arg1: '0'}, 'arg1': {'operator': 'not', 'arg0': {'operator': 'stringContains', 'arg0': 'opts', 'arg1': 'bind'}}}) %}


    # Filter mounted file systems.
    {% for mountpoint in mountpoints %}
    # Example:
    # {
    # "dir": "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
    # "freq": 0,
    # "fsname": "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
    # "hidden": false,
    # "opts": "defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
    # "passno": 2,
    # "type": "ext4",
    # "uuid": "dd838a0f-d39c-4158-afc0-1622bf8cde78"
    # }
    # or
    # {
    # "dir": "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
    # "freq": 0,
    # "fsname": "008530ff-a134-4264-898d-9ce30eeab927",
    # }
    {% if salt['mount.is_mounted'](mountpoint.dir) %}
    {% set disk = salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname) %}
    # Extract the device name from '/dev/xxx'.
    {% set _ = disks.append(disk[5:]) %}
    {% endif %}
    {% endfor %}


    # Append the root filesystem.
    {% set root_fs = salt['omv_utils.get_root_filesystem']() %}
    {% set disk = salt['omv_utils.get_fs_parent_device_file'](root_fs) %}
    {% set _ = disks.append(disk[5:]) %}


    configure_collectd_conf_disk_plugin:
      file.managed:
        - name: "/etc/collectd/collectd.conf.d/disk.conf"
        - source:
          - salt://{{ slspath }}/files/collectd-disk.j2
        - template: jinja
        - context:
            disks: {{ disks | unique | json }}


    The error I receive is below:

    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run collectd 2>&1' with exit code '1':
    debian: Data failed to compile:
    ----------
    Rendering SLS 'base:omv.deploy.collectd.plugins.disk' failed: Jinja syntax error: expected token ':', got 'integer'; line 28
    ---
    [...]
    {% set disks = [] %}
    # Get the configured mount points.
    {% set mountpoints = salt['omv_conf.get_by_filter'](
    'conf.system.filesystem.mountpoint',
    {'operator': 'and', 'arg0': {'operator': 'equals', 'arg0': 'hidden', 'arg1: '0'}, 'arg1': {'operator': 'not', 'arg0': {'operator': 'stringContains', 'arg0': 'opts', 'arg1': 'bind'}}}) %}    <======================
    # Filter mounted file systems.
    {% for mountpoint in mountpoints %}
    # Example:
    # {
    [...]
    ---
    in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(60): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(164): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatus3f...', '/tmp/bgoutputTJ...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(186): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}
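
    Looking at the paste above, the likely cause of the "expected token ':', got 'integer'" error is a typo in the filter dictionary: the key 'arg1: '0' is missing its closing quote and should read 'arg1': '0'. A quick way to spot it, assuming the OMV 5 state file lives under /srv/salt:

    # Find the suspect line in the deployed state file (path assumed for OMV 5).
    grep -n "'arg1: '" /srv/salt/omv/deploy/collectd/plugins/disk.sls
    # The filter key must be quoted as 'arg1': '0' for the Jinja dict literal to parse.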

  • Okay, I re-entered the updated lines in the disk.sls file and rebooted, and that seems to have addressed the error in OMV when applying changes. However, the fuse.mergerfs file system shows as online but does not mount, and that is where all my data and file shares reside. Should I manually mount the file system, e.g. as sketched below, or is something else required?


    Thanks again :)
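
    Since the pool is already defined in /etc/fstab (see above), a manual mount can be attempted from the fstab entry. A minimal sketch, with the mount point taken from the fstab posted earlier:

    # Mount the mergerfs pool from its fstab definition.
    sudo mount /srv/20005550-3d0c-4d82-9ec5-7e4abb396640

    # Or mount everything in fstab that is not yet mounted.
    sudo mount -a

    # Verify the pool is mounted.
    findmnt -t fuse.mergerfs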


  • Same issue as reported, but I'm unable to resolve it with the above fix.



    Running OMV version 5.2.1-1 and unionfilesystems 5.0.2


    Additionally, blkid:


    /dev/sda1: UUID="d1058e63-3ed6-421c-86bb-f542ba88220a" TYPE="ext4" PARTUUID="ea251f6b-01"
    /dev/sda5: UUID="eacea850-7f5a-40d4-877f-f2657260cfe6" TYPE="swap" PARTUUID="ea251f6b-05"
    /dev/sdb1: LABEL="data1" UUID="8128de93-6fec-40e6-ab2b-e1cb339f8fa4" TYPE="ext4" PARTUUID="192e1182-e85f-4282-b391-c575f97f91d5"
    /dev/sdc1: LABEL="data2" UUID="eb206320-60a3-48cf-a0e3-a70ecbf2c5d4" TYPE="ext4" PARTUUID="8f29305e-ab79-4357-a4e1-0e8805b10936"
    /dev/sdd1: LABEL="parity1" UUID="f3bd9609-89ba-4ff6-885c-6c02394a1da8" TYPE="ext4" PARTUUID="b839cfe0-d95e-4157-b836-96ea427e1600"
  • Also having the same issue on a fresh install started today.




    Running OMV version 5.2.1-1, diskstats 5.0.2-1 and unionfilesystems 5.1


    Thanks!
