Repairing shares on ZFS disks after a failed kernel upgrade


    • I tried to update the backport kernel the other day, which did not work, so I reverted to 4.14. I had to use dpkg-reconfigure to rebuild the ZFS modules. My ZFS volumes are being mounted again now, but OMV seems to have lost track of them. If I try to edit my shares, I get:

      Failed to execute XPath query '//system/fstab/mntent[uuid='0a27a87b-80c8-4b53-b701-1664a7189ed7']'.

      I assume that the UUIDs of my ZFS drives changed somewhere in all of this. How do I get OMV to use the correct, current UUID (and how do I find out what that UUID is)? Thanks.
    • I had the same issue during the downgrade to 4.14.
      I can confirm that the UUIDs of the ZFS filesystems changed.

      I fixed the issue by editing the config file stored at /etc/openmediavault/config.xml.

      I created a test shared folder in each filesystem to get the new UUID. blkid does not show ZFS UUIDs, so I decided to go this way.

      Search for the tag "sharedfolder"; you will find a construct like this:

      Source Code

      <sharedfolder>
        <uuid>beefbae3-4fb8-6a47-bac9-64aed8bf57f7</uuid>
        <name>GOLD</name>
        <comment/>
        <mntentref>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</mntentref>
        <reldirpath>/</reldirpath>
        [...]
      </sharedfolder>

      Search for the new shared folder and copy its mntentref value; this is the UUID of your ZFS filesystem.
      After that, search for the existing old shared folders and replace their mntentref with the copied new value.

      Save the file and confirm the config change in the web interface. (I had an outstanding confirmation.)
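
      If many shared folders reference the same stale UUID, a sed one-liner can do all the replacements at once. This is just a sketch: back up config.xml first, OLD-UUID-HERE is a placeholder for your stale value, and the replacement shown is the example mntentref from above.

      Source Code

      cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
      sed -i 's|<mntentref>OLD-UUID-HERE</mntentref>|<mntentref>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</mntentref>|g' /etc/openmediavault/config.xml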



      There is also a "mntent" section with all your mountpoints and their corresponding UUIDs. Maybe you can get the UUID from this section without adding a new shared folder.

      Good Luck!

      Source Code

      <mntent>
        <uuid>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</uuid>
        <fsname>data/GOLD</fsname>
        <dir>/data/GOLD</dir>
        <type>zfs</type>
        <opts>rw,relatime,xattr,noacl</opts>
        <freq>0</freq>
        <passno>0</passno>
        <hidden>1</hidden>
      </mntent>
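
      If you don't want to create a throwaway shared folder, the mntent UUIDs can also be read straight out of config.xml. A sketch, assuming xmlstarlet is installed (apt install xmlstarlet):

      Source Code

      # list every mount entry as: uuid  fsname  dir
      xmlstarlet sel -t -m "//system/fstab/mntent" \
        -v "concat(uuid, '  ', fsname, '  ', dir)" -n \
        /etc/openmediavault/config.xml
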
    • Hi,

      I had the same issue as above and couldn't roll back, so I just wanted to share my fix in case it helps anyone else. After the automatic upgrade, the system installed the new 4.15 kernel from the stretch backports, and my ZFS stopped working because the system could not recompile the DKMS packages. The reason was that the Linux headers were held back due to broken, or rather missing, package links.

      The 4.15 kernel relies on a newer GCC, so to fix this you can just get the compiler package from the backports:
      apt install -t stretch-backports linux-compiler-gcc-6-x86
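
      To check that the headers are actually installable again before rebuilding, something like this should work (linux-headers-amd64 is the Debian metapackage name; adjust if you track a specific flavour):

      apt update
      apt list --upgradable | grep linux-headers
      apt install -t stretch-backports linux-headers-amd64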

      After this I had to re-install the ZFS plugin in OMV as it was no longer active; if yours still is, you can just run:
      dpkg-reconfigure zfs-dkms
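
      To verify the module actually rebuilt for the running kernel (a rough check, not OMV-specific):

      dkms status zfs
      modprobe zfs && zpool status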

      My next issue was that the ZFS pools were not active. To resolve this I deactivated all the plugins, removed the folders that OMV had created in place of the ZFS mounts, and re-imported the pools in OMV. (I made sure the folders were completely empty before deleting them.)
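
      The command-line equivalent, in case the GUI import fails, would be roughly this (the pool name "data" and the path are just the example from the earlier post):

      Source Code

      rmdir /data/GOLD        # only succeeds if the stale directory is empty
      zpool import            # lists pools available for import
      zpool import data
      zfs mount -a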

      The next issue was the same as above: I did not have entries, or the right UUIDs, in the config.xml, so I edited /etc/openmediavault/config.xml and added entries matching my pools and the UUIDs from the shared folders.

      <mntent>
        <uuid>Shared Folder mntentref UUID</uuid>
        <fsname>Share Name</fsname>
        <dir>Share Mount Path</dir>
        <type>zfs</type>
        <opts>rw,relatime,xattr,noacl</opts>
        <freq>0</freq>
        <passno>0</passno>
        <hidden>1</hidden>
      </mntent>

      After this I re-enabled the plugins. I did have a problem with PHP-FPM from nginx not being able to open the socket file in /var/run; to resolve this I just changed something in the PHP pools in nginx and re-saved it, which re-created the socket file in /var/run.
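
      For anyone hitting the same socket problem, restarting the FPM service should also re-create the socket; a sketch, assuming the stretch default PHP 7.0 (adjust the unit name to your version):

      ls -l /var/run/php*
      systemctl restart php7.0-fpm nginx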

      Everything now works again. The only issue I see with the new kernel is that the ZFS plugin is not automatically creating the <mntent> entries in config.xml, not sure why. I repeated this on a clean build of OMV:
      1. Install OMV
      2. apt install -t stretch-backports linux-compiler-gcc-6-x86
      3. apt update and apt dist-upgrade
      4. Install OMV-Extras and the ZFS plugin
      5. Create a new pool
      6. After creating the new pool I am not prompted to Save as in previous releases, so the mntent entries are not created (see the quick check after this list).
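
      A quick way to check whether the plugin wrote the entries (this just greps config.xml for ZFS mount entries):

      grep -B1 -A8 "<type>zfs</type>" /etc/openmediavault/config.xml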


      Hope this helps someone else with the same issue; it took me quite a few hours to isolate and resolve each problem.
    • OK, so copying the relevant UUID from the mntent section to the share section does the trick BUT ...

      For some reason one of my ZFS filesystems has no mntent entry. It cannot be selected from the filesystems dropdown.

      Would it be safe to create an mntent entry for it using the old mntent entry? Is there a way to generate a new UUID to put in an mntent entry?
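
      For reference, a fresh v4 UUID can be generated on the command line (uuidgen ships in Debian's uuid-runtime package), though whether OMV accepts a hand-written mntent entry built around it is exactly the open question here:

      uuidgen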
    • Using the old UUID did not work. As there wasn't much on the relevant filesystem, I ended up destroying it on the command line then re-creating it in the GUI. My root zpool still has no UUID, but since I don't stick anything in the root except file systems, that's not so much of an issue. Thanks to everyone who helped out.
    • christiscarborough wrote:

      For some reason one of my ZFS filesystems has no mntent entry. It cannot be selected from the filesystems dropdown.

      I know you solved it by recreating the filesystem, but maybe this helps others:

      It should be enough to go to the Filesystems section in the web interface and click "Mount". The ZFS filesystem is already mounted, but in my case this generated the entry in the XML.