Repairing shares on ZFS disks after a failed kernel upgrade

  • I tried to update the backport kernel the other day, which did not work, so I reverted to 4.14. I had to use dpkg-reconfigure to rebuild the ZFS modules. My ZFS volumes are being mounted again now, but OMV seems to have lost track of them. If I try to edit my shares, I get:


    Failed to execute XPath query '//system/fstab/mntent[uuid='0a27a87b-80c8-4b53-b701-1664a7189ed7']'.


    I assume that the UUID of my ZFS drives has changed due to all of the changes. How do I get OMV to use the correct, current UUID (and how do I find out what the correct UUID is)? Thanks.

  • I had the same issue during the downgrade to 4.14.
    I can confirm that the UUIDs of the ZFS filesystems changed.


    I fixed the issue by editing the config file stored at /etc/openmediavault/config.xml.


    I created a test shared folder in each filesystem to get the new UUID. blkid does not show the ZFS UUIDs, so I went this route instead.
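
    Before editing, it is probably wise to keep a backup copy of the file, for example:

    Code
    # keep a copy of the OMV database before hand-editing it
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak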


    Search for the tag "sharedfolder"; you will find a construct like this:

    Code
    <sharedfolder>
      <uuid>beefbae3-4fb8-6a47-bac9-64aed8bf57f7</uuid>
      <name>GOLD</name>
      <comment/>
      <mntentref>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</mntentref>
      <reldirpath>/</reldirpath>
      [...]
    </sharedfolder>


    Search for the new shared folder and copy its mntentref value. This is the UUID of your ZFS filesystem.
    After that, search for the existing old folders and replace their mntentref with the copied new value.
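
    If you prefer the shell, a search-and-replace over the file should do the same thing. OLD-UUID and NEW-UUID below are just placeholders for your old and new mntentref values:

    Code
    # replace every occurrence of the old mntentref with the new one
    # (OLD-UUID and NEW-UUID are placeholders, use your own values)
    sed -i 's/OLD-UUID/NEW-UUID/g' /etc/openmediavault/config.xml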


    Save the file and confirm the config change in the web interface. (I had an outstanding confirmation.)




    There is also a section "mntent" with all your mount points and the corresponding UUIDs. Maybe you can get the UUID from this section without adding a new shared folder.


    Good Luck!


    Code
    <mntent>
      <uuid>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</uuid>
      <fsname>data/GOLD</fsname>
      <dir>/data/GOLD</dir>
      <type>zfs</type>
      <opts>rw,relatime,xattr,noacl</opts>
      <freq>0</freq>
      <passno>0</passno>
      <hidden>1</hidden>
    </mntent>
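
    If xmlstarlet is installed (apt install xmlstarlet), you can also list every mntent entry directly from the shell instead of reading through the XML by hand. A rough sketch:

    Code
    # print uuid, fsname and mount dir of each mntent entry
    xmlstarlet sel -t -m "//system/fstab/mntent" \
      -v uuid -o "  " -v fsname -o "  " -v dir -n \
      /etc/openmediavault/config.xml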
  • Hi,


    I had the same issue as above and couldn't roll back, so I just wanted to share my fix in case it helps anyone else. After an automatic upgrade, the system installed the new 4.15 kernel from the stretch backports and my ZFS stopped working because the system could not recompile the DKMS packages. The reason was that the Linux headers were held back due to broken, or rather missing, package links.


    The 4.15 kernel relies on a later gcc compiler, so to fix this you can just get the gcc package from the backports:
    apt install -t stretch-backports linux-compiler-gcc-6-x86
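
    If the headers themselves are still held back after that, pulling them in explicitly from the backports should help as well (linux-headers-amd64 is the usual Debian meta package for 64-bit systems; adjust it for your architecture):

    Code
    apt install -t stretch-backports linux-headers-amd64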


    After this I had to re-install the ZFS plugin in OMV as it was no longer active. If yours is still active, you can just run:
    dpkg-reconfigure zfs-dkms
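
    A quick way to verify that the modules were actually rebuilt for the new kernel:

    Code
    dkms status                        # should list the zfs (and spl) modules as installed
    modprobe zfs && lsmod | grep zfs   # confirm the module loads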


    My next issue was that the ZFS pools were not active. To resolve this I deactivated all the plugins, removed the folders that OMV had created in place of the ZFS mounts, and re-imported the pools in OMV. (I made sure the folders were completely empty before deleting them.)
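
    For reference, the pools can also be listed and imported from the shell if the GUI import does not cooperate ("tank" is just a placeholder name):

    Code
    zpool import            # with no arguments, lists pools available for import
    zpool import tank       # import a pool by name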


    The next issue was the same as above: I did not have entries, or the right UUIDs, in config.xml. So I edited /etc/openmediavault/config.xml and added entries matching my pools and the UUIDs from the shared folders.


    <mntent>
      <uuid>Shared Folder mntentref UUID</uuid>
      <fsname>Share Name</fsname>
      <dir>Share Mount Path</dir>
      <type>zfs</type>
      <opts>rw,relatime,xattr,noacl</opts>
      <freq>0</freq>
      <passno>0</passno>
      <hidden>1</hidden>
    </mntent>
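
    After hand-editing it may be worth checking that the file is still well-formed XML before OMV reads it, e.g. with xmllint from the libxml2-utils package:

    Code
    xmllint --noout /etc/openmediavault/config.xml && echo "config.xml is well-formed"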


    After this I re-enabled the plugins. I did have a problem with PHP-FPM from NGINX not being able to open the socket file in /var/run; to resolve this I just changed a setting in the PHP pools in NGINX and re-saved it, which re-created the socket file in /var/run.


    Everything works again now. The only issue I see with the new kernel is that the ZFS plugin is not automatically creating the <mntent> entries in config.xml; I'm not sure why. I reproduced this on a clean build of OMV:

    • Install OMV
    • apt install -t stretch-backports linux-compiler-gcc-6-x86
    • apt update and apt dist-upgrade
    • Install OMV extras and the ZFS plugin
    • Create a new pool
    • After creating the new pool I am not prompted to save as in previous releases, and so the mntent entries are not created.


    Hope this helps someone else with the same issue; it took me quite a few hours to isolate and resolve each problem.

  • OK, so copying the relevant UUID from the mntent section to the share section does the trick BUT ...


    For some reason one of my ZFS filesystems has no mntent entry. It cannot be selected from the filesystems dropdown.


    Would it be safe to create an mntent entry for it using the old mntent entry? Is there a way to generate a new UUID to put in an mntent entry?

  • Using the old UUID did not work. As there wasn't much on the relevant filesystem, I ended up destroying it on the command line then re-creating it in the GUI. My root zpool still has no UUID, but since I don't stick anything in the root except file systems, that's not so much of an issue. Thanks to everyone who helped out.

  • For some reason one of my ZFS filesystems has no mntent entry. It cannot be selected from the filesystems dropdown.


    I know you solved it by recreating the filesystem, but maybe it helps others:


    It should be enough to go to the Filesystems section in the web interface and click "mount". The ZFS filesystem is already mounted, but in my case this generated the entry in the XML.

    I know you solved it by recreating the filesystem, but maybe it helps others:


    It should be enough to go to the Filesystems section in the web interface and click "mount". The ZFS filesystem is already mounted, but in my case this generated the entry in the XML.

    Sadly, the "mount" button in the filesystems tab seems to be greyed out, so this may not work for everyone.

  • First of all, I WILL NOT GUARANTEE that this works for you as well. I'm not responsible for any data loss if you delete all of your data by not checking your mount points correctly. So take care and make sure the ZFS is not mounted when firing the rm command.



    I solved this problem after restoring my OMV setup with the following steps:


    Installing ZFS again.


    After that I saw my ZFS pool but got no proper information about the storage in the File Systems tab of the web GUI, so I decided to do some further investigation at the shell:


    Getting info about the pool's mountpoint (the pool is named raid):
    zfs get mountpoint raid
    NAME  PROPERTY    VALUE  SOURCE
    raid  mountpoint  /raid  local



    Getting info about the mount state:
    zfs get mounted
    NAME  PROPERTY  VALUE  SOURCE
    raid  mounted   no     -


    So the ZFS pool is not mounted, but the folder /raid is still present on the root filesystem.


    Cross-check it by getting filesystem information:
    df -h
    Filesystem  Size  Used  Avail  Use%  Mounted on
    udev        1.9G     0   1.9G    0%  /dev
    tmpfs       374M   16M   359M    5%  /run
    /dev/sda1   231G   22G   198G   10%  /
    tmpfs       1.9G     0   1.9G    0%  /dev/shm
    tmpfs       5.0M     0   5.0M    0%  /run/lock
    tmpfs       1.9G     0   1.9G    0%  /sys/fs/cgroup
    tmpfs       1.9G     0   1.9G    0%  /tmp
    tmpfs       374M     0   374M    0%  /run/user/0


    No filesystem is present at /raid, so it may just be a plain folder.



    I checked whether the pool was really mounted by changing its mountpoint to /raid2:


    zfs set mountpoint=/raid2 raid


    After that I remounted the pool:
    zfs mount raid


    Checked the mount state and location:
    zfs list
    NAME  USED   AVAIL  REFER  MOUNTPOINT
    raid  3.30T  6.91T  3.30T  /raid2


    After that I created a test file on the old mountpoint and checked whether it was also present on the new mountpoint:


    touch /raid/test.txt


    With ls I checked whether the file is also present at /raid2 (the new mountpoint of the working pool).
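
    For example:

    Code
    # if /raid is still backed by the pool (now mounted at /raid2), the file shows up under both paths;
    # if /raid is just a plain folder on the root filesystem, it only shows up under /raid
    ls -l /raid/test.txt /raid2/test.txt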


    IF the file IS NOT PRESENT, it's likely that the old mountpoint is just a plain folder on the root filesystem (/).


    So I deleted the old mountpoint completely:


    rm -rf /raid


    After that I checked whether any files on the mountpoint /raid2 were damaged. Thank God, no files were lost.


    Then I unmounted the raid pool with:


     zfs umount raid


    Changed the mountpoint back to /raid:


    zfs set mountpoint=/raid raid


    And re-mounted the pool:


    zfs mount raid



    After those steps my ZFS pool was displayed correctly in the ZFS and File Systems tabs of the web GUI. Shared folders were working again.
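
    A final sanity check that everything ended up where it should (pool name "raid" as above):

    Code
    zfs get mounted,mountpoint raid
    df -h /raid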
