Unable to mount existing ZFS pool

  • Hi everyone,

    after trying out TrueNAS SCALE I've switched back to OMV.

    I do want to keep using ZFS, which is why I installed the Proxmox kernel as well as the zfs-plugin.


    I am able to import the pool; however, upon trying to mount it (mount existing filesystem) I'm getting an error:

    Code
    400 - Bad Request
    id: The value "wd8tb" does not match exactly one schema of
    [{"type":"string","format":"fsuuid"},
    {"type":"string","format":"devicefile"}].


    How can I get my existing pool mounted?


    I have tried renaming the pool to get rid of the number, but it still fails. I have created a new pool via the zfs-plugin that mounts without an issue, and I can't seem to find a difference between the two.


    Any help is appreciated!


    OMV: 7.2.1-1

    ZFS: zfs-2.2.3-pve2

  • Import is working as expected, from the shell and the GUI alike.

    I can see the pool in the ZFS addon (and in zpool status) and am only having issues automounting it with OMV.


    Manually mounting it from the shell also works without an error.
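
    Just to be clear about what I mean by importing from the shell, this is roughly the command I run (pool name as it appears in the error above):

    Code
    zpool import wd8tb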

  • Initially it did have incompatibilities because vdev_zaps_v2 was missing. After I switched to the Proxmox kernel, ZFS was recent enough to support that feature.


  • So if I understood correctly, your problem is that the zpool is not mounted after an OMV reboot, right?



    Post the zpool history so we can see how wdbigboi was created in the first place, and I'll try to find out how to import it using the full path.
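
    For example, something along these lines should show the commands used when the pool was set up (pool name as mentioned above):

    Code
    zpool history wdbigboi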

  • I'm unable to mount it in the OMV GUI at all, but yes, it also does not get remounted on boot.

    The remount on boot will be another problem because the pool is encrypted and I need to automate the unlocking, but for the moment it would already help if mounting worked after loading the key manually with zfs load-key.
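
    For reference, the manual sequence I mean is roughly this (whether the key sits on the pool root or on child datasets is left open here, hence the -a variants):

    Code
    zfs load-key -a      # or: zfs load-key <dataset>; prompts for the passphrase or reads the keylocation
    zfs mount -a         # mounts everything whose key is loaded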


    History (I removed some parts where containers were created so as not to spam the log too much):

  • Was the OMV zfs-plugin written before native ZFS encryption was introduced? In any case, the zfs-plugin does not support encryption; you have to use the CLI. So you would have to execute zfs load-key and zfs mount at the CLI, as there is no automounting of encrypted ZFS pools in OMV.


    A solution is to create a systemd zfs-load-key service as shown in this thread:
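
    A minimal sketch of such a unit, assuming the key is stored in a file the service can read (pool layout, paths and ordering below are assumptions; the linked thread has the exact version):

    Code
    # /etc/systemd/system/zfs-load-key.service
    [Unit]
    Description=Load ZFS encryption keys
    DefaultDependencies=no
    After=zfs-import.target
    Before=zfs-mount.service
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/sbin/zfs load-key -a
    [Install]
    WantedBy=zfs-mount.service

    After saving it, the unit would be enabled with systemctl enable zfs-load-key.service so the keys are loaded before zfs-mount.service runs on boot.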


  • Yeah, I don't see any encryption options in the addon.

    But it does not make a difference when trying to mount it anyway. Even if the key is loaded and manually mounting it works, I'm still getting the "bad request" error when trying to do it in OMV.

  • I suspect there is no solution until the plugin is updated to support encryption, and that will probably take a long time.


    From your response I assume that you can access your encrypted data but it is not reflected correctly in the web GUI, is that correct?

  • Yes, I do have access.

    After manually loading the key with zfs load-key I am able to mount the ZFS datasets from the command line.


    But I can never mount them in the OMV GUI. It does not matter whether the key is loaded or not, I always get the "string does not match" error.


    I'm not sure this is a problem in ZFS though, as regular mounting works once the key is loaded.

    This error message is either completely misleading, or OMV is running into another issue well before it even tries to mount the ZFS datasets.

  • After giving up on mounting this existing pool, I have now wiped the drives (log and data) via OMV.

    I then wanted to create a new pool in OMV, but the drive does not show up in the selector:




    /dev/sdc is missing completely.


    Creating a filesystem on it and mounting it works perfectly fine (ext4), but no ZFS allowed, it seems.


    Any clues as to why? I can't imagine it has anything to do with the size; sdb is also the only drive left, so it can't be hiding drives that would be too big to mirror, even when pool type "basic" is selected.


    I don't know how to go forward. If nothing comes of this thread, I'm probably going back to ext4 and scrapping any ZFS experiments for now.
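
    If it helps, I can post the output of checks like these to see whether anything is still flagged on the disk (device name taken from above; the labelclear would only be relevant if an old ZFS label actually shows up):

    Code
    lsblk -f /dev/sdc            # list any remaining filesystem signatures
    wipefs --no-act /dev/sdc     # show signatures without erasing anything
    zpool labelclear -f /dev/sdc # clear a leftover ZFS label, if one is reported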

  • For documentation purposes:

    I have now created the pool from the command line and am using it successfully in OMV.

    After creation it shows up in the zfs-plugin and can be used further in OMV when creating shares etc.


    I created the pool and added a log device with

    Code
    zpool create wdbigboi /dev/sdc
    zpool add wdbigboi log /dev/sdb
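
    If I redo this with encryption enabled from the start, the create step would look roughly like this instead (keyformat/keylocation here are assumptions, not what I actually ran):

    Code
    zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt wdbigboi /dev/sdc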
