ZFS plugin: "mountpoint not unique"

  • I have a Debian machine that boots from a zpool on a pair of mirrored 1 TB disks.
    A third 4 TB disk with the zpool 'ariadne' is available for sharing:



    Now, when I select the ZFS plugin it gives me this error:



    my /etc/fstab reads:

    Code
    # UNCONFIGURED FSTAB FOR BASE SYSTEM
    /dev/zvol/rpool/swap none swap defaults 0 0
    # >>> [openmediavault]
    # <<< [openmediavault]


    It seems that every time I use OMV a new <mntent> is appended to config.xml, which fails because the <dir> value is not unique.
    Whenever I delete such an entry, a new one pops up:


    Code
    <mntent>
            <uuid>b63450e7-4c63-43e4-aefc-262d57d28e90</uuid>
            <fsname>rpool</fsname>
            <dir>/</dir>
            <type>zfs</type>
            <opts>rw,relatime,xattr,noacl</opts>
            <freq>0</freq>
            <passno>0</passno>
            <hidden>1</hidden>
          </mntent>


    One strange thing is that the UUIDs change each time and are unfamiliar, that is to say they cannot be found under /dev/disk/...
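For illustration, here is a minimal sketch of how such colliding <dir> entries could be detected in a config.xml of this shape. The sample document below is hypothetical, modelled on the <mntent> excerpt above, not taken from a real OMV installation:

```python
# Sketch: detect <mntent> entries whose <dir> values collide in an
# OMV-style config.xml. The XML layout is assumed from the excerpt
# above; the sample document here is hypothetical.
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_dirs(xml_text):
    """Return <dir> values that appear in more than one <mntent>."""
    root = ET.fromstring(xml_text)
    dirs = [m.findtext("dir") for m in root.iter("mntent")]
    return sorted(d for d, n in Counter(dirs).items() if n > 1)

sample = """<config>
  <mntent><fsname>rpool</fsname><dir>/</dir><type>zfs</type></mntent>
  <mntent><fsname>rpool/ROOT/debian</fsname><dir>/</dir><type>zfs</type></mntent>
  <mntent><fsname>ariadne</fsname><dir>/ariadne</dir><type>zfs</type></mntent>
</config>"""

print(duplicate_dirs(sample))  # ['/']
```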


    Why does OMV keep inserting entries?
    How can I set the list of zfs pools straight?
    Does this behaviour have to do with the existing ZFS pools at all?
    Is booting from ZFS a problem here?


    Kind regards


    birnbacs

    • Official post

    I think booting from ZFS is the issue. This was never accounted for, but I need to confirm it with testing.


    For every dataset and pool the plugin inserts an entry in the db.


    Now, looking at the error and the zpool list, I see there are two datasets pointing to the same mount point; maybe that's the issue. The filesystem backend of OMV doesn't allow this, because when parsing the database all objects must be unique.
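Such a collision can be spotted from captured `zfs list -H -o name,mountpoint` output. A minimal sketch, assuming tab-separated `-H` output; the sample lines are hypothetical, mirroring the rpool layout discussed in this thread:

```python
# Sketch: group datasets by mountpoint from captured
# `zfs list -H -o name,mountpoint` output (tab-separated, one
# dataset per line) and report mountpoints claimed more than once.
from collections import defaultdict

def mountpoint_collisions(zfs_list_output):
    """Return {mountpoint: [dataset names]} for shared mountpoints."""
    by_mount = defaultdict(list)
    for line in zfs_list_output.strip().splitlines():
        name, mountpoint = line.split("\t")
        # "none", "legacy" and "-" are not real mount directories.
        if mountpoint not in ("none", "legacy", "-"):
            by_mount[mountpoint].append(name)
    return {mp: names for mp, names in by_mount.items() if len(names) > 1}

sample = (
    "rpool\t/\n"
    "rpool/ROOT\tnone\n"
    "rpool/ROOT/debian\t/\n"
    "ariadne\t/ariadne\n"
)
print(mountpoint_collisions(sample))  # {'/': ['rpool', 'rpool/ROOT/debian']}
```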

  • Agreed. IMO the plugin's task is only to manage the ZFS entities, so enforcing the guidelines is not up to it (and probably would not work anyway). On the other hand, OMV does not support ZFS deeply enough to enforce the guidelines over ZFS peculiarities.
    It seems to me that OMV should be able to work with ZFS datasets even if the policy cannot be enforced properly. After all, a ZFS-booted machine is not a standard install and does not need 100% accounting for.

  • I would have loved to go for this option but the two conflicting ZFS elements are required for booting and I don't see how I could do without any one of them.


    For installation I followed the (very good) instructions on
    https://github.com/zfsonlinux/…ebian-Stretch-Root-on-ZFS
    so maybe I will not be the only one with a layout like this.


    Apart from ZFS administration OMV is completely usable without the plugin and there is no protection against sharing system folders. Being serious about the guideline will require handling it in core functionality.


    I guess the usage rule "don't share the system disk" will have to be rephrased, as a disk does not mean much on ZFS.

    • Official post

    I followed this guide a few years ago for testing. Reading it, I do not see why rpool needs to be pointed to /; rpool/ROOT/debian points there correctly, but the other one could be anywhere without pointing to /, IMO. Correct me if I am wrong and I missed something.



    Apart from ZFS administration OMV is completely usable without the plugin and there is no protection against sharing system folders. Being serious about the guideline will require handling it in core functionality.

    I believe the changes need to be done at plugin level, but that's for more advanced developers. I have only submitted small patches to the plugin; this requires more work, IMO.


    Without the plugin the web panel is useless, as you won't be able to create shared folders. If you have a small spare disk around, you can format it as ext4 and mount it there to have shared folders available without the plugin.

    I guess the usage rule "don't share the system disk" will have to be rephrased, as a disk does not mean much on ZFS.

    Well, the small patch I submitted was to cover this: it prevents using any existing folder, given the fact that when datasets are created they generate the mount folder. This case is different, as the datasets are already defined.


    https://github.com/OpenMediaVa…penmediavault-zfs/pull/35


    edit: I see point 2.3 creates rpool at /; still curious whether the system will boot without that mountpoint option.

  • Without the plugin the web panel is useless as you won't be able to create shared folders.

    I may be getting you wrong here, but with the plugin installed yet effectively nonfunctional, all sharing functions are available: creating shared folders, assigning privileges and all. What is missing is creating and maintaining datasets, making snapshots, etc.



    edit: I see point 2.3 creates rpool at /; still curious whether the system will boot without that mountpoint option.

    Me too, but I would hate to render my system unbootable. Not sure how to change that back from a live USB install.


    Let's see if I can work up the courage today to try changing the / mountpoint as explained here:
    https://unix.stackexchange.com…ount-point-for-a-zfs-pool


    That way I could see what is inside and whether or not it is necessary for booting.


    • Official post

    The plugin implements a filesystem backend, which means every dataset and pool is registered in the internal database of OMV. This creates the volumes you see in the shared-folders dropdown menu. If you deleted the plugin, the dataset entries in the OMV database might still be there (I'm not sure whether postrm deletes them at plugin removal), but for sure, if you go to the filesystems section with zfs entries still in the database and no plugin installed, you should see an error like "no filesystem support for zfs".
