Errors importing ZFS pool with datasets mountpoint property set to 'legacy' or 'none'

  • Hi,


    I'm new to OMV, so this is my first albeit lengthy post. I'm currently in the process of building my own NAS for home use and am evaluating FreeNAS, NAS4Free, OMV 3.x and napp-it. My test system has a small boot drive and 4 x 4TB data drives that I have set up as a RAIDZ1 ZFS pool.


    I initially created the ZFS pool named "naspool" in FreeNAS 9.10. I have now installed the latest OMV 3.x version and ZFS plugin on my test system and am getting errors importing the ZFS pool via the GUI. I get the following error:


    The configuration has been changed. You must apply the changes in order for them to take effect.
    The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'fsname' and value 'naspool/.system' already exists.


    Error #0:
    exception 'OMV\AssertException' with message 'The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'fsname' and value 'naspool/.system' already exists.' in /usr/share/php/openmediavault/config/database.inc:469
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/fstab.inc(118): OMV\Config\Database->assertIsUnique(Object(OMV\Config\ConfigObject), 'fsname')
    #1 [internal function]: OMVRpcServiceFsTab->set(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('set', Array, Array)
    #4 /usr/share/omvzfs/Utils.php(343): OMV\Rpc\Rpc::call('FsTab', 'set', Array, Array)
    #5 /usr/share/openmediavault/engined/rpc/zfs.inc(200): OMVModuleZFSUtil::addMissingOMVMntEnt(Array)
    #6 [internal function]: OMVRpcServiceZFS->getObjectTree(Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #8 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('getObjectTree', Array, Array)
    #9 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('ZFS', 'getObjectTree', Array, Array, 1)
    #10 {main}


    From the command line I can see that the pool was successfully imported, but it doesn't show up on the Storage->ZFS screen. However, on the Storage->Filesystem screen you can also see that the ZFS filesystems (datasets) are mounted.


    [mynas ~]# df
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda1 15659952 1552956 13288464 11% /
    udev 10240 0 10240 0% /dev
    tmpfs 1639172 8940 1630232 1% /run
    tmpfs 4097924 0 4097924 0% /dev/shm
    tmpfs 5120 0 5120 0% /run/lock
    tmpfs 4097924 0 4097924 0% /sys/fs/cgroup
    tmpfs 4097924 16 4097908 1% /tmp
    naspool 5723008 0 5723008 0% /naspool
    naspool/Documents 5723008 0 5723008 0% /naspool/Documents
    naspool/Downloads 5767808 44800 5723008 1% /naspool/Downloads
    naspool/home 262144 128 262016 1% /naspool/home
    naspool/jails 5723008 0 5723008 0% /naspool/jails
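
    For reference, the import itself can also be confirmed with the standard zpool commands (nothing OMV-specific, just how I double-checked; output omitted here):

    [mynas ~]# zpool list naspool
    [mynas ~]# zpool status naspool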


    A closer look at the ZFS pool and datasets from the command line shows that some datasets have the mountpoint property set to "legacy", which was carried over from the FreeNAS evaluation.


    [mynas ~]# zfs list -r naspool -o name,mountpoint
    NAME MOUNTPOINT
    naspool /naspool
    naspool/.system legacy
    naspool/.system/configs-7f4d67ae16c94917b949456bb9f364ad legacy
    naspool/.system/cores legacy
    naspool/.system/rrd-7f4d67ae16c94917b949456bb9f364ad legacy
    naspool/.system/samba4 legacy
    naspool/.system/syslog-7f4d67ae16c94917b949456bb9f364ad legacy
    naspool/Documents /naspool/Documents
    naspool/Downloads /naspool/Downloads
    naspool/home /naspool/home
    naspool/jails /naspool/jails


    If I change the mountpoint property of all the naspool/.system[/*] datasets to a unique path (/naspool/<ds_name>) and then manually export the pool, it is possible to import the ZFS pool via the GUI. If the mountpoint property is set to "none", I also get an error like the one above.
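
    For completeness, the commands I used were roughly the following (a sketch using the dataset names from the listing above, not an exact transcript):

    [mynas ~]# zfs set mountpoint=/naspool/.system naspool/.system
    [mynas ~]# zfs set mountpoint=/naspool/.system/cores naspool/.system/cores
    (... and likewise for the remaining naspool/.system/* datasets ...)
    [mynas ~]# zpool export naspool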


    As a workaround I set the mountpoint property of all naspool/.system[/*] datasets to a valid path and also set the "canmount" property of each dataset to "noauto", which has a similar effect to having the mountpoint property set to "legacy"; i.e. you need to mount the filesystems explicitly/manually.
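
    In shell terms that was roughly the following (again only a sketch, not an exact transcript):

    [mynas ~]# zfs set mountpoint=/naspool/.system naspool/.system
    [mynas ~]# zfs set canmount=noauto naspool/.system
    (... and likewise for the remaining naspool/.system/* datasets ...)
    [mynas ~]# zpool export naspool


    This time, when importing the pool via the GUI, I get the following error: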


    The configuration has been changed. You must apply the changes in order for them to take effect.
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mount -v '/naspool/.system' 2>&1' with exit code '1': mount: can't find /naspool/.system in /etc/fstab


    Error #0:
    exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mount -v '/naspool/.system' 2>&1' with exit code '1': mount: can't find /naspool/.system in /etc/fstab' in /usr/share/php/openmediavault/system/process.inc:174
    Stack trace:
    #0 /usr/share/php/openmediavault/system/mountpoint.inc(135): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/module/fstab.inc(71): OMV\System\MountPoint->mount()
    #2 /usr/share/openmediavault/engined/rpc/config.inc(189): OMVModuleFsTab->startService()
    #3 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(150): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(517): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatuscB...', '/tmp/bgoutput21...')
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(151): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #8 /usr/share/openmediavault/engined/rpc/config.inc(208): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #9 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #11 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #12 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #13 {main}


    Can a bug ticket be raised (I don't know how to do this yet) so that the ZFS plugin will be able to handle all possible values for the mountpoint property, which are "mountpoint=path | none | legacy" as per the 'zfs' man page?


    Cheers,
    Günter

    • Official post

    @luxflow


    You can file issues on github as well.


  • The bug will not be fixed soon (I'm currently busy and still thinking about how to handle it; there is an issue in the OMV 3.x core that makes this hard to handle).
    But there is a workaround: open `/etc/openmediavault/config.xml` (e.g. `vim /etc/openmediavault/config.xml`) and search for `fstab`.
    There you can see <mntent>/<dir> entries; you can change <dir> manually to the correct mountpoint (change it to `none` if the mountpoint is none), roughly as sketched below.
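
    For illustration, an fstab entry in config.xml looks roughly like this (simplified, element names from memory; your entries will differ):

    <fstab>
      <mntent>
        <fsname>naspool/.system</fsname>
        <dir>/naspool/.system</dir>
        <type>zfs</type>
        ...
      </mntent>
    </fstab>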

