Creating shared folders leads to ZFS not being mountable

  • Hi,


    I am facing some serious problems. I have a freshly set up OMV3 with the backports kernel 4.7 and the ZFS plugin.
    My drives are running in a RAIDZ1 and everything runs fine, except when I start creating shared folders via the NFS plugin.
    Such a shared folder points to an existing shared folder on the RAID. Nevertheless, it creates an entry in fstab:


    # >>> [openmediavault]
    UUID=51990365-417d-474b-9044-b1703fb50b75 /media/51990365-417d-474b-9044-b1703fb50b75 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /Raid1Pool/Filme /export/Filme none bind 0 0
    # <<< [openmediavault]


    And those lines have a rather bad effect on boot: they prevent the ZFS RAID from being mounted, and the following error message appears.


    ● zfs-mount.service loaded failed failed Mount ZFS filesystems
    A start job is running for dev-disk-by\.......


    If I mount with zfs mount -a, it says that the directory is not empty. And if I do an ls in my RAID mount point, it shows me Filme. Once I delete that directory, I can successfully mount.
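    In command form, the sequence described above looks roughly like this (the pool and dataset names are taken from the fstab entry above; this is only an illustration, and the leftover directory should only be removed if it is empty, which rmdir enforces):

    Code
    zfs mount -a              # fails with "directory is not empty"
    ls /Raid1Pool             # shows a stale "Filme" directory left behind by the bind mount
    rmdir /Raid1Pool/Filme    # remove the empty leftover mount point
    zfs mount -a              # now the datasets mount successfully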


    If I remove the lines from fstab, the problem disappears. So the problem is somewhat solved for me, but I assume it is not the intended behaviour that an NFS share prevents the drives from being mounted.


    Best regards,
    dawansch

  • I found out that if I remove the bind mount, my NFS share no longer works correctly. This is because the bind from the "NFS path" /export does not point to the dataset. I then decided to add a cronjob (@reboot) to set up the bind mount after boot has completed, as sketched below. This works now.
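    A minimal sketch of such a cron workaround, run from root's crontab (the dataset and export paths come from the fstab snippet in the first post; the exact line and the delay are assumptions):

    Code
    # re-create the bind mount once boot has settled
    @reboot sleep 60 && mount --bind /Raid1Pool/Filme /export/Filme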


    I can access the NFS share externally, but I have no write rights. This is due to the UID/GID nature of NFS, and my client performs the access as root. I added the additional options "anonuid=1000,anongid=100" to the NFS share to avoid this problem.
    It turned out that this did not work; I assume the user still does not have the necessary write rights on the folder. It also turned out that the ACL tab in the shared folder section is not working. I get the following error message:


    "Fehler #0:exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; setfacl --remove-all -M '/tmp/setfaclJSfwf0' -- '/Raid1Pool/Filme' 2>&1': setfacl: /Raid1Pool/Filme: Operation not supported' in /usr/share/openmediavault/engined/rpc/sharemgmt.inc:991Stack trace:#0 /usr/share/php/openmediavault/rpc/serviceabstract.inc(528): OMVRpcServiceShareMgmt->{closure}('/tmp/bgstatus7d...', '/tmp/bgoutputaz...')#1 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(996): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure), NULL, Object(Closure))#2 [internal function]: OMVRpcServiceShareMgmt->setFileACL(Array, Array)#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)#4 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('setFileACL', Array, Array)#5 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('ShareMgmt', 'setFileACL', Array, Array, 1)#6 {main}"


    I assume this is related to the ZFS filesystem and the zpool....
    I used
    zfs set acltype=posixacl tank/datatank
    to enable ACLs on ZFS. Now the ACL menu works as it should, and after setting the access rights I am able to write through NFS. Hope I can be of help to others encountering the same issues. Those items should be added to OMV for people with ZFS filesystems.
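    For the pool from this thread that would look something like the following (assuming the share lives on the dataset Raid1Pool/Filme; setting the property on the pool root lets child datasets inherit it):

    Code
    # enable POSIX ACLs so setfacl works on the ZFS datasets
    zfs set acltype=posixacl Raid1Pool
    # optional companion setting: store xattrs (including ACLs) as system attributes
    zfs set xattr=sa Raid1Pool
    # verify that the share's dataset picked up the settings
    zfs get acltype,xattr Raid1Pool/Filme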

  • Hi,


    I have written this script as a workaround :
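    A rough sketch of what such a workaround script could look like (the dataset and export names are taken from the first post, and the loop is an assumption, so the actual script may differ):

    Code
    #!/bin/bash
    # OMV_ZFS_Fix.sh (sketch): make sure the ZFS datasets are mounted, then
    # re-create the /export bind mounts that the OMV NFS plugin expects.
    zfs mount -a
    for share in Filme; do
        mountpoint -q "/export/${share}" || mount --bind "/Raid1Pool/${share}" "/export/${share}"
    done
    # re-export so the NFS server picks up the now-valid paths
    exportfs -ra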



    and I call it from a cronjob at reboot time :


    @reboot /root/OMV_ZFS_Fix.sh


    but it seems the script is not run at the right time ... the result is not what I expected ...


    how can I run this script at the END of the boot process?


    Best regards,
    Samuel

  • Some more info :


    I did not manage to make it work as I wanted using cron jobs.
    So I made some attempts with systemd.


    Here is my systemd service file calling the bash script from the previous post:
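    A minimal sketch of what such a unit could look like (the script path comes from the cron line in the previous post; the ordering directives and everything else are assumptions, not necessarily the original file):

    Code
    [Unit]
    Description=Re-create ZFS bind mounts for NFS exports after boot
    # assumption: run only after ZFS datasets and local filesystems are mounted
    After=zfs-mount.service local-fs.target
    Wants=zfs-mount.service

    [Service]
    Type=oneshot
    ExecStart=/root/OMV_ZFS_Fix.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target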



    You have to copy the service file to /etc/systemd/system/ZFS_Fix.service


    Then, to enable the script :




    Code
    systemctl daemon-reload
    systemctl enable ZFS_Fix.service
    reboot
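
    To check after the reboot whether the unit actually ran, the usual systemd status commands should do:

    Code
    systemctl status ZFS_Fix.service    # did the oneshot unit finish successfully?
    journalctl -u ZFS_Fix.service       # output of the script, if any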

    It seems to work for me!


    Best regards,
    Samuel

  • Well


    after some research, this seems to be an issue with the NFS plugin, not ZFS:
    the /export/* directories are not mounted as they should be when using ZFS,
    and the needed symlinks between the ZFS file system and the /export mount points are missing.
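
    One way to check this (using the dataset and export names from the first post as an example) is to compare the mount table against the expected bind mount:

    Code
    findmnt /export/Filme                 # is the export directory actually a mount?
    zfs get mountpoint Raid1Pool/Filme    # where does the dataset itself sit?
    exportfs -v                           # what is the NFS server really exporting?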


    I still need to analyze this to find the exact error,
    but maybe it is an error that occurs when you do not use the default ZFS mount points.


    Best regards,
