fsid seems to be applied dynamically

  • Hi, I am new to openmediavault but have significant experience with NFS. I have my server set up with the currently stable omv 5.6.19-1 and use it as an NFS server.


    I quickly noticed that all my NFS clients report "stale file handles" after the server is restarted or reconfigured, which should not happen: NFS is supposed to carry out a state recovery procedure to resynchronize client and server after a server restart. Looking at /etc/exports, it just seems to attach increasing fsid values to the NFS shares, which is not consistent if a share is removed or added. Is that intentional? The fsid plays an important role in state recovery and should be preserved across configuration changes!
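To make the renumbering concrete, here is a minimal sketch of how a loop-index fsid assignment shifts when a share is removed. The share names, paths, and client subnet are made up for illustration, not taken from an actual OMV config:

```shell
#!/bin/sh
# Hypothetical sketch: assign fsid values from a running loop index,
# the way the generated /etc/exports apparently does.
gen_exports() {
    i=1
    for share in "$@"; do
        echo "/export/$share 192.168.1.0/24(fsid=$i,rw)"
        i=$((i + 1))
    done
}

gen_exports music photos backup   # backup gets fsid=3
echo "---"
gen_exports music backup          # photos removed: backup silently shifts to fsid=2
```

Any client still holding file handles for the export that used to be fsid=3 will then see stale file handles, because the same fsid now refers to a different filesystem.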

    Related topics:

    - NFS error: fileid changed

    - RE: NFS I do need help!

    • Official Post

    fsid is just a loop index, so if you add or remove shares, the values can change. This might be a big change to fix...


    https://github.com/openmediava…/files/etc-exports.j2#L21


  • According to "man exports", fsid can be a UUID. The UUID could be associated with a share when it is created and then used in the j2 file. Is there more context for me to read up on? I could try a patch, but I would only do that if it had a chance of being integrated in the end.

    • Official Post

    hypnotoad Feel free to open a bug report and a PR that fixes the issue. It would be great to have someone with deeper NFS experience look at this issue. You can use the share's UUID because it is unique and does not change.


    Old:

    Code
    {{ separator }}{{ share.client | trim }}(fsid={{ v3_loop.index }},{{ share.options }}{% if share.extraoptions | length > 0 %},{{ share.extraoptions }}{% endif %})

    New:

    Code
    {{ separator }}{{ share.client | trim }}(fsid={{ share.uuid }},{{ share.options }}{% if share.extraoptions | length > 0 %},{{ share.extraoptions }}{% endif %})
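For illustration, with a hypothetical share at /export/srv exported to 192.168.1.1 (options abbreviated), the two templates would render roughly like this:

```
# Old: fsid from the loop index, changes when shares are added or removed
/export/srv 192.168.1.1(fsid=1,rw,root_squash)

# New: fsid from the share's UUID, stable across config changes
/export/srv 192.168.1.1(fsid=d72b956b-69e2-4a20-9a53-6bc4e73d3c54,rw,root_squash)
```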
  • votdev: Your diff seems to work out of the box, great! On the client, the fsid field is now populated and stays consistent over time. (It is not the same value as on the server, but below I checked that the UUID value is used by the client.) In the example below, the first fsid is an old one that was still mounted, and the second is a share that was re-mounted after applying the fix. Adding and deleting shares now works flawlessly for me; all clients continue to work.

    Code
    client # cat /proc/fs/nfsfs/volumes
    NV SERVER   PORT DEV          FSID                              FSC
    v3 c0a8b2cc  801 0:46         1:0                               no
    v3 c0a8b2cc  801 0:45         7476df8eaffe784d:0                no
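As a side note on reading /proc/fs/nfsfs/volumes: the FSID value is the fifth field, and the SERVER column is the server's IPv4 address in hex. A small sketch using the sample rows above:

```shell
#!/bin/sh
# Sample rows copied from the /proc/fs/nfsfs/volumes output above.
volumes='NV SERVER   PORT DEV          FSID                              FSC
v3 c0a8b2cc  801 0:46         1:0                               no
v3 c0a8b2cc  801 0:45         7476df8eaffe784d:0                no'

# Print the FSID of each mounted volume (fifth column, skipping the header).
echo "$volumes" | awk 'NR > 1 { print $5 }'

# Decode the hex SERVER field c0a8b2cc into dotted-quad form.
printf '%d.%d.%d.%d\n' 0xc0 0xa8 0xb2 0xcc   # 192.168.178.204
```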


    Checking if it works properly:

    1) The exports file seems to be parsed correctly:

    Code
    server # exportfs -s -v
    /export/srv  192.168.1.1(rw,wdelay,crossmnt,insecure,root_squash,fsid=d72b956b-69e2-4a20-9a53-6bc4e73d3c54,sec=sys,rw,insecure,root_squash,no_all_squash)


    2) As suggested here, I analyzed the traffic with Wireshark and can see that files I open via NFS get a file handle that contains the above-specified UUID:

    Code
    01000601d72b956b69e24a209a536bc4e73d3c540a00b0b100000000f89b7200
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
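The containment above can be double-checked with a few lines of shell, using the UUID from the exportfs output and the captured handle bytes from this post:

```shell
#!/bin/sh
# Values copied from the exportfs output and the captured file handle above.
uuid="d72b956b-69e2-4a20-9a53-6bc4e73d3c54"
handle="01000601d72b956b69e24a209a536bc4e73d3c540a00b0b100000000f89b7200"

# Strip the hyphens and check that the raw UUID hex appears in the handle.
needle=$(echo "$uuid" | tr -d '-')
case "$handle" in
    *"$needle"*) echo "UUID found in file handle" ;;
    *)           echo "UUID not present" ;;
esac
```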


    PR is here:

    https://github.com/openmediavault/openmediavault/pull/1135

  • I just saw that the change is in 5.6.20-1 now. I should have mentioned that after the change, clients have to remount the folder (of course), since the fsid changed to something that will hopefully stay static forever.
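For anyone following along, a remount on each client looks something like this; the server address and paths are illustrative, not from this thread:

```
# Unmount the share (lazy unmount as a fallback if it is busy),
# then mount it again so the client picks up the new UUID-based fsid.
umount /mnt/share || umount -l /mnt/share
mount -t nfs 192.168.1.10:/export/srv /mnt/share
```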

  • I don't know nearly enough to know why, but for me this change has broken all of my NFS shares that live on mergerfs filesystems. As a temporary fix I've switched the relevant exports back to fsid=1, fsid=2, etc. What would the permanent solution be? Is using a UUID just going to be a no-go for mergerfs?


    When the UUID is used, attempting to mount the share on a remote server (or even locally using mount -t nfs) fails immediately with a "Stale file handle" error. Switching back to a simple integer for the fsid restores normal behaviour and the shares can be mounted successfully.
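The temporary workaround described above would look something like this in /etc/exports; the export paths and client subnet are illustrative:

```
# mergerfs-backed shares pinned to integer fsids as a workaround
/export/pool1 192.168.0.0/24(fsid=1,rw,insecure,root_squash)
/export/pool2 192.168.0.0/24(fsid=2,rw,insecure,root_squash)
```

followed by `exportfs -ra` to re-export everything with the new options. Note that OMV regenerates this file, so hand edits like this only survive until the next configuration apply.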

  • I did.


    When I wasn't able to mount the share on any clients, I eventually tried mounting it on the server itself by running (as root) mount -vvv -t nfs 192.168.0.99:/sharename foldername. I've never mounted any of my NFS shares locally in this way before, but I wanted to rule out any network weirdness or caching issues from previous mounts (if those are even a thing).


    Unfortunately, the result was exactly the same.
