fsid seems to be applied dynamically

  • Hi, I am new to openmediavault but have significant experience with NFS. I have my server set up with the currently stable OMV 5.6.19-1 and use it as an NFS server.

    I quickly noticed that all my NFS clients report "stale file handles" whenever the server is restarted or reconfigured, which should not happen: NFS carries out a state recovery procedure to bring client and server back in sync after a server restart. Looking at /etc/exports, the shares just seem to get increasing fsid values attached, which is not consistent when a share is removed or added. Is that intentional? The fsid plays an important role in state recovery and should be preserved across config changes!

    Related topics:

    - NFS error: fileid changed

    - RE: NFS I do need help!
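
To illustrate the problem described above, here is a sketch of what index-based fsid assignment does (the paths, client network, and options are invented for this example, not taken from an actual OMV install): deleting one share renumbers the shares after it, so file handles held by clients no longer match.

```
# /etc/exports generated with fsid = position in the share list
/export/media   192.168.0.0/24(fsid=1,rw,subtree_check)
/export/backup  192.168.0.0/24(fsid=2,rw,subtree_check)

# after /export/media is deleted and the file is regenerated,
# /export/backup silently moves from fsid=2 to fsid=1 -- clients
# still holding handles that encode fsid=2 get ESTALE
/export/backup  192.168.0.0/24(fsid=1,rw,subtree_check)
```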

  • According to "man exports", fsid can be a UUID. The UUID could be associated with a share when it is created and then used in the j2 file. Is there some more context for me to read into? Maybe I could try a patch, but I would only do that if it had a chance of being integrated in the end.

  • hypnotoad Feel free to open a bug report and a PR that fixes the issue. It would be great to have someone with deeper NFS experience look at this issue. You can use the share's UUID, because this is unique and does not change.


    Before (fsid taken from the loop index):

    {{ separator }}{{ share.client | trim }}(fsid={{ v3_loop.index }},{{ share.options }}{% if share.extraoptions | length > 0 %},{{ share.extraoptions }}{% endif %})

    After (fsid taken from the share's UUID):

    {{ separator }}{{ share.client | trim }}(fsid={{ share.uuid }},{{ share.options }}{% if share.extraoptions | length > 0 %},{{ share.extraoptions }}{% endif %})
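
Rendered, a template along those lines should yield exports entries of the form below ("man exports" allows the fsid= option to be a UUID). The path, client network, UUID, and options here are invented for illustration:

```
/export/backup 192.168.0.0/24(fsid=7476df8e-affe-4d78-9c1a-2b63f2a7d222,rw,subtree_check)
```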
  • votdev: Your diff seems to work out of the box, great! On the client, the fsid field is now populated and it stays consistent over time. (It is not the same value as on the server, but I checked below that the UUID value is used by the client.) In the example below, the first fsid is an old one that was still mounted; the second is a share that was re-mounted after applying the fix. Adding and deleting shares now works flawlessly for me; all clients continue to work.

    client # cat /proc/fs/nfsfs/volumes
    v3 c0a8b2cc 801 0:46 1:0 no
    v3 c0a8b2cc 801 0:45 7476df8eaffe784d:0 no

    Checking if it works properly:

    1) The exports file seems to be parsed correctly:

    server # exportfs -s -v

    2) As suggested here, I analyzed the traffic with Wireshark and can see that files I open via NFS get a file handle that contains the above-specified UUID:
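
The "stays consistent over time" claim can also be spot-checked from the shell by comparing the fsid option for a given export across two snapshots of the generated exports file. A minimal sketch follows; every path, export name, and UUID in it is invented for illustration:

```shell
# Sketch only: all paths, export names, and UUIDs below are invented.
# Idea: the fsid for a given export should be identical across two
# generated versions of the exports file.

# two sample snapshots of a generated exports file
cat > /tmp/exports.before <<'EOF'
/export/media  192.168.0.0/24(fsid=0aa39a47-5f7e-4f5c-9e2b-1c63f2a7d111,rw)
/export/backup 192.168.0.0/24(fsid=7476df8e-affe-4d78-9c1a-2b63f2a7d222,rw)
EOF
cat > /tmp/exports.after <<'EOF'
/export/backup 192.168.0.0/24(fsid=7476df8e-affe-4d78-9c1a-2b63f2a7d222,rw)
EOF

# fsid_of FILE EXPORT_PATH -> prints the fsid= value for that export
fsid_of() {
    awk -v p="$2" '$1 == p' "$1" | sed -n 's/.*fsid=\([^,)]*\).*/\1/p'
}

before=$(fsid_of /tmp/exports.before /export/backup)
after=$(fsid_of /tmp/exports.after /export/backup)
if [ "$before" = "$after" ]; then
    echo "fsid stable for /export/backup: $before"
else
    echo "fsid CHANGED: $before -> $after"
fi
```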


    PR is here:


  • I just saw that the change is in 5.6.20-1 now. I should have mentioned that after the change, the clients have to remount the share (of course), since the fsid changed to something that will hopefully stay static forever.
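
For anyone following along, the remount on each client is just the usual pair of commands (share name and mountpoint are placeholders here, not from the thread; -l lazy-unmounts if the old, now-stale handle blocks a normal umount):

```
client # umount /mnt/share            # or: umount -l /mnt/share
client # mount -t nfs server:/export/share /mnt/share
```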

  • I don't know nearly enough to know why, but for me this change has broken all of my NFS shares that live on mergerfs filesystems. As a temporary fix I've switched the relevant exports back to fsid=1, fsid=2, etc. What would the permanent solution be? Is using a UUID just going to be a no-go for mergerfs?

    When the UUID is used, attempting to mount the share from a remote server (or even locally using mount -t nfs) fails immediately with a "Stale file handle" error. Switching back to a simple integer for the fsid restores normal behaviour and the shares can be mounted successfully.
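
For reference, the workaround described above looks something like this in the exports file. FUSE-based filesystems such as mergerfs reportedly don't expose a filesystem UUID the server can resolve, so an explicit small integer per export is the usual fallback (path and network are invented; each export needs its own unique number):

```
# explicit integer fsid for a mergerfs-backed export
/srv/mergerfs-pool 192.168.0.0/24(fsid=1,rw,no_subtree_check)
```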

  • I did.

    When I wasn't able to mount the share on any clients I eventually tried mounting it on the server by running (as root) mount -vvv -t nfs foldername. I've never mounted any of my NFS shares locally in this way before, but I wanted to rule out any network weirdness or caching issues from previous mounts (if these are even a thing).

    Unfortunately, the result was exactly the same.
