Posts by hypnotoad

    votdev: Your diff works out of the box, great! On the client, the fsid field is now populated and stays consistent over time. (It is not the same value as on the server, but below I verified that the client does use the uuid value.) In the example below, the first fsid is an old one that was still mounted; the second is a share re-mounted after applying the fix. Adding and deleting shares now works flawlessly for me, and all clients keep working.

    client # cat /proc/fs/nfsfs/volumes
    v3 c0a8b2cc 801 0:46 1:0 no
    v3 c0a8b2cc 801 0:45 7476df8eaffe784d:0 no

    Checking if it works properly:

    1) The exports file seems to be parsed correctly:

    server # exportfs -s -v

    2) As suggested here, I analyzed the traffic with wireshark and can see that files I open via NFS get a file handle that contains the above-specified uuid:
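    The embedded uuid can also be spotted directly in the captured file handle bytes. A minimal sketch of that check (the uuid prefix is the fsid value visible in /proc/fs/nfsfs/volumes above; the file handle hex string is made up for illustration and would come from your own wireshark capture):

```shell
#!/bin/sh
# Hypothetical check: does a captured NFSv3 file handle (hex from wireshark)
# contain the hex form of the fsid uuid configured in /etc/exports?
UUID="7476df8eaffe784d"                              # fsid prefix seen on the client
FH="0100070002001600247476df8eaffe784d00aabbccdd"    # captured handle (made up)

if printf '%s' "$FH" | grep -qi "$UUID"; then
    echo "fsid uuid found in file handle"
else
    echo "fsid uuid not present"
fi
```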


    PR is here:

    According to "man exports", fsid can be a uuid. The uuid could be associated with a share when it is created and then used in the j2 template. Is there some more context for me to read up on? Maybe I could try a patch, but I would only do that if it had a chance of getting integrated in the end.
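    For illustration, a generated /etc/exports entry using a uuid fsid might look like this (path, network, and uuid are made up; only the fsid=<uuid> form itself comes from "man exports"):

```
/export/media  192.168.178.0/24(rw,subtree_check,fsid=7476df8e-affe-784d-9c10-2b5c1f0e8d3a)
```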

    Hi, I am new to openmediavault but have significant experience with nfs. I have my server set up with the currently stable omv 5.6.19-1 and use it as an nfs server.

    I quickly noticed that all my nfs clients report "stale file handles", which should not happen when the server is restarted or reconfigured: nfs carries out a state-recovery procedure to re-sync client and server after a server restart. I saw that /etc/exports just attaches increasing fsid values to the nfs shares, which is not stable when a share is removed or added. Is that intentional? The fsid plays an important role in state recovery and should be preserved across config changes!
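    To illustrate the problem with sequential fsids, here is a made-up /etc/exports before and after deleting one share:

```
# three shares, fsids assigned by position
/export/a 192.168.1.0/24(rw,fsid=1)
/export/b 192.168.1.0/24(rw,fsid=2)
/export/c 192.168.1.0/24(rw,fsid=3)

# after deleting /export/b, the remaining shares get renumbered:
/export/a 192.168.1.0/24(rw,fsid=1)
/export/c 192.168.1.0/24(rw,fsid=2)   # was 3 -> clients holding handles on /export/c go stale
```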

    Related topics:

    - NFS error: fileid changed

    - RE: NFS I do need help!