Unable to mount NFS shares after OMV8 update, stale file handles from ZFS export

  • Been running OMV7 for years now with several shares working flawlessly, automounted by Ubuntu 24.04 clients.


    I upgraded to OMV8 and now none of these shares are working correctly. When I manually mount the share, there are no errors, but when I `ls -l` the directory, every file is listed but looks like the following:


    # ls -l

    ls: cannot access 'Downloads': Stale file handle

    ?????????? ? ? ? ?            ?  Downloads


    From the client I did a showmount with the following result:


    # showmount -e server

    clnt_create: RPC: Program not registered


    Asking the great AI in the sky, it said to run rpcinfo and that a mountd service should be listed, but none is:


    # rpcinfo -p localhost

       program vers proto   port  service

        100000    4   tcp    111  portmapper

        100000    3   tcp    111  portmapper

        100000    2   tcp    111  portmapper

        100000    4   udp    111  portmapper

        100000    3   udp    111  portmapper

        100000    2   udp    111  portmapper

        100024    1   udp  48704  status

        100024    1   tcp  41223  status

        100003    4   tcp   2049  nfs
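
    For context: showmount talks to the separate MOUNT protocol (RPC program 100005, mountd), which is indeed missing from the rpcinfo output above. NFSv4 clients never use it (they mount directly against port 2049), so its absence only breaks showmount and NFSv3 mounts. A small sketch of that check, run against a captured dump rather than a live server:

```shell
# Look for program 100005 (mountd) in an rpcinfo dump.
# The dump below is a stand-in copied from the output above, not a live query.
rpcinfo_dump='100000    4   tcp    111  portmapper
100003    4   tcp   2049  nfs'

if printf '%s\n' "$rpcinfo_dump" | grep -q '^100005'; then
    msg="mountd registered"
else
    msg="mountd not registered - showmount and NFSv3 mounts will fail"
fi
echo "$msg"
```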


    Tried restarting nfs-mountd with no change; looking at the service status, I have the following:


    # systemctl status nfs-mountd

    ● nfs-mountd.service - NFS Mount Daemon

         Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static)

         Active: active (running) since Sun 2026-02-01 06:56:54 MST; 7s ago

     Invocation: 58505c561df34b28ab70c63ed86d1fc3

           Docs: man:rpc.mountd(8)

        Process: 2169768 ExecStart=/usr/sbin/rpc.mountd (code=exited, status=0/SUCCESS)

       Main PID: 2169770 (rpc.mountd)

          Tasks: 1 (limit: 114925)

         Memory: 580K (peak: 1.8M)

            CPU: 6ms

         CGroup: /system.slice/nfs-mountd.service

                 └─2169770 /usr/sbin/rpc.mountd


    Contents of the /etc/exports for this share is


    /export/public 192.168.100.0/24(fsid=56f3173a-3101-474e-a9da-84ee1b29cc67,rw,subtree_check,insecure,crossmnt)
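
    One way to cross-check that entry is to pull the fsid back out and compare it with what the server is actually exporting (`exportfs -v` on the server shows the live options). The extraction itself can be sketched with `sed`, using the line above as input:

```shell
# Extract the fsid option from an /etc/exports entry so it can be compared
# with the value reported by `exportfs -v`. The line is copied from this post.
line='/export/public 192.168.100.0/24(fsid=56f3173a-3101-474e-a9da-84ee1b29cc67,rw,subtree_check,insecure,crossmnt)'
fsid=$(printf '%s\n' "$line" | sed -n 's/.*fsid=\([^,)]*\).*/\1/p')
echo "$fsid"
```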


    From the server, I can successfully mount the share, but showmount, even run locally, shows the RPC error.


    I'm at a loss as to what to try next.

  • Still been trying to figure this out; I have tried the following and am still seeing issues.


    On my Home Assistant box, I have configured it to mount a share on the server, and it is also showing an error during a backup to the NFS share.


    2026-02-02 20:12:11.837 ERROR (MainThread) [supervisor.backups.manager] Could not copy backup to Servbert_Backup due to: [Errno 30] Read-only file system: '/data/mounts/Server_Backup/Automatic_backup_2026.1.2_2026-02-02_20.11_28025048.tar'


    It says the filesystem is read-only, but a cat of /etc/exports shows otherwise:

    /export/homeassistant_backup homeassistant(fsid=57974949-446b-462b-a485-0a190aad3d77,rw,subtree_check,insecure,crossmnt)


    I also booted Ubuntu 24.04 and Kubuntu 25.10 from ISO just to have a clean system, and after installing the NFS client packages I tried mounting; these too show the "stale file handle" issue.

  • One thing to note is that as part of the OMV8 update, I "lost" my ZFS drives that were being shared over NFS. I had to reimport the pools, and things looked good until I found the NFS stale file handle issue.


    Searching suggests that the fsid can change when a zpool import is run. I have already removed all NFS shares in OMV and recreated them, but I am not sure how to tell whether the fsid in /etc/exports is correct.
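
    For what it's worth, the fsid in /etc/exports does not have to match anything on disk: per exports(5) it simply identifies the export to clients, and it can be a UUID or a small integer, so long as it is unique per export and stable across reboots (fsid=0 is reserved for the NFSv4 pseudo-root). ZFS datasets have no block-device UUID for the kernel to derive a value from, which is why an explicit fsid is needed at all. A hand-written entry with an arbitrary integer fsid (an illustrative sketch, not OMV's generated config) would be:

```
/export/public 192.168.100.0/24(rw,insecure,crossmnt,fsid=101)
```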

  • I was facing the same issue after the upgrade to OMV8.


    I was able to fix it by modifying the Extra Options on the NFS exports like so:


    My old settings:

    Code
    subtree_check,insecure,crossmnt,no_root_squash

    My new settings:

    Code
    no_subtree_check,insecure,no_root_squash,mountpoint,crossmnt


    I had to change subtree_check to no_subtree_check

    and add mountpoint.


    This fixed it for me. So I suggest testing with these two options.
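
    Mapped onto the export from the first post in this thread, that option set gives an entry like the following (path, network, and fsid taken from that post; treat it as a template). The mountpoint option additionally tells the server to refuse the export unless the path really is a mounted filesystem, which guards against exporting an empty directory when a pool fails to import:

```
/export/public 192.168.100.0/24(fsid=56f3173a-3101-474e-a9da-84ee1b29cc67,rw,no_subtree_check,insecure,no_root_squash,mountpoint,crossmnt)
```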

  • Trekkie,


    Thank you so much for that information. I added just the no_subtree_check option and that immediately fixed the issue.


    I tried adding the mountpoint as well, which also worked in combination with the no_subtree_check, but it seems to be the no_subtree_check that really resolved the situation.


    I updated the thread title to note that this was only a problem on NFS shares that came from a ZFS dataset. Creating a share from an ext4 filesystem did not have any problems.


    Other things I tried prior to Trekkie providing the solution.


    - A fresh install of OMV 8.0 using both the Proxmox 6.14 and 6.17 kernels with ZFS NFS exports

    - A fresh install going back to OMV 7.4, which still had the issue.


    I really don't know what changed such that I couldn't even go back to the 7.4 version that had been working.


    If anyone can explain what no_subtree_check is doing with regard to ZFS, I would love to understand.

  • gmccone

    Changed the title of the thread from “Unable to mount NFS shares after OMV8 update, stale file handles” to “Unable to mount NFS shares after OMV8 update, stale file handles from ZFS export”.
  • Hey,


    glad to hear you could fix your problem.


    I don't really know what might have changed. Until now I thought it might be something that changed within NFS on Debian 13. But perhaps not, since going back there didn't fix the issue. Perhaps something changed the zpool when importing/mounting it for the first time with the new kernel.


    I suspected the problem would be solved by no_subtree_check. The other option was just something I kept from ZFS's own defaults. My first workaround was to ditch the OMV NFS exports and just set sharenfs=on on the zpool. This creates its own exports file in /etc/exports.d/ (iirc). That's where I got the export options.

  • Just wanted to add my thanks -- I've been tearing my hair out all afternoon trying to work out why my NFS shares would return stale file handles immediately after mounting.


    I've been migrating to a new server setup (first time using OMV and ZFS) and had been trying to set up NFS shares from ZFS datasets (one for each dataset). I'd narrowed it down to something to do with the combination of ZFS and NFS but hadn't solved it. Changing the default subtree_check flag to no_subtree_check fixed it.

  • I've had this "stale file handle" problem on NFS shares from ZFS after upgrading to OMV 8. To solve it I set OMV to boot the older kernel, Linux 6.8.12-17-pve, but then I'd be missing out on kernel updates.


    Today I tried the no_subtree_check approach along with the newer kernels, but my problem was only solved when I set, in the OMV web interface, the NFS version to 4.2 (removing the older versions there) and put ip:/export/share on the clients (earlier, there was no need to put /export/ before share names). Now my OMV is booted on Linux 6.17.13-1-pve and all my shares are working again, even with my Proxmox client (for that one I removed the shares and re-added them).
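
    For anyone replicating this: with a v4-only server the mount path must include the export root, so a client fstab entry would look like the sketch below (server name and mountpoint are placeholders, not values from this thread):

```
# /etc/fstab - NFSv4.2 mount using the full /export/... path
server:/export/public  /mnt/public  nfs  vers=4.2,_netdev  0  0
```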


    Have been a user of OMV since 0.4 or 0.5, if I recall correctly, and I love the work so far.
