Stale NFS handle

  • Hello.


    After a power failure I get an error: Failed to mount: Stale NFS handle.
    I tried to find something about this error, but nothing really matches my problem.
    Any ideas?


    THW

  • No. fsck stops with the error: fsck died with exit status 4. It looks to me as if this happens when the system checks the RAID volume and not the OS volume. The OS is installed on a USB stick, and the RAID /dev/md0 contains 3 disks in RAID 5.
    After continuing the boot with Ctrl-D, OMV starts normally and is reachable via the web interface. The RAID is there but not mounted, and it shows the same error when I try to mount it.
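
    For reference, this is what can be run read-only to confirm the array state (assuming the array really is /dev/md0):


    Code
    # Kernel view of all md arrays
    cat /proc/mdstat
    # Detailed array state, including failed or missing members
    mdadm --detail /dev/md0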

  • Hello.
    The installation and the USB stick are only around 2 weeks old (and the stick is the special type needed, not a standard USB stick).
    I am not sure if this plugin is installed, but as OMV boots normally, I don't think the problem is with the USB stick.
    I am unable to mount the RAID volume /dev/md0. The USB stick is mounted and working fine.
    Maybe some kind of log file would be helpful?
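
    If it helps, the mount error should also show up in the kernel log; something like this (paths are the Debian defaults):


    Code
    # Kernel messages mentioning the array, ext4 or the RAID layer
    dmesg | grep -iE 'md0|ext4|raid'
    # Recent system log entries
    tail -n 50 /var/log/syslog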

  • I tried fsck -n, with the result that /dev/sdd1 (the USB stick for the OS) is fine.
    So it looks like there is a problem with the file system "only"?
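
    The same non-destructive check should also work against the array itself; a sketch (again assuming /dev/md0):


    Code
    # -n answers "no" to every question, so nothing on the disk is changed
    fsck -n /dev/md0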


    I tried to find more information about the error, and it looks to me as if this is not really an NFS problem.
    The RAID is active and available, but when I try to mount it I get this (I finally found a way to connect via SSH):


    Error #6000: exception 'OMVException' with message 'Failed to mount '89d25a53-f125-48f9-8048-1a0c3270a042': mount: Stale NFS file handle' in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:921 Stack trace: #0 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array) #1 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array) #2 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('mount', Array, Array) #3 /usr/sbin/omv-engined(500): OMVRpc::exec('FileSystemMgmt', 'mount', Array, Array, 1) #4 {main}
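
    The same failure can be reproduced outside the web interface to see the kernel's own error message; a sketch (the mount point is just an example):


    Code
    # Try the mount by hand on a temporary mount point
    mkdir -p /mnt/test
    mount -t ext4 /dev/md0 /mnt/test
    # The kernel log usually contains the real reason behind the mount failure
    dmesg | tail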

  • I read quite a few pages yesterday, but nothing matched my problem well enough for me to try anything myself.
    There are already some TB on the RAID. The data is not too important, but I would really prefer not to lose it.
    Maybe this information will help (fsck output at the end):


    Code
    root@omvnas:/# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda[0] sdc[2] sdb[3]
          11720781824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
          bitmap: 0/44 pages [0KB], 65536KB chunk
     
    unused devices: <none>


    Code
    root@omvnas:/# blkid
    /dev/sdd1: UUID="83d26d42-84b1-4933-a937-6c07f4e83960" TYPE="ext4"
    /dev/sdd5: UUID="2659e6f2-cae4-45a1-81b0-4ec833404128" TYPE="swap"
    /dev/sda: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="9521a054-d0ef-f4e7-ef66-4935b611ee1c" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
    /dev/sdc: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="c7f9cd5a-1ba6-fa49-c2e5-2979ad6f0570" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
    /dev/md0: LABEL="OMVNAS" UUID="89d25a53-f125-48f9-8048-1a0c3270a042" TYPE="ext4"
    /dev/sdb: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="a25aac9e-2eed-cd33-d98c-0a63be4c909e" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
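
    Since blkid still identifies an ext4 filesystem on /dev/md0, the superblock can also be inspected read-only; a sketch:


    Code
    # Print the ext4 superblock contents without modifying anything
    tune2fs -l /dev/md0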



  • No luck with this. fsck only shows this:


    root@omvnas:~# fsck -y -v /dev/md0
    fsck from util-linux 2.20.1
    e2fsck 1.42.5 (29-Jul-2012)
    OMVNAS contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Root inode is not a directory. Clear? yes


    Inode 5 has illegal block(s). Clear? yes


    Illegal block #3703 (3452056091) in inode 5. CLEARED.
    Illegal block #3708 (3429787282) in inode 5. CLEARED.
    Illegal block #3721 (3359359035) in inode 5. CLEARED.
    Illegal block #3727 (3225084742) in inode 5. CLEARED.
    Illegal block #3734 (3774572544) in inode 5. CLEARED.
    Illegal block #3737 (2973333561) in inode 5. CLEARED.
    Illegal block #3739 (3422716129) in inode 5. CLEARED.
    Illegal block #3747 (2986599424) in inode 5. CLEARED.
    Illegal block #3755 (3829201920) in inode 5. CLEARED.
    Illegal block #3759 (3815391235) in inode 5. CLEARED.
    Illegal block #3766 (3246132628) in inode 5. CLEARED.
    Too many illegal blocks in inode 5.
    Clear inode? yes


    I even tried with:


    root@omvnas:~# fsck -y -v -C /dev/md0
    fsck from util-linux 2.20.1
    e2fsck 1.42.5 (29-Jul-2012)
    OMVNAS contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Root inode is not a directory. Clear? yes


    Inode 5 has illegal block(s). Clear? yes


    Illegal block #3767 (4123599192) in inode 5. CLEARED.
    Illegal block #3770 (4168589763) in inode 5. CLEARED.
    Illegal block #3775 (3533655559) in inode 5. CLEARED.
    Illegal block #3791 (3053680323) in inode 5. CLEARED.
    Illegal block #3793 (3129722901) in inode 5. CLEARED.
    Illegal block #3800 (3137354080) in inode 5. CLEARED.
    Illegal block #3803 (2996087043) in inode 5. CLEARED.
    Illegal block #3808 (3165323760) in inode 5. CLEARED.
    Illegal block #3812 (3508961644) in inode 5. CLEARED.
    Illegal block #3814 (3784365983) in inode 5. CLEARED.
    Illegal block #3819 (3223998164) in inode 5. CLEARED.
    Too many illegal blocks in inode 5.
    Clear inode? yes


    OMVNAS: | | 0.1%


    But after more than 8 hours, not a single 0.1% of progress.


    :( Any idea what else can help without deleting the data?
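
    One more idea I came across while searching, but have not dared to run yet: e2fsck can be pointed at a backup superblock in case the primary one is damaged. A sketch (32768 is only the usual location for a 4k block size; mke2fs -n is a dry run that writes nothing):


    Code
    # List the backup superblock locations WITHOUT creating a filesystem (-n = dry run)
    mke2fs -n /dev/md0
    # Retry the check using one of the listed backup superblocks
    e2fsck -b 32768 /dev/md0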

    • Official Post

    If you have a drive to back up to, you can use photorec. It can restore files by analysing the disk block by block; the only problem is that the output is raw: no folders, no original file names, just the files with an extension. You can even set filters for extensions like jpg or mp3, etc. to recover.
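
    For reference, a minimal photorec run might look like this (the output directory is just an example; photorec itself is menu-driven after start):


    Code
    # photorec is part of the testdisk package
    apt-get install testdisk
    # Scan the array and write recovered files to the given directory
    photorec /d /mnt/backup/recovered /dev/md0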


    It seems like the filesystem got really corrupted, probably by a write hole, since you suffered a power failure.

  • That's bad news :(
    As you wrote "power failure", it came to my mind that maybe I was not exact enough about the power failure: one disk of the RAID had a power failure and was completely dead. I powered down the OMV box and checked the disk externally, but no luck; it does not even spin up any longer. Then I replaced the failed disk with a new one I still had available and restarted the server. I expected it to boot up with a degraded RAID 5, and so it did, but that the file system had been impacted was really a surprise. I am not sure if this detail is important. Also, getting around 5 TB of data back without any file names is not really worth the effort of copying everything to another disk.
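
    For reference, whether the replacement disk was fully synced back can be read from the member metadata; a sketch (device names taken from the blkid output above):


    Code
    # Compare the per-member RAID metadata; the Events counters should match
    mdadm --examine /dev/sda /dev/sdb /dev/sdc | grep -E 'Device Role|Events|Array State'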
