Hello.
After a power failure I get the error: "Failed to mount: Stale NFS handle". I tried to find something about this error, but nothing really matches my problem.
Any ideas?
THW
Disconnect all the NFS clients or stop them from using the server, then reboot the OMV server.
The NFS service is not even active - no client is configured or connected. Rebooting the server hangs with some fsck errors and needs to be continued with Ctrl-D.
Well, did the fsck finish without errors?
On what kind of media is OMV installed (hard drive, USB stick)?
No. fsck stops with the error: fsck died with exit status 4. It looks to me as if this happens when the system checks the RAID volume and not the OS volume. The OS is installed on a USB stick and the RAID /dev/md0 contains 3 disks in RAID 5.
After the boot is continued with Ctrl-D, OMV starts normally and is reachable via the web interface. The RAID is there but not mounted, and it shows the same error when I try to mount it.
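One way to get past the hanging boot (a hedged sketch, not OMV's official procedure) is to temporarily comment out the array's automount so the box comes up without the failing fsck. This is rehearsed here on a scratch copy of fstab; on the real system you would edit /etc/fstab itself. The mount line below is an assumption; the array UUID is the one OMV reports later in this thread.

```shell
# Scratch copy of fstab standing in for /etc/fstab (lines are hypothetical).
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
UUID=83d26d42-84b1-4933-a937-6c07f4e83960 / ext4 errors=remount-ro 0 1
UUID=89d25a53-f125-48f9-8048-1a0c3270a042 /media/nasraid ext4 defaults 0 2
EOF
# Comment out the line that mounts the damaged array so boot-time fsck skips it.
sed -i '/89d25a53-f125-48f9-8048-1a0c3270a042/s/^/#/' "$FSTAB"
grep '^#' "$FSTAB"    # shows the now-disabled mount entry
rm -f "$FSTAB"
```

Once the system boots cleanly, the array can be checked and repaired by hand.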
How old is your install on the USB drive?
Do you have the flash memory plugin installed?
Hello.
The installation and USB stick are only around two weeks old (and the stick is the special type needed, not a standard USB stick).
Not sure if that plugin is installed, but as OMV boots normally I don't think the problem is with the USB.
I am unable to mount the RAID volume /dev/md0. The USB stick is mounted and working fine.
Maybe some kind of logfile would be helpful?
I tried fsck -n, with the result that /dev/sdd1 is fine (the USB stick for the OS).
Looks like there is a problem with the file system "only"?
I tried to find more information about the error, and it looks to me as if this is not really an NFS problem?
The RAID is active and available, but when I try to mount it I get (found a way to connect via SSH at last):
Error #6000: exception 'OMVException' with message 'Failed to mount '89d25a53-f125-48f9-8048-1a0c3270a042': mount: Stale NFS file handle' in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:921 Stack trace: #0 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array) #1 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array) #2 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('mount', Array, Array) #3 /usr/sbin/omv-engined(500): OMVRpc::exec('FileSystemMgmt', 'mount', Array, Array, 1) #4 {main}
Have you tried fsck on the array?
I read quite a few pages yesterday, but nothing matched well enough that I tried anything myself.
There are already some TB on the RAID. Not too important data, but I would really prefer not to lose it.
Maybe this information will help (fsck output at the end):
root@omvnas:/# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0] sdc[2] sdb[3]
11720781824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/44 pages [0KB], 65536KB chunk
unused devices: <none>
root@omvnas:/# blkid
/dev/sdd1: UUID="83d26d42-84b1-4933-a937-6c07f4e83960" TYPE="ext4"
/dev/sdd5: UUID="2659e6f2-cae4-45a1-81b0-4ec833404128" TYPE="swap"
/dev/sda: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="9521a054-d0ef-f4e7-ef66-4935b611ee1c" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
/dev/sdc: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="c7f9cd5a-1ba6-fa49-c2e5-2979ad6f0570" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
/dev/md0: LABEL="OMVNAS" UUID="89d25a53-f125-48f9-8048-1a0c3270a042" TYPE="ext4"
/dev/sdb: UUID="a50e22b2-57da-0b06-2dfa-b2d7c6e67775" UUID_SUB="a25aac9e-2eed-cd33-d98c-0a63be4c909e" LABEL="omvnas:NASRAID" TYPE="linux_raid_member"
root@omvnas:/# fdisk -l
Disk /dev/sda: 6001.2 GB, 6001175126016 bytes
255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/sdc: 6001.2 GB, 6001175126016 bytes
255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdb: 6001.2 GB, 6001175126016 bytes
255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdd: 16.0 GB, 15977152512 bytes
255 heads, 63 sectors/track, 1942 cylinders, total 31205376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00016318
Device Boot Start End Blocks Id System
/dev/sdd1 * 2048 29833215 14915584 83 Linux
/dev/sdd2 29835262 31203327 684033 5 Extended
/dev/sdd5 29835264 31203327 684032 82 Linux swap / Solaris
Disk /dev/md0: 12002.1 GB, 12002080587776 bytes
2 heads, 4 sectors/track, -1364771840 cylinders, total 23441563648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
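Two quick sanity checks on the output above (the numbers and status string are the ones from this thread): the [3/3] [UUU] in /proc/mdstat means all three members are up, and the md0 size is internally consistent, since mdstat reports 1 KiB blocks and 11720781824 × 1024 equals the 12002080587776 bytes fdisk shows for /dev/md0. So the array itself looks intact; the damage is at the filesystem level.

```shell
# Status string copied from the mdstat output above.
MDSTAT='11720781824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]'
# [UUU] with no underscore means every member is up; [_UU] would mean degraded.
case "$MDSTAT" in
    *'[UUU]'*) echo "array healthy" ;;
    *)         echo "array degraded" ;;
esac
# mdstat counts 1 KiB blocks; this should match fdisk's byte count for /dev/md0.
echo $((11720781824 * 1024))   # 12002080587776
```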
root@omvnas:~# fsck -n /dev/md0
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
OMVNAS contains a file system with errors, check forced.
Root inode is not a directory. Reset? no
Pass 1: Checking inodes, blocks, and sizes
Root inode is not a directory. Clear? no
Inode 5 has illegal block(s). Clear? no
Illegal block #88 (3508168211) in inode 5. IGNORED.
Illegal block #90 (4063934329) in inode 5. IGNORED.
Illegal block #91 (3369928613) in inode 5. IGNORED.
Illegal block #92 (3636791923) in inode 5. IGNORED.
Illegal block #95 (3547115074) in inode 5. IGNORED.
Illegal block #97 (3336172833) in inode 5. IGNORED.
Illegal block #103 (4053301678) in inode 5. IGNORED.
Illegal block #104 (2931344020) in inode 5. IGNORED.
Illegal block #105 (3243040186) in inode 5. IGNORED.
Illegal block #108 (4168121264) in inode 5. IGNORED.
Illegal block #114 (4094629423) in inode 5. IGNORED.
Too many illegal blocks in inode 5.
Clear inode? no
Suppress messages? no
Illegal block #116 (3166199097) in inode 5. IGNORED.
Illegal block #117 (3218373666) in inode 5. IGNORED.
Illegal block #118 (3844470447) in inode 5. IGNORED.
Illegal block #119 (3643680229) in inode 5. IGNORED.
Illegal block #130 (3415081058) in inode 5. IGNORED.
Illegal block #131 (3660844965) in inode 5. IGNORED.
Illegal block #135 (4102791711) in inode 5. IGNORED.
Illegal block #137 (4020322241) in inode 5. IGNORED.
Illegal block #149 (3631676409) in inode 5. IGNORED.
Illegal block #151 (3750244156) in inode 5. IGNORED.
Illegal block #152 (3730100234) in inode 5. IGNORED.
Illegal block #156 (3271481292) in inode 5. IGNORED.
Too many illegal blocks in inode 5.
Clear inode? no
........ (a lot more of these lines)
Suppress messages? no
Illegal block #1017 (4271898828) in inode 5. IGNORED.
Illegal block #1022 (3215158021) in inode 5. IGNORED.
Illegal block #1023 (3363521781) in inode 5. IGNORED.
Illegal block #1027 (4235193604) in inode 5. IGNORED.
Illegal block #1032 (3539704480) in inode 5. IGNORED.
Illegal block #1034 (2959270712) in inode 5. IGNORED.
Illegal block #1035 (3377900302) in inode 5. IGNORED.
Illegal indirect block (3167713155) in inode 5. IGNORED.
Illegal doubly indirect block (3554809733) in inode 5. IGNORED.
Error while iterating over blocks in inode 5: doubly indirect blocks found
OMVNAS: ********** WARNING: Filesystem still has errors **********
e2fsck: aborted
OMVNAS: ********** WARNING: Filesystem still has errors **********
root@omvnas:~#
I can't translate right now on the phone. Does it say "clean" at the end, or "all errors repaired"?
No - warning: still errors in the filesystem.
e2fsck stopped.
Can I run it without any risk? What options do you suggest?
Check the SMART values of the array disks. You can post them here with pastebin.
Hope this is correct - I have never worked with pastebin before: http://pastebin.com/1KcQiSrU
The SMART values look fine; those are good drives. You can try and run fsck -y /dev/md0 -
that will try to fix all the errors, answering yes to every question.
No luck with this. fsck only shows this:
root@omvnas:~# fsck -y -v /dev/md0
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
OMVNAS contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Root inode is not a directory. Clear? yes
Inode 5 has illegal block(s). Clear? yes
Illegal block #3703 (3452056091) in inode 5. CLEARED.
Illegal block #3708 (3429787282) in inode 5. CLEARED.
Illegal block #3721 (3359359035) in inode 5. CLEARED.
Illegal block #3727 (3225084742) in inode 5. CLEARED.
Illegal block #3734 (3774572544) in inode 5. CLEARED.
Illegal block #3737 (2973333561) in inode 5. CLEARED.
Illegal block #3739 (3422716129) in inode 5. CLEARED.
Illegal block #3747 (2986599424) in inode 5. CLEARED.
Illegal block #3755 (3829201920) in inode 5. CLEARED.
Illegal block #3759 (3815391235) in inode 5. CLEARED.
Illegal block #3766 (3246132628) in inode 5. CLEARED.
Too many illegal blocks in inode 5.
Clear inode? yes
I even tried with:
root@omvnas:~# fsck -y -v -C /dev/md0
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
OMVNAS contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Root inode is not a directory. Clear? yes
Inode 5 has illegal block(s). Clear? yes
Illegal block #3767 (4123599192) in inode 5. CLEARED.
Illegal block #3770 (4168589763) in inode 5. CLEARED.
Illegal block #3775 (3533655559) in inode 5. CLEARED.
Illegal block #3791 (3053680323) in inode 5. CLEARED.
Illegal block #3793 (3129722901) in inode 5. CLEARED.
Illegal block #3800 (3137354080) in inode 5. CLEARED.
Illegal block #3803 (2996087043) in inode 5. CLEARED.
Illegal block #3808 (3165323760) in inode 5. CLEARED.
Illegal block #3812 (3508961644) in inode 5. CLEARED.
Illegal block #3814 (3784365983) in inode 5. CLEARED.
Illegal block #3819 (3223998164) in inode 5. CLEARED.
Too many illegal blocks in inode 5.
Clear inode? yes
OMVNAS: | | 0.1%
But after more than 8 hours, not a single 0.1% of progress.
Any idea what else could help without deleting the data?
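Before resorting to raw file recovery, one hedged option is pointing e2fsck at a backup superblock with -b, in case the primary superblock area is what is damaged. The sketch below rehearses the procedure on a scratch image file rather than the real /dev/md0 (it assumes e2fsprogs is installed; on a real 4 KiB-block ext4 the first backup would typically be at block 32768, which mke2fs -n reports without writing anything).

```shell
# Rehearse backup-superblock recovery on a throwaway image, not the real array.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=16 status=none
mkfs.ext4 -q -F -b 1024 "$IMG"                      # scratch ext4, 1 KiB blocks
mke2fs -n -F -b 1024 "$IMG" | grep -i superblock    # -n only prints; lists backup locations
# Simulate corruption: wipe the primary superblock (1 KiB at offset 1024).
dd if=/dev/zero of="$IMG" bs=1024 seek=1 count=1 conv=notrunc status=none
# Repair from the first backup superblock (block 8193 for 1 KiB blocks).
e2fsck -y -b 8193 -B 1024 "$IMG"
rm -f "$IMG"
```

Given that fsck here is clearing inode after inode, this may not rescue much, but it is cheap to try before wiping anything.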
If you have a drive to back up to, you can use photorec. It can restore files by analysing the disk block by block; the only problem is that the output is raw - no folders, no original file names, just the files with an extension. You can even set filters for extensions like jpg or mp3 to recover.
It seems like the filesystem got really corrupted, probably a write hole, since you suffered a power failure.
That's bad news.
As you wrote "power failure" it came to my mind that maybe I was not exact enough about it. One of the disks of the RAID had a power failure and was completely dead. I powered down the OMV and checked the disk externally, but no luck - it does not even spin up any longer. Then I replaced the failed disk with a new one I still had available and restarted the server. I expected it to boot up with a degraded RAID 5, and it did, but that the file system had been impacted was a real surprise. Not sure if this is important, but getting around 5TB of data back without any file names is not really worth the effort of copying everything to another disk.
Makes you think about RAID solutions. Is it that important to use them at home? You can use pooling solutions: if a disk dies you lose only the data on that drive. Or SnapRAID, which is much more flexible and works at the filesystem level.
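For reference, a SnapRAID setup is just a plain config file plus periodic sync runs: one disk holds parity, the data disks stay independent filesystems, so a dead disk costs only its own contents. A minimal sketch of /etc/snapraid.conf (all paths below are hypothetical examples, not from this system):

```
# Parity file lives on a dedicated disk at least as large as the biggest data disk.
parity /srv/parity/snapraid.parity

# Content files store the checksums; keep copies on more than one disk.
content /var/snapraid/snapraid.content
content /srv/data1/snapraid.content

# Each data disk gets a name and a mount point.
data d1 /srv/data1
data d2 /srv/data2
```

Protection is then refreshed with `snapraid sync`, typically from a nightly cron job, rather than being continuous like md RAID.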