Not sure you can. Did you answer Y to the ignore error question?
Failed to mount error on Raid 5
-
- solved
- mountaindew
-
-
Here's where I got until I wasn't sure what I should select for the last prompt:
Code
fsck.ext4 /dev/md127
e2fsck 1.41.12 (17-May-2010)
Error reading block 1953529856 (Invalid argument). Ignore error<y>? yes
Force rewrite<y>? yes
Error writing block 1953529856 (Invalid argument). Ignore error<y>? yes
Superblock has an invalid journal (inode 8). Clear<y>? yes
*** ext3 journal has been deleted - filesystem is now ext2 only ***
Superblock has_journal flag is clear, but a journal inode is present. Clear<y>? yes
The filesystem size (according to the superblock) is 2097152000 blocks
The physical size of the device is 1953508608 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>?
-
-
-
-
Try it without the -p then: fsck.ext4 -f /dev/md127
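The size mismatch in that output is worth a closer look. Assuming the usual 4 KiB ext4 block size, the superblock claims a bigger filesystem than the device can hold:

```shell
# Superblock vs. device size, using the two block counts from the fsck
# output above and assuming 4096-byte ext4 blocks.
echo "superblock claims: $(( 2097152000 * 4096 )) bytes"
echo "device provides:   $(( 1953508608 * 4096 )) bytes"
```

That is roughly 8.6 TB claimed versus 8.0 TB actually present, which would fit a filesystem that was created before the array geometry was final.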
-
Made some progress but not sure if it's any better:
Code
G54Vault:~# fsck.ext4 -f /dev/md127
e2fsck 1.41.12 (17-May-2010)
Error reading block 1953529856 (Invalid argument). Ignore error<y>? yes
Force rewrite<y>? yes
Error writing block 1953529856 (Invalid argument). Ignore error<y>? yes
Superblock has an invalid journal (inode 8). Clear<y>? yes
*** ext3 journal has been deleted - filesystem is now ext2 only ***
Superblock has_journal flag is clear, but a journal inode is present. Clear<y>? yes
The filesystem size (according to the superblock) is 2097152000 blocks
The physical size of the device is 1953508608 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
Error writing block 1953529856 (Invalid argument). Ignore error<y>? no
Pass 1: Checking inodes, blocks, and sizes
Journal inode is not in use, but contains data. Clear<y>? yes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Error reading block 1954021376 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? no
fsck.ext4: Can't read an block bitmap while retrying to read bitmaps for UltraNAS
Recreate journal<y>? yes
Creating journal (32768 blocks): Error reading block 1954021376 (Invalid argument). Ignore error<y>? no
Error: Can't read an block bitmap while trying to create journal
e2fsck: aborted
root@G54Vault:~#
-
-
Maybe a newer version of fsck/ext tools would help. Try booting the system from systemrescuecd and try the fsck command again.
-
I ran systemrescue and got a bunch of errors. I tried to force-rewrite them, and after about 50 I gave up.
I think at this point I should just wipe the drives and start over. I appreciate everyone taking the time to troubleshoot this for me. It's the reason I keep using OMV.
-
Sorry it didn't work. Something must have been too corrupt.
-
-
Well I guess I can't win...
I started over with the same 3 drives and OS hard drive (wiped the raid drives) and STILL got the same error message.
So I had two spare drives lying around and replaced 2 of the original 3 used in the raid. I also went back to version 0.5, since that worked on my other HP MicroServer (an N40; I'm trying to do this on the N54).
The raid took forever to build, something I should have mentioned about the previous attempts: those only took maybe 12 hours, while this time it took 2 days for a 3x4TB Raid 5. I went through the GUI, selected 'mount', and STILL got the error message:
Code
Failed to mount '71427dae-94fd-4edf-8c54-26ca6309ec05'
Error #6000:
exception 'OMVException' with message 'Failed to mount '71427dae-94fd-4edf-8c54-26ca6309ec05'' in /usr/share/openmediavault/engined/module/fstab.inc:90
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/config.inc(184): OMVModuleFsTab->startService()
#1 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
#2 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc.inc(62): OMVRpcServiceAbstract->callMethod('applyChanges', Array, Array)
#4 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(770): OMVRpc::exec('Config', 'applyChanges', Array, Array)
#5 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array)
#6 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
#7 /usr/share/php/openmediavault/rpc.inc(62): OMVRpcServiceAbstract->callMethod('mount', Array, Array)
#8 /usr/sbin/omv-engined(495): OMVRpc::exec('FileSystemMgmt', 'mount', Array, Array, 1)
#9 {main}
I did a reboot and now it's stuck, wanting to go into maintenance mode.
One thing I did notice when I went back into the GUI is that the name I gave the old raid was still there... I'm wondering whether, since I didn't wipe the OS drive, remnants from the old raid are being retained? Or am I reaching?
-
When building the raid, ALWAYS wait to create the filesystem until the initialization is finished.
The mount error is definitely a new one, so it may be that your raid has already degraded again. Are you sure all your drives are fine? What does their SMART data say?
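One way to be sure the initialization is really finished (a sketch; `/dev/md127` is the array from this thread) is to watch `/proc/mdstat` until no resync/recovery line remains:

```shell
# Block until md initialization finishes. grep -s keeps the loop quiet
# on systems where /proc/mdstat does not exist.
while grep -qsE 'resync|recovery' /proc/mdstat; do
    sleep 60    # check once a minute
done
echo "No resync/recovery in progress; safe to create the filesystem."
```

`mdadm --detail /dev/md127` gives the same answer in more detail; its State line should read "clean" once the build is done.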
Greetings
David -
I did wait until the raid was done before attempting to mount it and create a filesystem. When I got the error message, that's when I noticed the label for the raid still had the old name from the previous setup, but the name was a bit garbled: the old name was UltraNAS, but this time it showed up as Ultr^d when I attempted to mount the raid.
I will check the SMART logs and post later today -
-
Did you zero the superblock of each drive before wiping and creating the new array?
-
No. Just did a quick wipe within OMV. Should I do the secure wipe or run another command against the raid drives?
-
-
-
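The commands in question aren't quoted in the thread; zeroing the old md superblocks usually looks something like the following sketch (the device names are hypothetical placeholders — substitute your actual member disks). It is shown with a dry-run guard because these commands destroy data:

```shell
# Hypothetical member disks of the old array -- adjust for your system.
DISKS="/dev/sda /dev/sdb /dev/sdc"

# Dry-run by default: only print each command. Set RUN=1 to execute.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "WOULD RUN: $*"; fi; }

# Stop the old array so its member disks are released.
run mdadm --stop /dev/md127

# Erase the old md superblock on every member, so no stale array
# metadata (like the old 'UltraNAS' label) survives into the new array.
for d in $DISKS; do
    run mdadm --zero-superblock "$d"
done

# Also clear any leftover filesystem signatures on the raw disks.
for d in $DISKS; do
    run wipefs --all "$d"
done
```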
I think we're in business. I ran the above commands and built the raid. I noticed that when I went to mount the filesystem (AFTER the raid was built), the OLD raid filesystem wasn't listed anymore and I had to create a new one; something I didn't have to do before, when it wasn't working. I wish I had remembered this, as it probably would have saved me a ton of time.
It's currently building the new filesystem. I'm pretty confident this is going to work, since I never got to this step in the previous attempts. It might be a couple of days before I can post the results, but I'll be sure to respond ASAP.
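For reference, the step OMV performs here is roughly the following (a sketch; `/dev/md127` is the array from this thread and the label is whatever you enter in the GUI). Dry-run guarded, since mkfs destroys whatever is on the device:

```shell
# Dry-run by default; set RUN=1 to actually format /dev/md127.
if [ "${RUN:-0}" = "1" ]; then
    mkfs.ext4 -L UltraNAS /dev/md127
else
    echo "WOULD RUN: mkfs.ext4 -L UltraNAS /dev/md127"
fi
```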
-
Hope it works
-
WOOOOOOOOOOO HOOOOOOOOOOOOOOOoo!!! I now have a mounted filesystem!
I thought it was going to take all day but it just finished.
Thanks for the help everyone. I'll be making a contribution over the weekend to OMV.
Oh, and I enabled SMART monitoring and all the drives are good. I wasn't checking this before, but I suppose I should, since this is hosting important stuff.
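For a quick command-line check of the same SMART data (a sketch assuming smartmontools is installed; the device names are hypothetical placeholders):

```shell
# Print the overall SMART health verdict for each member disk.
for d in /dev/sda /dev/sdb /dev/sdc; do
    echo "== SMART health for $d =="
    smartctl -H "$d" 2>/dev/null || true   # || true: keep going if a disk is missing
done
```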
-
-
Great to hear