"Failed to mount" error on RAID 5

    • Official Post

    Not sure you can. Did you answer Y to the ignore error question?

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Here's how far I got before I wasn't sure what to select at the last prompt:


    • Official Post

    That is starting to sound bad... I would abort and then:


    Code
    # Make sure the standard ext4 feature set is enabled on the filesystem
    tune2fs -O extents,uninit_bg,dir_index,has_journal /dev/md127
    # Force a full check (-f) and auto-fix safe problems in preen mode (-p)
    e2fsck -pf /dev/md127


    • Official Post

    Try it without the -p then (preen mode bails out on anything it can't fix automatically): fsck.ext4 -f /dev/md127


  • Made some progress, but I'm not sure if it's any better:


    • Official Post

    Maybe a newer version of fsck/the e2fsprogs tools would help. Try booting the system from SystemRescueCd and running the fsck command again.
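
    Something along these lines from the SystemRescueCd shell should do it (a sketch; it assumes the array assembles as /dev/md127, so confirm the name in /proc/mdstat first):

    Code
    # Assemble any md arrays found on the attached disks
    mdadm --assemble --scan
    # Confirm the array's device name and that it is up
    cat /proc/mdstat
    # Then re-run the forced check on the assembled array
    fsck.ext4 -f /dev/md127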


  • I ran SystemRescue and got a bunch of errors. I tried to force-rewrite them, and after about 50 I gave up.
    I think at this point I should just wipe the drives and start over.


    I appreciate everyone taking the time to troubleshoot this for me. It's the reason I keep using OMV.

    • Official Post

    Sorry it didn't work. Something must have been too corrupt.


  • Well I guess I can't win...


    I started over with the same 3 drives and OS hard drive (wiped the raid drives) and STILL got the same error message.


    So, I had two spare drives lying around and replaced 2 of the original 3 used in the raid. I also went back to version .5 since that worked on my other HP microserver (an N40; I'm trying to do this on the N54).


    The raid took forever to build, something I should have mentioned about the previous attempts: those only took maybe 12 hours, while this time it took 2 days for a 3x4TB RAID 5. I went through the GUI, selected 'mount', and STILL got the error message:

    Code
    Failed to mount '71427dae-94fd-4edf-8c54-26ca6309ec05'
    Error #6000: exception 'OMVException' with message 'Failed to mount '71427dae-94fd-4edf-8c54-26ca6309ec05'' in /usr/share/openmediavault/engined/module/fstab.inc:90
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/config.inc(184): OMVModuleFsTab->startService()
    #1 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #2 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc.inc(62): OMVRpcServiceAbstract->callMethod('applyChanges', Array, Array)
    #4 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(770): OMVRpc::exec('Config', 'applyChanges', Array, Array)
    #5 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array)
    #6 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #7 /usr/share/php/openmediavault/rpc.inc(62): OMVRpcServiceAbstract->callMethod('mount', Array, Array)
    #8 /usr/sbin/omv-engined(495): OMVRpc::exec('FileSystemMgmt', 'mount', Array, Array, 1)
    #9 {main}


    I did a reboot and now it's stuck, wanting to go into maintenance mode.


    One thing I did notice when I went back into the GUI is that the name I gave the old raid was still there... I'm wondering whether, since I didn't wipe the OS drive, remnants of the old raid are being retained. Or am I reaching?

  • When building the raid, ALWAYS wait until the initialization is finished before creating the filesystem.


    The mount error is definitely a new one, so it may be that your raid has already degraded again. Are you sure that all your drives are fine? What does their SMART data say?
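
    A quick way to check whether the array is degraded (a sketch; assuming the array is /dev/md127, look at /proc/mdstat for the actual name):

    Code
    # Overall array state and any resync/rebuild progress
    cat /proc/mdstat
    # Detailed state, including failed or missing member drives
    mdadm --detail /dev/md127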


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • I did wait until the raid was done before creating a filesystem and attempting to mount it. When I got the error message, that's when I noticed the label for the raid had the old name from the previous setup, but the name was a bit garbled: the old name was UltraNAS, but this time it showed up as Ultr^d when I attempted to mount the raid.
    I will check the SMART logs and post them later today.
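
    In case anyone else needs it, this is roughly how I'll check (hypothetical device names; repeat for each member drive):

    Code
    # One-line health verdict
    smartctl -H /dev/sda
    # Full report, including reallocated and pending sector counts
    smartctl -a /dev/sda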

    • Official Post

    Did you zero the superblock of each drive before wiping and creating the new array?
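
    If you're not sure, you can inspect what's still on the disks before wiping (a sketch; substitute your actual device names):

    Code
    # Show any leftover md RAID superblock on a drive
    mdadm --examine /dev/sda
    # List filesystem/RAID signatures (read-only when run without -a)
    wipefs /dev/sda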


    • Official Post

    This is what I would do for each drive:


    Code
    # Erase any leftover md RAID metadata on the drive
    mdadm --zero-superblock /dev/sdX
    # Zero the start of the drive to clear old partition tables and filesystem signatures
    dd if=/dev/zero of=/dev/sdX bs=512 count=100000
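
    (The dd overwrites the first 512 × 100000 bytes, roughly 50 MB, so stale partition tables and filesystem labels, like the old 'UltraNAS' one, go away too.)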


  • I think we're in business. I ran the above commands and rebuilt the raid. I noticed that when I went to mount the filesystem (AFTER the raid was built), the OLD raid filesystem wasn't listed anymore and I had to create a new one, something I didn't have to do before when it wasn't working. I wish I had remembered this, as it probably would have saved me a ton of time.


    It's currently building the new filesystem. I'm pretty confident that this is going to work since I didn't get a chance to do this step in the previous attempts. It might be a couple of days before I can post the results, but I'll be sure to respond ASAP.

    • Official Post

    Hope it works :)


  • WOOOOOOOOOOO HOOOOOOOOOOOOOOOoo!!! I now have a mounted filesystem!


    I thought it was going to take all day but it just finished.


    Thanks for the help everyone. I'll be making a contribution over the weekend to OMV.


    Oh, and I enabled SMART monitoring and all the drives are good. I wasn't checking this before, but I suppose I should, since this is hosting important stuff ;)

    • Official Post

    Great to hear :)

