Lost RAID 5

  • I added a new drive to my RAID 5 array. The grow process completed successfully after 12 hours, and the resize went well.
    Next I added another drive to repeat the same process. The drive was recognized and the grow started, but it
    soon threw a communication error with no details. At that point the system was frozen and unresponsive from both the CLI and the GUI.


    After a hard boot it tried to recover the journal (I don't know whether fsck was successful), and my RAID has now disappeared.




    I have disconnected the new drive that failed to grow, and the system will boot again, but the pool with 6 TB of data is missing.


    Expert advice is welcome.


    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdf[0] sdg[5] sdd[4] sdc[3] sdb[2] sda[1]
    11720301072 blocks super 1.2


    unused devices: <none>

    • Official Post

    Need this info

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Code
    Personalities : [raid6] [raid5] [raid4]
     md127 : inactive sdf[0] sdg[5] sdd[4] sdc[3] sdb[2] sda[1]
     11720301072 blocks super 1.2
    
    
     unused devices: <none>


    Code
    /dev/sda: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="81f2c8c2-297e-be0f-5775-9603ebcb88fa" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdb: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="a94506be-8b62-af4a-79db-5e8422057948" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdd: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="e7e608d9-2085-5cac-8f60-8c75a2134be9" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdc: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="03ffbb2d-5203-f34c-d217-bf5231ac331e" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sde1: UUID="61341bc8-96ea-4048-b6b6-cd2e4786133d" TYPE="ext4"
    /dev/sde5: UUID="b8330aef-aede-45bd-b288-7ef0da62d7ce" TYPE="swap"
    /dev/sdf: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="bc580f54-8994-47ad-c653-9582a35a3927" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdg: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="18ede836-ad14-2fa1-caf3-e561c5403093" LABEL="openmediavault:Pool" TYPE="linux_raid_member"


    • Official Post

    Try:


    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[abcdfg] --verbose --force
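If the forced assembly succeeds, it may help to confirm the array state from /proc/mdstat before touching the web interface. A minimal sketch (the mdstat_state helper is hypothetical, just an awk filter over /proc/mdstat-style text):

```shell
# Hypothetical helper: print the state field ("active"/"inactive") for a
# named md array, given /proc/mdstat-style text on stdin.
# On the NAS itself: mdstat_state md127 < /proc/mdstat
mdstat_state() {
  awk -v a="$1" '$1 == a { print $3 }'
}

# Demo against the mdstat output posted earlier in this thread:
printf 'md127 : inactive sdf[0] sdg[5] sdd[4] sdc[3] sdb[2] sda[1]\n' | mdstat_state md127
# prints: inactive
```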


  • Best support on the net here!




    Since I have removed the seventh drive that I was trying to add and grow, what should I do next? Adding that drive caused the entire failure.


    When should I mount the filesystem? The RAID is currently marked clean, degraded, with drive #7 missing.

    • Official Post

    You can mount it while it is degraded, but if one more drive fails, you lose everything. Personally, I would mount it and back up your data right away, then try adding the seventh drive again.
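One cautious way to do that backup, sketched below under the assumption that /srv/backup is a placeholder for a destination disk with more than 6 TB free (mounting read-only so nothing is written to the degraded array):

```shell
# Sketch only: mount the degraded array read-only and copy the data off.
# /srv/backup is a hypothetical destination; point it at a real disk.
DEV=/dev/md127
MNT=/media/pool
DST=/srv/backup

if [ -b "$DEV" ]; then
  mkdir -p "$MNT" "$DST"
  mount -o ro "$DEV" "$MNT" &&                      # read-only mount
    rsync -aHAX --info=progress2 "$MNT"/ "$DST"/ && # preserve perms/ACLs/xattrs
    umount "$MNT"
else
  echo "array device $DEV not present on this system"
fi
```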


    • Official Post

    What is the output of:


    cat /proc/mdstat
    cat /etc/fstab
    blkid


  • Code
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdf[0] sdg[5] sdd[4] sdc[3] sdb[2] sda[1]
          11720301072 blocks super 1.2
    
    
    unused devices: <none>



    Code
    root@openmediavault:~# blkid
    /dev/sda: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="81f2c8c2-297e-be0f-5775-9603ebcb88fa" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdb: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="a94506be-8b62-af4a-79db-5e8422057948" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdd: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="e7e608d9-2085-5cac-8f60-8c75a2134be9" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdc: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="03ffbb2d-5203-f34c-d217-bf5231ac331e" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sde1: UUID="61341bc8-96ea-4048-b6b6-cd2e4786133d" TYPE="ext4"
    /dev/sde5: UUID="b8330aef-aede-45bd-b288-7ef0da62d7ce" TYPE="swap"
    /dev/sdf: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="bc580f54-8994-47ad-c653-9582a35a3927" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    /dev/sdg: UUID="161d27f1-6d6b-5b9b-925d-630ad00256e6" UUID_SUB="18ede836-ad14-2fa1-caf3-e561c5403093" LABEL="openmediavault:Pool" TYPE="linux_raid_member"
    • Official Post

    Your array isn't running again; that is why you can't mount it. Did you let it finish rebuilding last time? You can keep running cat /proc/mdstat to watch the status. Don't do anything until it has finished.
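A simple way to wait for that, assuming the rebuild shows up as a recovery/resync/reshape line in /proc/mdstat (a live view is also possible with watch -n 30 cat /proc/mdstat):

```shell
# Poll /proc/mdstat until no recovery/resync/reshape is in progress.
state=rebuilding
while grep -q 'recovery\|resync\|reshape' /proc/mdstat 2>/dev/null; do
  sleep 30   # still rebuilding; check again in 30 seconds
done
state=idle
echo "no rebuild in progress (state=$state)"
```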


  • Is the rebuild finished when the RAID shows up in the GUI? After running the CLI commands, they finish by saying that 6 of 7 are completed, and then the RAID is listed in the GUI. It fails when I try to mount it, and I have to reboot and start over. Is it possible to make the system forget the failed attempt to add drive 7?

    • Official Post

    Re-run the stop and assemble commands I gave you a few posts ago. Don't do anything in the web interface until the output of cat /proc/mdstat shows that it is OK.


  • I waited a long time after this and it still fails to mount.


    • Official Post

    What is the output of:


    mkdir -p /media/test
    mount /dev/md127 /media/test


  • root@openmediavault:~# mkdir -p /media/test
    root@openmediavault:~# mount /dev/md127 /media/test
    mount: you must specify the filesystem type
    root@openmediavault:~#

    • Official Post

    Assuming it is ext4: mount -t ext4 /dev/md127 /media/test
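Before forcing a type, it may be worth probing what signature (if any) is actually visible on the assembled array; all of the following are read-only checks:

```shell
# Read-only probes of the array device; none of these write anything.
DEV=/dev/md127
if [ -b "$DEV" ]; then
  blkid -p "$DEV"    # low-level superblock probe; empty output = no known signature
  wipefs -n "$DEV"   # list all signatures found (-n / --no-act: report only)
  file -s "$DEV"     # libmagic's opinion on the raw device contents
else
  echo "no $DEV on this system"
fi
```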


  • Code
    root@openmediavault:~# mount -t ext4 /dev/md127 /media/test
    mount: wrong fs type, bad option, bad superblock on /dev/md127,
           missing codepage or helper program, or other error
           (could this be the IDE device where you in fact use
           ide-scsi so that sr0 or sda or so is needed?)
           In some cases useful info is found in syslog - try
           dmesg | tail  or so
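The error message itself suggests checking the kernel log, and after an interrupted grow it is also worth comparing the md metadata on the individual members, since event counts or reshape position can disagree after an aborted reshape. A hedged sketch of read-only diagnostics, using the device names from the thread:

```shell
# The ext4 driver usually logs the real reason a mount failed:
dmesg 2>/dev/null | tail -n 20

# Compare per-member md metadata; mismatched Events or a stuck reshape
# position would explain a wrong-geometry assembly.
for d in /dev/sd[abcdfg]; do
  [ -b "$d" ] && mdadm --examine "$d" | grep -E 'Events|Reshape|Array Size|Raid Level'
done
echo "member scan done"
```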
    • Official Post

    I'm out of ideas other than using photorec to try to recover data.

