Crash during grow of a RAID 5

  • Hello,

    Yesterday I added a disk to my RAID 5, but during the rebuild... a disk failed...

    So my RAID is down, with 2 of the 5 disks connected but not recognized :(

    Setup: OMV 0.3
    4 x 1 TB Seagate
    1 x 1 TB Samsung (the new disk)

    How can I recover my RAID?

    Thanks for your help, and excuse my poor English.

  • I don't know if this could help:



    root@openmediavault:~# mdadm --misc --detail /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
    Raid Devices : 5
    Total Devices : 3
    Persistence : Superblock is persistent


    Update Time : Thu Nov 1 20:04:01 2012
    State : active, FAILED, Not Started
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Delta Devices : 1, (4->5)


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Events : 32259


    Number Major Minor RaidDevice State
    0 8 64 0 active sync /dev/sde
    1 8 80 1 active sync /dev/sdf
    2 0 0 2 removed
    3 8 16 3 active sync /dev/sdb
    4 0 0 4 removed
    root@openmediavault:~# mdadm -E /dev/sda
    /dev/sda:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x4
    Array UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Name : openmediavault:RAID (local to host openmediavault)
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Raid Devices : 5


    Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
    Array Size : 7814090752 (3726.05 GiB 4000.81 GB)
    Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : 262b311e:79c82970:2cdfa0da:c71dc968


    Reshape pos'n : 9396224 (8.96 GiB 9.62 GB)
    Delta Devices : 1 (4->5)


    Update Time : Thu Nov 1 11:00:14 2012
    Checksum : 9bfd218b - correct
    Events : 13680


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 4
    Array State : AAAAA ('A' == active, '.' == missing)
    root@openmediavault:~# mdadm -E /dev/sdc
    /dev/sdc:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x4
    Array UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Name : openmediavault:RAID (local to host openmediavault)
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Raid Devices : 5


    Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
    Array Size : 7814090752 (3726.05 GiB 4000.81 GB)
    Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : 3cea8b43:733b9909:3aa48527:457b02ee


    Reshape pos'n : 1900666880 (1812.62 GiB 1946.28 GB)
    Delta Devices : 1 (4->5)


    Update Time : Thu Nov 1 20:03:39 2012
    Checksum : e5c66c5 - correct
    Events : 32256


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 2
    Array State : AAAA. ('A' == active, '.' == missing)
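
    Comparing the two superblocks above, the Events counter and Update Time suggest that /dev/sda (Events 13680, last updated 11:00) fell out of the array well before /dev/sdc (Events 32256). A quick way to pull just those fields from every candidate member (a sketch only; the /dev/sd[a-g] glob is an example and should be limited to the actual array members):

    # dump the key superblock fields for each candidate member
    for d in /dev/sd[a-g]; do
        echo "== $d =="
        mdadm -E "$d" | grep -E 'Events|Update Time|Device Role|Array State'
    done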

    • Official Post

    RAID 5 is lost with two disks missing. Sorry, but you most likely lost everything.

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • No...

    I tried an --assemble --force without the new disk (sketch below),

    and the RAID came up... I could mount it and access the data... Then one of the disks failed, the rebuild process stopped at 46.8%... and the SMART monitor alerted me about a dead disk.

    So I added the new disk as a spare drive, but after a --fail the spare doesn't take over....

    I don't know how to exclude the dead disk without risk.

    Another question: I have a removed disk (the one that was newly added and then deleted); how do I remove it?

    Thanks
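
    For reference, a forced assembly like the one described above would look roughly like this (only a sketch: the device names are examples taken from the outputs earlier in the thread and must be adapted, and --force should only be used on members you trust):

    # stop the half-assembled array first
    mdadm --stop /dev/md127
    # force-assemble from the surviving members only (example device names)
    mdadm --assemble --force /dev/md127 /dev/sde /dev/sdf /dev/sdb /dev/sdc
    # optionally add the new disk back as a spare afterwards (device name is an example)
    mdadm --add /dev/md127 /dev/sda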

    • Official Post

    I misunderstood. I thought two failed at the same time.

    To remove the dead disk, you need to run:

    mdadm --remove /dev/md127 /dev/sd#

    What does your cat /proc/mdstat look like now?
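
    In /proc/mdstat the dead member is flagged with (F), so a quick check like this identifies which /dev/sd# to remove (a sketch; md127 is the array name used in this thread):

    # faulty members are marked (F) in the device list
    cat /proc/mdstat
    # or pull the per-device state straight from mdadm
    mdadm --detail /dev/md127 | grep -E 'faulty|removed|spare'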


  • root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active (read-only) raid5 sdf[0] sdb[5](S) sdc[3] sdd[2] sdg[1]
    2930284032 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]


    unused devices: <none>
    root@openmediavault:~# mdadm -D /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Array Size : 2930284032 (2794.54 GiB 3000.61 GB)
    Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
    Raid Devices : 5
    Total Devices : 5
    Persistence : Superblock is persistent


    Update Time : Fri Nov 2 19:48:04 2012
    State : clean, degraded
    Active Devices : 4
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 1


    Layout : left-symmetric
    Chunk Size : 512K


    Delta Devices : 1, (4->5)


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Events : 33138


    Number Major Minor RaidDevice State
    0 8 80 0 active sync /dev/sdf
    1 8 96 1 active sync /dev/sdg
    2 8 48 2 active sync /dev/sdd
    3 8 32 3 active sync /dev/sdc
    4 0 0 4 removed


    5 8 16 - spare /dev/sdb




    In fact /dev/sdd is dead... if I mount the partition, it's faulty...

    But I don't understand why the spare disk doesn't become active :/

    • Official Post

    That helps a lot.


    Do this:


    mdadm --fail /dev/md127 /dev/sdd
    mdadm --remove /dev/md127 /dev/sdd


    mdadm --grow /dev/md127 --raid-devices=4


    The first two steps remove the dead drive. The last step makes the spare an active drive.
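
    Once the faulty drive is out, the rebuild/reshape progress can be followed like this (a minimal sketch):

    # refresh the rebuild status every few seconds
    watch -n 5 cat /proc/mdstat
    # or dump the detailed array state once
    mdadm --detail /dev/md127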


  • mdadm: set /dev/sdd faulty in /dev/md127
    root@openmediavault:~# mdadm --remove /dev/md127 /dev/sdd
    mdadm: hot remove failed for /dev/sdd: Device or resource busy
    root@openmediavault:~# mdadm -D /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Array Size : 2930284032 (2794.54 GiB 3000.61 GB)
    Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
    Raid Devices : 5
    Total Devices : 5
    Persistence : Superblock is persistent


    Update Time : Fri Nov 2 20:06:51 2012
    State : clean, FAILED, recovering
    Active Devices : 3
    Working Devices : 4
    Failed Devices : 1
    Spare Devices : 1


    Layout : left-symmetric
    Chunk Size : 512K


    Reshape Status : 48% complete
    Delta Devices : 1, (4->5)


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Events : 33146


    Number Major Minor RaidDevice State
    0 8 80 0 active sync /dev/sdf
    1 8 96 1 active sync /dev/sdg
    2 8 48 2 faulty spare rebuilding /dev/sdd
    3 8 32 3 active sync /dev/sdc
    4 0 0 4 removed


    5 8 16 - spare /dev/sdb


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdf[0] sdb[5](S) sdc[3] sdd[2](F) sdg[1]
    2930284032 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [UU_U_]
    [=========>...........] reshape = 48.6% (475260640/976761344) finish=8011.5min speed=1042K/sec


    unused devices: <none>


    :(

    • Official Post

    Is the drive mounted? If so, unmount it. If not, try booting a live CD (like systemrescuecd) to run these commands.
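
    A quick way to check whether anything on the array is still mounted, and to unmount it (a sketch; the mount point is whatever OMV assigned to the array):

    # show where /dev/md127 is mounted, if anywhere
    findmnt /dev/md127
    # unmount it before manipulating the array
    umount /dev/md127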


    • Official Post

    Actually it looks like it is rebuilding although slowly. Maybe you can't remove the faulty drive until it is done.


  • Yes, but... I've already tried this, and the speed dropped to... 10 KB/s... estimated time: 300,000 minutes...

    Currently:


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdf[0] sdb[5](S) sdc[3] sdd[2](F) sdg[1]
    2930284032 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [UU_U_]
    [=========>...........] reshape = 48.6% (475260640/976761344) finish=85045.9min speed=98K/sec


    unused devices: <none>


    The disk has dead sectors... it's blocked at 48.6% :(

    • Official Post

    echo 50000 > /proc/sys/dev/raid/speed_limit_min


    might speed it up if the limit isn't already set there.


    Did you try unplugging the dead drive?
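
    For reference, the matching kernel knobs can be checked and raised like this (a sketch; 50000 KB/s is just an example value):

    # current minimum and maximum rebuild speed, in KB/s per device
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
    # raise the floor so md spends more bandwidth on the rebuild
    echo 50000 > /proc/sys/dev/raid/speed_limit_min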


    • Official Post

    OMV lists the serial numbers. Most drives have the serial numbers on them. So, you should be able to find it that way.
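
    From the shell, the serial of a drive can also be read directly to match it against the physical label (a sketch; smartctl needs the smartmontools package, the SERIAL column needs a reasonably recent lsblk, and /dev/sdd was the suspect drive earlier in this thread):

    # serial numbers of all block devices at once
    lsblk -o NAME,SIZE,SERIAL
    # or query a single suspect drive
    smartctl -i /dev/sdd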


    • Official Post

    Cooking the drive will not help :)


  • root@openmediavault:~# mdadm -D /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jun 2 07:59:38 2012
    Raid Level : raid5
    Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
    Raid Devices : 5
    Total Devices : 4
    Persistence : Superblock is persistent


    Update Time : Fri Nov 2 21:45:53 2012
    State : active, FAILED, Not Started
    Active Devices : 3
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 1


    Layout : left-symmetric
    Chunk Size : 512K


    Delta Devices : 1, (4->5)


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 04c071c4:310aad7a:a9c332e4:c9e7b1d3
    Events : 33154


    Number Major Minor RaidDevice State
    0 8 64 0 active sync /dev/sde
    1 8 80 1 active sync /dev/sdf
    2 0 0 2 removed
    3 8 32 3 active sync /dev/sdc
    4 0 0 4 removed


    5 8 0 - spare /dev/sda


    root@openmediavault:~# mdadm --grow /dev/md127 --raid-devices=4
    mdadm: /dev/md127: Cannot get array details from sysfs
