Inactive-Array RAID5

  • Hello guys, I have a serious problem with a RAID5 built from 3x4TB HDDs, full of data, so I need to fix it without losing anything. Please help me with this hard work. Some details:


    Code
    cat /proc/mdstat: 
    Personalities : 
    md0 : inactive sdd[0](S) sdc[1](S)
          7813775024 blocks super 1.2
    
    unused devices: <none>


    Code
    blkid:
    /dev/sda1: UUID="36b09d97-b3d9-468f-874d-cf3eced0e1da" TYPE="ext4" PARTUUID="0009e9b4-01" 
    /dev/sda5: UUID="ab4934a4-850d-45be-81aa-94bd123613b2" TYPE="swap" PARTUUID="0009e9b4-05" 
    /dev/sdd: UUID="c6178cc8-262f-56df-e588-2c97e7aa2e6c" UUID_SUB="4f3aba4b-d752-8496-6dab-e164f4b6d617" LABEL="PRODUCCION:Produccion" TYPE="linux_raid_member" 
    /dev/sdc: UUID="c6178cc8-262f-56df-e588-2c97e7aa2e6c" UUID_SUB="bdc0334a-063d-8f1e-b3f0-519c683d01b1" LABEL="PRODUCCION:Produccion" TYPE="linux_raid_member" 
    /dev/sdb1: PARTLABEL="LDM metadata partition" PARTUUID="bfad6539-ecbe-11e3-b8f5-0010c6b06aae" 
    /dev/sdb2: PARTLABEL="Microsoft reserved partition" PARTUUID="986e2150-2926-4fcb-87bb-a10f7bbb93d2" 
    /dev/sdb3: PARTLABEL="LDM data partition" PARTUUID="bfad653c-ecbe-11e3-b8f5-0010c6b06aae"
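
    For reference, mdadm can dump the md superblock of each member read-only (device names taken from the blkid output above; note that /dev/sdb carries no linux_raid_member signature at all):

    Code
    # read-only: print the md superblock each raid member carries
    mdadm --examine /dev/sdc /dev/sdd
    # expected to fail on sdb, since blkid sees no raid signature there
    mdadm --examine /dev/sdb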



    Thank you in advance. :/

    Lenovo Thinkcentre Tower M92p + HDD 120GB OS + 8 TB RAID5 (3x4TB HDD WD&Seagate)
    Debian Wheezy 7.8 64 bits + OMV 1.12 kralizec + 3.16 backport kernel


    Radxa Rock + NAND 8 GB OS + 1 TB HD Western Digital
    Debian Wheezy 7 ARM 32 bits + OMV 1.12 kralizec

    • Official Post

    mdadm --stop /dev/md0
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
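
    If the assemble succeeds, read-only checks like these should confirm the array state (a sketch, not output from this system):

    Code
    # overall state and any sync/rebuild progress
    cat /proc/mdstat
    # per-member detail: slots, events, clean/degraded state
    mdadm --detail /dev/md0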

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks @ryecoaaron, as always, for your support. I did it and it shows this:


    Code
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted


    • Official Post

    Try:


    mdadm --zero-superblock /dev/sdb (it will probably say there is no superblock, but just in case)
    dd if=/dev/zero of=/dev/sdb bs=512 count=100000
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
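
    After the dd, a read-only look at the first sectors should show nothing but zeroes (hexdump is just one way to view it):

    Code
    # dump the first 4 KiB; after the wipe this should be all zero bytes
    hexdump -C -n 4096 /dev/sdb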


  • No results, same message:


    Code
    mdadm --zero-superblock /dev/sdb
    mdadm: Unrecognised md component device - /dev/sdb


    Code
    dd if=/dev/zero of=/dev/sdb bs=512 count=100000
    100000+0 records in
    100000+0 records out
    51200000 bytes (51 MB) copied, 1.81299 s, 28.2 MB/s


    Code
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: no recogniseable superblock on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted


    Is the sdb disk still alive?


    • Official Post

    It shows up in fdisk and lets you dd to it, so it should be relatively OK.
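
    A more direct read-only health check, assuming the smartmontools package is installed:

    Code
    # quick pass/fail verdict from the drive's own SMART data
    smartctl -H /dev/sdb
    # full attribute dump; watch reallocated and pending sector counts
    smartctl -a /dev/sdb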


    Try:


    mdadm --assemble /dev/md0 /dev/sd[cdb] --verbose --force


  • Same result:

    Code
    mdadm --assemble /dev/md0 /dev/sd[cdb] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: no recogniseable superblock on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted


  • @ryecoaaron I was able to start the array with:


    Code
    mdadm --assemble /dev/md0 /dev/sd[cd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 0.
    mdadm: Marking array /dev/md0 as 'clean'
    mdadm: added /dev/sdd to /dev/md0 as 0
    mdadm: no uptodate device for slot 4 of /dev/md0
    mdadm: added /dev/sdc to /dev/md0 as 1
    mdadm: /dev/md0 has been started with 2 drives (out of 3)


    I think /dev/sdb has a filesystem problem; this drive is the first of the array (I suppose). Is it risky to keep working like this? What would be more advisable to do?


    In Array Management the Repair button is now enabled. Can it fix the /dev/sdb drive?
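
    Before deciding anything, the degraded state can be confirmed read-only, e.g.:

    Code
    # should report "State : clean, degraded" with 2 of 3 devices active
    mdadm --detail /dev/md0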

    • Official Post

    RAID doesn't use filesystems. The filesystems sit on top of the RAID, so that isn't your problem. The dd command I gave you should have wiped the drive. If it still isn't working after that, I'm not sure I would trust it.
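
    To illustrate the layering: the filesystem lives on the array device /dev/md0, not on the member disks, so it can be checked there (assuming ext4 on the array, which this thread doesn't actually show):

    Code
    # the filesystem signature sits on the array device, not on sdb/sdc/sdd
    blkid /dev/md0
    # read-only check: -n answers 'no' to every repair prompt (unmount first)
    fsck.ext4 -n /dev/md0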


  • The RAID is still working now, with 2 drives. So /dev/sdb is clean now after the dd command? What do I do with this drive now? I'd like to add it back to the array to have protection again. It is a new drive: the 2 WDC drives are new and the ST4000 drive is 3 years old. Can I do that? How?


    • Official Post

    From the web interface, click the grow button and add sdb.
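
    On the command line the equivalent step would presumably be a plain add (what the button does internally is an assumption here):

    Code
    # attach the wiped disk; the kernel rebuilds parity onto it
    mdadm --add /dev/md0 /dev/sdb
    # follow the recovery progress
    cat /proc/mdstat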


  • I don't have the Grow button enabled, but Rescue is. If I push it, will I lose data? See the attachment, please.

    • Official Post

    Rescue should work. If you are really worried about it, back up the data from the degraded array before doing this. Remember, RAID isn't backup...
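
    Once the rebuild starts, progress can be followed like this (the 5-second interval is arbitrary):

    Code
    # refresh the rebuild/recovery status every 5 seconds
    watch -n 5 cat /proc/mdstat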

