RAID 5 array disappeared

  • Hi everyone!


    Last night I shut down my computer, and after powering it on again my RAID 5 array had disappeared; a RAID 0 array with two drives (sda and sdb) was there instead, and sdc was missing!

    I tried stopping the new RAID 0 array and recreating the RAID 5 array, with no luck: sdc gave a "no superblock" error. After that I tried reassembling with the other two drives, sda and sdb.

    sda looked fine, but for sdb it said the partition would be erased if I continued, so I stopped.

    I'm using 3x WD Blue 1TB drives (2x WD10JPVX, 1x WD10JPVT).


    Quote

    mdadm -A /dev/md127 /dev/sda /dev/sdb /dev/sdc

    mdadm: Cannot read superblock on /dev/sdc

    mdadm: no RAID superblock on /dev/sdc

    mdadm: /dev/sdc has no superblock - assembly aborted

    /dev/sdc is missing, but why?





    Here is some information.


    Quote

    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md127 : inactive sdb[3](S) sda[4](S)

    1953523120 blocks super 1.2

    unused devices: <none>


    /dev/sdc is missing here, but on the OMV create-array page I can select it (and only this drive), even if I stop the RAID 0 array!



    Quote

    mdadm --detail --scan --verbose

    INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=openmediavault:NAS UUID=6d519931:e4e15e4a:97492251:72262763

    devices=/dev/sda,/dev/sdb




    Can someone help me? I can't do much more on my own.

    I'm afraid the array is dead; yesterday it was working perfectly. ;(

  • I'm afraid the array is dead; yesterday it was working perfectly.

    First things first:

    Don't use QUOTE boxes (symbol ")

    Use CODE boxes instead, please. (symbol </> on the banner)


    Second, the details of the hardware.

    What device?

    How are the drives connected?


    As you saw, your drive sdc is missing from everywhere.

    If blkid doesn't see it, then the OS isn't seeing it.


    Power down the server properly and check the SATA connections and anything that might look out of place.


    If everything looks correct, power up and check again:

    Code
    blkid
    lsblk

  • I got it.


    The hardware: it's a computer I use as a home server:

    MSI Motherboard with integrated Sandy Bridge Celeron CPU.

    4GB RAM.

    Pico PSU + Power Brick.

    Fractal Design Case.

    32GB ADATA SSD for OMV.

    3x WD Blue 2.5" HDD.

    Everything is connected with SATA cables.


    The strange thing is that sdc is not missing from everywhere.

    In the OMV GUI I can see it in the Drives list, SMART, and RAID sections.

    In the CLI, fdisk sees it too. When I try to recreate the array with sdc, it says it doesn't have a superblock, not that the drive doesn't exist.

    I checked the SMART attributes for all four drives and they look fine.


    I replaced the SATA cable for sdc, and it made no difference.

    I looked into the system log; sdc seems faulty:

    Code
    -Buffer I/O error on dev sdc, logical block 1, async page read
    -blk_update_request: I/O error, dev sdc, sector 10 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
    -Unrecovered read error - auto reallocate failed
    -[sdc] tag#14 Sense Key : Medium Error [current]
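
    Those medium errors should also show up in the drive's SMART data; a quick CLI cross-check, assuming smartmontools is installed:

    Code
    # overall health plus attributes like Reallocated_Sector_Ct and Current_Pending_Sector
    smartctl -a /dev/sdc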


  • In the CLI, fdisk sees it too.

    Ok, saw it better this time. My bad.

    And lsblk also shows it.

    When I try to recreate the array with sdc, it says it doesn't have a superblock, not that the drive doesn't exist.

    I don't have much knowledge about mdadm, but won't a recreate destroy the data on the drives?


    Since you have 2x properly working drives, you can try to mount it clean, degraded, to at least have access to the DATA.


    Make backup of all that you can.
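
    Once it assembles degraded, a minimal read-only backup sketch (the mount point and destination here are placeholders, not from this thread):

    Code
    # mount the degraded array read-only and copy everything off
    mount -o ro /dev/md127 /mnt
    rsync -a /mnt/ /path/to/backup/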


    I'll tag geaves since he's far more of an expert on this than me.

  • Quote

    Since you have 2x properly working drives, you can try to mount it clean, degraded, to at least have access to the DATA.


    Make backup of all that you can.


    That's the plan, but I can't mount it because it says:

    Code
    mdadm: partition table exists on /dev/sda but will be lost or
           meaningless after creating array
    mdadm: /dev/sdb appears to be part of a raid array:
           level=raid5 devices=3 ctime=Fri Jul 19 15:55:43 2013
    Continue creating array?

    I'm afraid it won't work because of this warning.

    The strange thing is that everything seems to be OK with the two drives here.

    The event count and checksum are OK, and both are active.

    There shouldn't be a partition on drive sda, I think.
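
    For reference, the event counts, checksums, and device states come from the member superblocks; a minimal way to read them, assuming the members are the raw devices as in this thread:

    Code
    mdadm --examine /dev/sda /dev/sdb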

  • That's the plan, but I can't mount it because it says:

    You are trying to create it.

    What I said is to mount it clean, degraded.


    If you search some posts from geaves about RAID, you'll find the proper command to use.

    • Official post

    From the output, /dev/sdc appears to be a failing drive, so attempting to assemble the array with that drive is going to throw errors.


    The output in #1 suggests the array is inactive:


    Code
    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[ab]


    This should reassemble the array in a clean/degraded state. However, if you have run a create option from the CLI as per #5, then the above commands will not work.
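
    If the forced assemble succeeds, the clean/degraded state can be confirmed before mounting (standard checks, not part of the instructions above):

    Code
    cat /proc/mdstat
    mdadm --detail /dev/md0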


  • It worked, thank you very much! :)

    I made a backup of everything.

    I took out the faulty drive and tested it; it's dead.

    I replaced it with an identical drive, but I can't add it in the OMV GUI.

    The computer sees it, and I enabled SMART monitoring for it.

    I deleted the partition from it in Windows before installing it.

    I created an ext4 partition on it to see if it works; it worked perfectly.

    I deleted the ext4 partition from it, but the GUI can't see it in the RAID recovery menu.

    How can I add it to the array from the CLI? (See the sketch after the blkid output below.)


    Code
     blkid
    /dev/sdb: UUID="6d519931-e4e1-5e4a-9749-225172262763" UUID_SUB="6546250e-b620-0edd-2309-edb68ae68d15" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sda: UUID="6d519931-e4e1-5e4a-9749-225172262763" UUID_SUB="b033aeca-01f9-abee-5c3d-1a181369ff61" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sdd1: UUID="8A9C-20FD" TYPE="vfat" PARTUUID="e99306c3-811f-4cb7-9342-e5d08cfae0fd"
    /dev/sdd2: UUID="22c5a0cf-3c6b-4b2e-b5b5-4c2081772ada" TYPE="ext4" PARTUUID="299559f7-ac8f-4a01-a3d6-17f9c89937b8"
    /dev/sdd3: UUID="e6b3f336-fb85-4116-989a-7c10d4d85f04" TYPE="swap" PARTUUID="7b26ca28-7fae-4c21-b65b-4b130deb2765"
    /dev/md127: LABEL="NAS" UUID="9b314aea-9d11-42fa-aad5-a5dcb001a857" TYPE="ext4"
    /dev/sdc: PTUUID="ab4c599a" PTTYPE="dos"
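
    A minimal CLI sketch for this, assuming the replacement really is /dev/sdc (the leftover DOS partition table shown by blkid above is what keeps the GUI from offering it) and the array is /dev/md127; double-check the device name first, since wipefs is destructive:

    Code
    # remove the stale partition-table signature from the new drive
    wipefs -a /dev/sdc
    # add it to the degraded array and watch the rebuild
    mdadm --add /dev/md127 /dev/sdc
    cat /proc/mdstat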
  • If the drive was used for anything else, you will probably need to wipe it first before it will be available in the RAID UI. (Storage > Disks Menu, little eraser icon/button)

  • Quote

    If the drive was used for anything else, you will probably need to wipe it first before it will be available in the RAID UI. (Storage > Disks Menu, little eraser icon/button)


    That's it.

    Thank you very much. :)

    I thought it would remove it from the list, silly me. :D

    I always used new drives when replacing old ones and never had this problem.


    It is recovering the array right now. :thumbup:
