[RESOLVED] Degraded RAID5

  • Guys, I got the following emails from my OMV box:



    I have a RAID5 with 5 disks. In the UI, the state is "clean, degraded". Does that mean a drive is bad?


    In the "Physical Disks" area, OMV is showing serial numbers for all disks, except one (/dev/sdf). Could that be the drive with the problem?


    Anything else I can do to figure out what the problem is? One more thing: once the drive is identified, what are the steps to replace it in OMV?


    EDIT: I am not using hardware RAID, just OMV (software) RAID. From reading about it carefully, it does seem the problem is /dev/sdf. What are the steps to follow when I get the replacement drive?


    EDIT: I rebooted OMV, and now /dev/sdf is not shown anywhere in "Physical Disks". The RAID5 is still "clean, degraded".
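    For anyone debugging the same thing: the device-count field in /proc/mdstat already tells you whether a member is gone. A minimal sketch (POSIX sh), using a hypothetical sample value -- on the live box you would read the real field from `cat /proc/mdstat` or `mdadm --detail /dev/md0`, and then run `smartctl -a` against the suspect disk to confirm it is failing:

    Code

    ```shell
    # "[5/4]" means the array expects 5 devices but only 4 are active.
    # The field value here is a sample for illustration, not live output.
    field='[5/4]'
    want=${field#[}; want=${want%%/*}   # devices the array expects -> 5
    have=${field##*/}; have=${have%]}   # devices currently active  -> 4
    if [ "$have" -lt "$want" ]; then echo degraded; else echo healthy; fi
    ```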

  • Here is the mdadm output:


  • Quote from "ryecoaaron"

    You are definitely missing a drive according to mdadm. I am guessing /dev/sdf was the fifth drive and it failed?


    Right, so do I just connect the new drive where the failed one was and start the system? I was wondering if I have to do any admin tasks to have the bad drive replaced.

  • Quote from "bigcat"

    Right, so do I just connect the new drive where the failed one was and start the system? I was wondering if I have to do any admin tasks to have the bad drive replaced.


    Well, I bought a new drive and put it in; it shows up in "Physical Disks" now as /dev/sdf. However, the RAID5 still shows as degraded with only 4 disks and doesn't list /dev/sdf. It looks like something has to be done to add the new drive (replacing the old one) and rebuild the array. I'd appreciate any help.
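    For reference, the usual mdadm replacement sequence looks like the sketch below. It is a dry run that only prints the commands for review (the echo is deliberate); the array and disk names are the ones from this thread, so adjust them and run the printed commands as root on the live box. Since the old disk vanished entirely after the reboot, `--remove detached` clears its stale slot before the new disk is added:

    Code

    ```shell
    array=/dev/md0
    newdisk=/dev/sdf
    cmd_remove="mdadm --manage $array --remove detached"   # drop the vanished old member
    cmd_add="mdadm --manage $array --add $newdisk"         # add replacement; rebuild starts
    echo "$cmd_remove"
    echo "$cmd_add"
    ```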

  • Quote from "bigcat"


    Well, I bought a new drive and put it in; it shows up in "Physical Disks" now as /dev/sdf. However, the RAID5 still shows as degraded with only 4 disks and doesn't list /dev/sdf. It looks like something has to be done to add the new drive (replacing the old one) and rebuild the array. I'd appreciate any help.


    I ran the following from the command line, and the state now shows "clean, degraded, recovering":


    Code
    mdadm --add /dev/md0 /dev/sdf


    Code
    root@openmediavault:~# cat /proc/mdstat 
    Personalities : [raid6] [raid5] [raid4] 
    md0 : active raid5 sdf[5] sda[0] sdb[4] sdd[3] sdc[1]
          3907041280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
          [>....................]  recovery =  0.0% (386432/976760320) finish=210.5min speed=77286K/sec
    
    unused devices: <none>


    The information here http://bugtracker.openmediavau…t_bug_page.php?bug_id=201 was useful.
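    Side note: the rebuild ETA can be pulled straight out of that recovery line. A small sketch using the line copied from the output above (on a live system, `watch cat /proc/mdstat` is handier for following progress):

    Code

    ```shell
    # Extract the finish= estimate from a captured /proc/mdstat recovery line
    # (the line is copied from the mdstat output above).
    line='[>....................]  recovery =  0.0% (386432/976760320) finish=210.5min speed=77286K/sec'
    eta=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([^ ]*\).*/\1/p')
    echo "estimated time to finish: $eta"
    ```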

  • I am not sure that's what I wanted to do; mdadm now shows it rebuilding a "spare". Does that sound right?


  • Quote from "bigcat"

    I am not sure that's what I wanted to do; mdadm now shows it rebuilding a "spare". Does that sound right?


    Never mind, recovery is done and it's all good now.
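    That "spare" label is normal, by the way: mdadm holds a freshly added disk as a spare while it resyncs and promotes it to an active member once recovery completes. A quick sketch of checking the final state from the /proc/mdstat member string (sample values only; read the real one from /proc/mdstat) -- any underscore, as in the earlier "[UU_UU]", marks a still-missing member:

    Code

    ```shell
    # '[UUUUU]' after recovery means all five members are up.
    status='UUUUU'        # sample value for illustration
    case "$status" in
      *_*) state=degraded ;;
      *)   state=healthy ;;
    esac
    echo "array is $state"
    ```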
