How to recover RAID10?

    • Hello,

      I'm a noob on Linux and I'm French, so please excuse my bad English in advance. ^^

      I am on an HP ProLiant G7 with one SSD and 4 HDDs in RAID 10. This is the array before the disk replacement:

      /dev/md0:
      Version : 1.2
      Creation Time : Sun Apr 7 20:19:24 2019
      Raid Level : raid10
      Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
      Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Sun May 19 19:30:28 2019
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0

      Layout : near=2
      Chunk Size : 512K

      Name : omv:raid (local to host omv)
      UUID : bee1ff98:f891bf3a:59b82de4:add412d4
      Events : 21376

      Number Major Minor RaidDevice State
      0 8 16 0 active sync set-A /dev/sdb
      1 8 32 1 active sync set-B /dev/sdc
      2 8 48 2 active sync set-A /dev/sdd
      3 8 64 3 active sync set-B /dev/sde


      I want to replace one HDD with another (same size, of course).

      When I do that, the RAID is no longer displayed in the GUI, so I can't use the Recover button.


      blkid

      /dev/sda1: UUID="88554e2a-1ee4-4e9c-a603-2e55bf7acd0a" TYPE="ext4" PARTUUID="d0f87a3b-01"

      /dev/sda5: UUID="e6508bde-d883-4af3-9a5c-c44b9831d209" TYPE="swap" PARTUUID="d0f87a3b-05"

      /dev/sdc: UUID="bee1ff98-f891-bf3a-59b8-2de4add412d4" UUID_SUB="14640ad5-8e4d-eed1-55d1-fbdef24b9bf7" LABEL="omv:raid" TYPE="linux_raid_member"


      /dev/sdd: UUID="bee1ff98-f891-bf3a-59b8-2de4add412d4" UUID_SUB="7cc82e0d-e089-5d85-ee73-256117823812" LABEL="omv:raid" TYPE="linux_raid_member"

      /dev/sde: UUID="bee1ff98-f891-bf3a-59b8-2de4add412d4" UUID_SUB="cddcc2b8-cb3c-4bf8-4461-efb0ca8527a1" LABEL="omv:raid" TYPE="linux_raid_member"

      /dev/sdb: PTUUID="25a84a45-6dab-440c-a295-a373c82ef1d4" PTTYPE="gpt"


      mdadm --detail /dev/md0

      /dev/md0:
      Version : 1.2
      Raid Level : raid0
      Total Devices : 3
      Persistence : Superblock is persistent

      State : inactive

      Name : omv:raid (local to host omv)
      UUID : bee1ff98:f891bf3a:59b82de4:add412d4
      Events : 21358

      Number Major Minor RaidDevice

      - 8 64 - /dev/sde
      - 8 32 - /dev/sdc
      - 8 48 - /dev/sdd


      cat /proc/mdstat

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sde[3](S) sdd[2](S) sdc[1](S)
      29298917376 blocks super 1.2


      May I use this command line to recover the RAID?

      mdadm --assemble /dev/md0 /dev/sdb
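      Or should I rather stop the inactive array, re-assemble it from the three surviving members and only then add the new disk? Something like this (just a guess from my reading of the mdadm man page, device names taken from the blkid output above, nothing tested yet):

      mdadm --stop /dev/md0 (stop the inactive, half-assembled array)
      mdadm --assemble --run /dev/md0 /dev/sdc /dev/sdd /dev/sde (re-assemble from the three disks that still have the RAID superblock)
      mdadm --manage /dev/md0 --add /dev/sdb (add the blank replacement disk so the rebuild can start)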
    • I have also replaced a disk in a 4-disk raid-10 system. . . . .
      I powered down the system
      took out the disk with the errors
      inserted the new disk
      powered on the system
      logged on to the GUI control panel and saw that the RAID array was not shown !!!

      PANIC! PANIC! :(
      Then I did some Google searching and reading on the mdadm command and its many parameters, and tried many commands on the root console without luck.
      Thank god my OMV server is only in testing and does not yet contain data :) it is time to test and experiment.
      Did some more reading and then . . .

      I then logged on as root on the console:
      mdadm --detail /dev/md0 (I see a disk is missing, just like you did)
      mdadm --misc -R /dev/md0 (this will start Running the raid array)

      You should now be able to see the raid array in the GUI control panel as a degraded array

      back to the root console:
      mdadm --manage /dev/md0 --add /dev/sde (this will add the disk to the array and the array will automatically start to recover)

      You should now see the array in the GUI control panel as degraded and recovering . . . it will take some time to complete
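      If you prefer the console to the GUI, you can also follow the rebuild there (generic commands, your device names may differ):

      watch cat /proc/mdstat (refreshes the rebuild progress every two seconds)
      mdadm --detail /dev/md0 (shows the Rebuild Status line and which disk is rebuilding)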


    • I am not able to help, sorry! But I really like that you actually are testing scenarios out before you deploy for real. Kudos to you!

      I would just reconfigure from scratch and restore data from a backup to the new empty array. But that, I assume, removes many of the advantages of RAID.
      OMV 4, 7 x ODROID HC2, 1 x ODROID HC1, 5 x 12TB, 1 x 8TB, 1 x 2TB SSHD, 1 x 500GB SSD, GbE, WiFi mesh
    • It started so well with the recovery of my raid-10 array (see previous reply)
      It synchronized, and the degraded array ended up displayed in the GUI control panel as a clean array
      I continued with the build of my OMV server, by creating users, groups and eventually shared folders
      I was not able to create shared folders and OMV reported errors every time I wanted to apply the changes
      On the root console screen, I noticed a lot of technical messages and perhaps errors
      I rebooted the OMV server and tried again - no luck
      Finally I lost my patience and deleted the raid-10 array and created it again from scratch (as suggested by Adoby)

      When the OMV system is up and running again, I think I would like to copy some data to the raid-10 array,
      do some more reading on how to recover a raid-10 array,
      and then remove a disk once more and try to recover again

      hopefully I will succeed and learn some more :)
      Thank you hefran for your advice.
      So now I don't know what to do.

      I want to replace my disk, but I don't want to wipe the array and configure a new RAID...

      I haven't run any command yet.

      Do you think that if I just type:
      mdadm --misc -R /dev/md0
      mdadm --manage /dev/md0 --add /dev/sdb
      I will have the same problems as you?
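
      Or maybe I should first check what mdadm actually sees on each disk before running anything (again only an idea from the man page, not tested):

      cat /proc/mdstat (current state of the array)
      mdadm --examine /dev/sd[bcde] (prints the RAID superblock, if any, of each candidate member)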
    • Finally I used:

      mdadm --manage /dev/md0 --set-faulty /dev/sda
      mdadm --manage /dev/md0 --remove /dev/sda

      Shutdown, replace disk, boot

      mdadm --manage /dev/md0 --add /dev/sda

      mdadm --detail /dev/md0

      /dev/md0:
      Version : 1.2
      Creation Time : Sun Apr 7 20:19:24 2019
      Raid Level : raid10
      Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
      Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Wed May 22 19:16:29 2019
      State : clean, degraded, recovering
      Active Devices : 3
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 1

      Layout : near=2
      Chunk Size : 512K

      Rebuild Status : 0% complete

      Name : omv:raid (local to host omv)
      UUID : bee1ff98:f891bf3a:59b82de4:add412d4
      Events : 21399

      Number Major Minor RaidDevice State
      4 8 0 0 spare rebuilding /dev/sda
      1 8 16 1 active sync set-B /dev/sdb
      2 8 32 2 active sync set-A /dev/sdc
      3 8 48 3 active sync set-B /dev/sdd
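
      Now I just have to wait for the rebuild to finish and check that the array is still recorded so it assembles by itself at boot (a sketch for Debian-based OMV, I assume the usual config file paths):

      cat /proc/mdstat (wait until the recovery reaches 100%)
      mdadm --detail --scan (compare the output with the ARRAY line in /etc/mdadm/mdadm.conf)
      update-initramfs -u (refresh the initramfs if mdadm.conf had to be changed)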