RAID 5 growing error

  • Dear All,

    I wanted to grow my RAID 5 volume (4x4TB HDD) with one supplementary HDD.

    After installing the new HDD, I used the grow option in my OMV RAID section, selected the new HDD and then clicked OK.

    OMV returned the following error:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; mdadm --grow --raid-devices=5 '/dev/md127' 2>&1' with exit code '1': mdadm: /dev/md127 is performing resync/recovery and cannot be reshaped
    
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; mdadm --grow --raid-devices=5 '/dev/md127' 2>&1' with exit code '1': mdadm: /dev/md127 is performing resync/recovery and cannot be reshaped in /usr/share/php/openmediavault/system/process.inc:197
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(369): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->grow(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('grow', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'grow', Array, Array, 1)
    #5 {main}

    and now my RAID volume shows 4 active HDDs + 1 spare.


    What went wrong?

    How can I now turn the spare HDD into an active, synced member in order to expand my volume?

    thank you for your help

    JR

  • Hi again,

    I finally used mdadm to add my HDD...
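
    Something along these lines (/dev/sdX standing in for the actual new disk):

    Code
    mdadm --add /dev/md127 /dev/sdX              # the new disk first shows up as a spare
    mdadm --grow --raid-devices=5 /dev/md127     # promote it to an active member and start the reshape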

    but the reshaping process is really, really slow!


    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md126 : active raid1 sdg[0] sdf[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    md127 : active raid5 sdh[6] sdd[7] sdc[4] sdb[5] sda[1]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [>....................]  reshape =  0.1% (4199412/3906888704) finish=96367.7min speed=674K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk

    with this speed, I will need 2 months to reshape my array :(

    how can I improve that?
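
    (For anyone hitting the same issue: the knobs usually checked first are the kernel's md rebuild speed limits and the array's stripe cache. A sketch, with illustrative values:)

    Code
    sysctl -w dev.raid.speed_limit_min=50000            # raise the rebuild/reshape floor (KB/s)
    sysctl -w dev.raid.speed_limit_max=200000           # raise the ceiling (KB/s)
    echo 8192 > /sys/block/md127/md/stripe_cache_size   # larger RAID5/6 stripe cache, at the cost of RAM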

  • moreje

    Changed the title of the thread from "RAID 5 growing" to "RAID 5 growing error".
  • I am still in trouble with my RAID5 configuration.

    After a reboot, my array went back to inactive:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
    md126 : active raid1 sdh[0] sdg[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 4/30 pages [16KB], 65536KB chunk
    
    md127 : inactive sda[1](S) sdb[6](S) sdc[5](S) sde[7](S) sdd[4](S)
          19534445240 blocks super 1.2
    Code
    mdadm --detail --scan
    INACTIVE-ARRAY /dev/md127 metadata=1.2 name=OMV-JEROME:BIGSPACE UUID=919d5af5:b8bea9c0:00b903a0:2016c45f
    ARRAY /dev/md/FRODON metadata=1.2 name=DTC-JEJE:FRODON UUID=f511cc1a:15f4ffcf:3fb29e89:7df0a893

    and I don't understand why my mdadm.conf is empty; I can't even see the other existing array. Is that normal?


    I don't know what to do to get my array back, hopefully with its data! :(


    Could anyone help me with the steps to fix my problem?


    Thank you for your help

  • Stop the array

    mdadm --stop /dev/md127

    and reassemble it:

    mdadm --assemble --scan


    If this does not work, you will have to use force:

    mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    but check the drive names if you rebooted in the meantime. (Force means "I know what I am doing...")


    Do not reboot when a raid is rebuilding!


  • thank you Zoki,

    before I try this reassemble step, do you have any suggestions concerning the rebuild speed?

    because 2 months will be a bit long to avoid any reboot :s

  • If the speed is so low, it is possible that one of the disks is dying.
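
    (A quick check with smartmontools, run against each member disk; non-zero reallocated or pending sector counts are a bad sign:)

    Code
    apt install smartmontools   # if not already present
    smartctl -a /dev/sda        # repeat for each disk; watch Reallocated_Sector_Ct and Current_Pending_Sector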


  • Do you know why my /etc/mdadm/mdadm.conf is empty?

    Shouldn't the active array be listed there?

  • After the mdadm --assemble command, here is what I get:

    Code
    md127 : active raid5 sdd[5] sda[7] sde[4] sdc[6] sdb[1]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [>....................]  reshape =  0.2% (10016216/3906888704) finish=81933.1min speed=792K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    md126 : active raid1 sdh[0] sdg[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 5/30 pages [20KB], 65536KB chunk

    it's reshaping.... but really slow


    iostat gives weird results for md127:

  • thank you...

    with these tips, plus activating write cache mode on my HDDs, things are a bit better:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdd[5] sda[7] sde[4] sdc[6] sdb[1]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [>....................]  reshape =  0.7% (30649156/3906888704) finish=17789.9min speed=3631K/sec
          bitmap: 6/30 pages [24KB], 65536KB chunk
    
    md126 : active raid1 sdh[0] sdg[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 2/30 pages [8KB], 65536KB chunk

    But still slow... 3500 K/sec... it will take more than 10 days to reshape the array...


    I did not deactivate the bitmap because I don't know whether I can do that during the reshape process.
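
    (For reference: the bitmap would normally be removed and restored with the commands below, but mdadm may refuse bitmap changes while a reshape is in progress, so waiting for the reshape to finish is the safer option:)

    Code
    mdadm --grow --bitmap=none /dev/md127       # remove the write-intent bitmap
    mdadm --grow --bitmap=internal /dev/md127   # re-add it once the reshape is done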

  • If you enable disk write cache on a RAID array, you should have a UPS.


  • The probability of power loss during a 10-day period is much lower than during 2 months... which is the reshaping duration announced when the cache is deactivated ;)


    BTW, I still don't understand why reshaping is so slow: 3500 K/sec! It should be between 20000 and 50000.

    After all the tweaking tips I've tested, what points should I check that could explain this?

    • Official post

    With no information regarding the hardware, this is either a) a controller issue or b) a hard drive issue.


    Considering the number of drives, I would start with the controller: a motherboard with 4/6 SATA ports plus a PCIe SATA controller, with the drives probably mixed between the two, judging by the port number references next to each of the drives.

  • Thank you for your feedback... let me give a short history of my setup.

    - The previous functional setup was:

    2x SSD in RAID1 for the OMV5 OS

    4x 4TB HDD in RAID5: vol 1

    2x 4TB HDD in RAID1: vol 2


    - Then I switched to OMV6 on a new SSD and unplugged the previous SSDs. At this stage, performance was OK.


    - Then I tried to grow my RAID5 vol 1 with a new 4TB HDD.

    The problems started here!

    => Concerning the new HDD:

    Tested: switching from the PCI-card SATA port to a MB SATA port: same results.

    Tested: cable replacement: same results.


    My next test:

    The OS SSD is connected to the PCI-card SATA => I will try to switch it to MB SATA and stop using my PCI card.


    Another question:

    is this iostat output informative?

    sda, which is the new hard drive, has very poor read stats. Is that normal? Perhaps yes, considering that nothing should need to be read from it during the reshape process?


    Bonus question: is there a way to cancel the reshape process, remove the new HDD and go back to my previous 4-drive RAID configuration while I fix this issue, to see if I get my previous performance back?


    Thank you again for your help! If you need more info on my setup that could help, let me know!

    • Official post

    What's interesting from the above is that the OS SSD is running from the PCIe SATA card; usually these cards are not capable of allowing a boot device to function unless they have their own BIOS.


    If sda is the new drive, then that is the drive which is 'being synced', and that drive is attached to the PCIe SATA card, which would suggest there is a bottleneck, hence the slow rebuild.


    How many SATA ports on the motherboard? How many ports on the SATA card? Some information on both might be helpful.

    bonus question: is there a way to cancel the reshape process

    Yes and no; a search would throw up some answers, but you run the risk of data corruption.


    BTW, instead of using iostat, install nmon (apt install nmon), then run nmon from the CLI and select from the menu.


    As far as I can remember, you can grow the array in OMV's GUI, but it doesn't 'grow' in the way a user would expect: grow will add a new drive to an array, but it will not grow the usable size; that has to be completed from the CLI. The error in your first message tells you that the array is rebuilding and therefore cannot perform mdadm --grow, which would suggest that the drive had already been added.
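
    (That CLI step would look roughly like this, assuming an ext4 filesystem sits directly on the array; wait for the reshape to finish first:)

    Code
    cat /proc/mdstat        # confirm the reshape has completed
    resize2fs /dev/md127    # grow the ext4 filesystem into the new capacity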


    mdadm.conf: the only reason for that file being empty is that the array was created on the CLI; when an array is created from OMV's GUI, the script used will write the appropriate reference to the conf file.
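
    (A common way to repopulate the file by hand, for reference:)

    Code
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append ARRAY lines for the assembled arrays
    update-initramfs -u                              # so the arrays are also found at boot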

  • OK, here is the actual setup: 6 MB ports, 2 PCI cards

    MB SATA 1 => md127

    MB SATA 2 => md127

    MB SATA 3 => md127

    MB SATA 4 => md127

    MB SATA 5 => empty

    MB SATA 6 => md127

    PCI1 SATA 1 => md126

    PCI1 SATA 2 => md126

    PCI2 SATA 1 => OMV6


    PS: booting from SATA on my PCI cards has never been an issue for me.

    Now I will try to put the OS SSD on MB SATA...


    EDIT after reorganisation:

    MB SATA 1 => md127

    MB SATA 2 => md127

    MB SATA 3 => md127

    MB SATA 4 => md127

    MB SATA 5 => md127

    MB SATA 6 => OMV6 SSD

    PCI1 SATA 1 => md126

    PCI1 SATA 2 => md126


    I've removed the 2nd PCI card.

    Now the OS & RAID md127 are on motherboard SATA ports.

    My PCI card hosts the md126 HDDs, which run fine.

    After the reboot, I needed to get the array back online and read/write.
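
    (Presumably via the assemble commands above, plus marking the array writable again; a sketch:)

    Code
    mdadm --readwrite /dev/md127    # clear the (auto-)read-only state so the reshape can continue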

    ==> Same performance: around 3500 K/sec :(

    • Official post

    OK, so you shut down the server and moved the drives around 'to test' if that was the issue :huh: You do realise that when you do that, the drive references get changed, which makes any diagnosis harder? So, the output of the following:


    cat /proc/mdstat

    blkid

    fdisk -l | grep "Disk "

    mdadm --detail /dev/md127 (this might fail)

    mdadm --detail /dev/md126


    and if you believe you don't have a possible hardware problem, see here

  • Here are the outputs:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdh[0] sdg[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 9/30 pages [36KB], 65536KB chunk
    
    md127 : active raid5 sdc[1] sdf[4] sdd[6] sde[5] sdb[7]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [=>...................]  reshape =  6.6% (261621248/3906888704) finish=16995.8min speed=3574K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk

