RAID 5 growing error

  • OK, which is the new drive from above?

    /dev/sdb


    PS: I started a backup of important files from md127 to md126... while the reshape was running, performance was about 20 MB/hour :(

    so I put the reshape into frozen mode => now I can copy at 30 MB/s...
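
    For reference, a reshape can typically be paused and resumed through the md sysfs interface; a minimal sketch, assuming md126 is the reshaping array:

    Code
    # pause the ongoing reshape so normal I/O gets the full bandwidth
    echo frozen > /sys/block/md126/md/sync_action
    # clear the frozen state again; an interrupted reshape picks up
    # where it left off
    echo idle > /sys/block/md126/md/sync_action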

    • Official post

    /dev/sdb is a Toshiba desktop drive, correct. I suggest you look deeper into each drive's specs; for instance, the WD (WDC WD40EFAX-68J) is an SMR drive, and the Toshibas appear to be as well.


    The topic of SMR and CMR drives has come up on the forum before, and mixing the two in a RAID configuration is not a good idea.


    Maybe Zoki or ananas could shed some more light, but your problem appears to be related to your drives (a quick way to check the models is sketched below).
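
    To check, the model numbers can be pulled with smartctl and looked up in the vendors' SMR/CMR lists; a minimal sketch, assuming smartmontools is installed and sdb–sdf are the array members:

    Code
    # print model info for each suspected array member, then look the
    # model numbers up in the vendor's SMR/CMR documentation
    for d in /dev/sd[b-f]; do
        echo "== $d =="
        smartctl -i "$d" | grep -E 'Model Family|Device Model'
    done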

  • Here is a very nice write-up on the topic of SMR vs CMR: https://www.hardwareluxx.de/in…nen-aufnahmemethoden.html (unfortunately in German)


    If one of the drives you are writing to is SMR, expect very slow write speeds when writing large amounts of data. However, in the article they tested a resync in a NAS (ext4), and the SMR drive was only 20% slower than the CMR one, since all sectors are written in sequence and writing a sector multiple times is therefore not necessary. This measurement was done on a QNAP TS253.


    But I would never have expected write rates as low as the ones we see here.


  • Thank you. If I am not wrong, all the HDDs in my array are SMR; there is no mix.

    Even if performance is expected to be lower with this type of HDD, what I see here is much, much lower!


    Don't tell me that my only option is to wait 10 days for the reshape to finish, hoping that the new array will have normal read/write performance!

    I'm not really happy to have my NAS unusable for so long :(
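
    Before writing off 10 days of waiting, the usual md tuning knobs may be worth a try; a sketch only, with illustrative values and md126 assumed as the array name:

    Code
    # raise md's global rebuild/reshape speed limits (values in KB/s)
    echo 50000  > /proc/sys/dev/raid/speed_limit_min
    echo 500000 > /proc/sys/dev/raid/speed_limit_max
    # a larger stripe cache often helps RAID5/6 reshape throughput
    echo 8192 > /sys/block/md126/md/stripe_cache_size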

  • EDIT:

    As a last resort, I tried booting the SystemRescue image provided by OMV and ran (the array id had changed):

    Code
    mdadm --run /dev/md126
    mdadm --readwrite /dev/md126

    and this is what I got:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md126 : active raid5 sdc[1] sdd[6] sdb[7] sde[5] sdf[4]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [==>..................]  reshape = 11.0% (432049616/3906888704) finish=1920.6min speed=30153K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    md127 : active (auto-read-only) raid1 sdg[2] sdh[0]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk

    10x faster!

    The reshape finished in 1 day, which is what I expected.
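
    Once the reshape is done, the grown array can be double-checked with mdadm before putting it back into service (md126 assumed, as above):

    Code
    # confirm the array is clean, with all members and the expected new size
    mdadm --detail /dev/md126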


    So, in conclusion: I have a problem with my OMV6 install; it seems not to be hardware-related...

    Any ideas?

  • Did you have any Docker containers or other heavy services running when the reshape was slow?

    Because, well, the performance increase is impressive! Great idea to boot SystemRescue!


    The problem remains that the NAS is unusable during the reshape...
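
    One way to rule competing services in or out is to watch per-process and per-device I/O while the reshape runs; a sketch, assuming the iotop and sysstat packages are installed:

    Code
    # per-process I/O, accumulated totals, active tasks only
    iotop -oPa
    # per-device utilisation, refreshed every 5 seconds
    iostat -x 5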

  • Just wanted to say thanks for this tip. I was having the same issues as you, and as soon as I used the SystemRescue ISO, my reshape speeds increased tenfold.


    Thanks again!
