I/O performance on RAID check

  • I think some kind of performance decline can't be avoided during a RAID check of a software RAID, because the check itself consumes disk bandwidth and CPU time. The only real way to avoid the slowdown entirely is a hardware RAID controller, though you can at least cap how much bandwidth the check is allowed to take (see the sketch below).

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod
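
    For what it's worth, the md driver does expose this throttle from software. A minimal sketch of the relevant tunables, assuming the array is named md0 (adjust to your setup); the 10000 KB/s cap is only an illustrative value:

        # Global resync/check throttle, in KB/s per device
        cat /proc/sys/dev/raid/speed_limit_min
        cat /proc/sys/dev/raid/speed_limit_max

        # Cap a running check on one array to roughly 10 MB/s
        echo 10000 > /sys/block/md0/md/sync_speed_max

        # Later, hand the array back to the global defaults
        echo system > /sys/block/md0/md/sync_speed_max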

  • I've set up a RAID6 (mdadm) with 4x 10TB drives. I've noticed that when cron executes a checkarray, there is a massive impact on I/O performance.


    When copying data from another array to the one being checked, write performance is good. With "cat /proc/mdstat" one can see that the resync speed drops during the copy and rises again once the copy is done.
    The strange thing is, when copying from the checked array to another one, performance is bad.


    So writing to the array while the check runs is fine (~80% of normal), but reading is poor (only ~20% of normal performance). The CFQ scheduler is enabled and ionice is set to idle on the md0_resync process. It seems that reads are not recognized as foreground I/O that should lower the rebuild speed, while write operations are (see the sketches below).


    Can anyone explain this?
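
    To make the effect visible, one way is to watch the check speed while generating the reads and writes; a minimal sketch, again assuming the array is md0 (the numbers in the sample line are purely illustrative):

        # Refresh the check progress every 5 seconds during the copy
        watch -n 5 cat /proc/mdstat

        # A typical progress line during a check looks like:
        #   [=====>...........]  check = 25.3% (2537815040/9766436864)
        #   finish=612.3min speed=180000K/sec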
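
    And to verify the I/O scheduling class the resync thread actually runs with (using the thread name md0_resync from your post):

        # The resync kernel thread shows up under the array's name
        pgrep -l md0_resync

        # Print its ionice class; with idle set, this should report "idle"
        ionice -p "$(pgrep md0_resync)"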
