I/O performance on RAID check

    • OMV 3.x


    • I think some performance decline can't be avoided during a RAID check of a software RAID, because the check itself consumes bandwidth and CPU power. The only way to improve performance during a check is to use a hardware RAID controller.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • I've set up a RAID6 (mdadm) with 4x 10TB drives. I've noticed that when cron executes checkarray, there is a massive impact on I/O performance.

      When copying data from another array to the one being checked, write performance is good. With "cat /proc/mdstat" one can see that the resync speed decreases during the copy and increases again once the copy is done.
      The strange thing is that when copying from the checked array to another one, performance is bad.
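
      Instead of eyeballing "cat /proc/mdstat", the speed figure can be extracted programmatically. A minimal sketch: the mdstat text below is a made-up sample (device names, sizes, and the 251234K/sec figure are illustrative); on a live system you would read /proc/mdstat directly.

      ```shell
      #!/bin/sh
      # Sample /proc/mdstat content; replace with: mdstat=$(cat /proc/mdstat)
      mdstat='md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
            19532609536 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
            [=>...................]  check =  5.3% (523456789/9766304768) finish=612.3min speed=251234K/sec'

      # Pull the number between "speed=" and "K/sec" (the check rate in KB/s).
      speed=$(printf '%s\n' "$mdstat" | sed -n 's/.*speed=\([0-9]*\)K\/sec.*/\1/p')
      echo "Current check speed: ${speed} KB/s"
      ```

      Logging this once a minute during a copy would show exactly how far md throttles the check for writes versus reads.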

      So writes to the array during the check are good (~80% of the default rate), but reads are poor (only ~20% of the default rate). The CFQ scheduler is enabled and ionice is set to idle on the md0_resync process. It seems that reads are not recognized by ionice as a reason to lower the rebuild speed, while write operations are.
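
      Independent of ionice, the kernel also bounds resync/check throughput via two sysctls, and lowering the ceiling caps the check regardless of I/O class. A sketch of a sysctl fragment (the file name is an assumption; values are KB/s per device, chosen only as an example):

      ```shell
      # /etc/sysctl.d/90-md-check.conf (hypothetical file name)
      # speed_limit_max caps the check even when the array is otherwise idle;
      # speed_limit_min is the rate md tries to sustain despite competing I/O.
      dev.raid.speed_limit_max = 10000
      dev.raid.speed_limit_min = 1000
      ```

      The same values can be set at runtime by writing to /proc/sys/dev/raid/speed_limit_max and speed_limit_min, which makes it easy to test whether read performance recovers when the check is throttled this way.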

      Can anyone explain this?