Increase Software RAID5 write speed

  • Hi,
    I'm using an N40L with software RAID5 (mdadm, 4x3TB, XFS) and Openmediavault.
    Write speed was low with AFP & SMB, with drops to 0MB/s.
    The average was around 80MB/s.
    Processor and RAM were at <20% usage.


    So I tried many SMB & AFP settings without success.


    Then I found this tweak:
    Now writing constantly at 110MB/s over Gigabit LAN (big files). :D


    Local RAID5 write speed doubled. 8)



    Tuning:

    Code
    echo 8192 > /sys/block/md0/md/stripe_cache_size


    The default setting was 256 instead of 8192.
    Read the current setting:

    Code
    cat /sys/block/md*NUMBER*/md/stripe_cache_size


    (md*NUMBER* has to be replaced with your RAID array's md number, e.g. md2, md3, md4, md127, ...)
    Navigate in a terminal over SSH to /sys/block/md*NUMBER*.
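
    If you're not sure which md number your array has, /proc/mdstat lists every array (a quick check; the device names in the example output are just assumptions):

    Code
    # lists each md array with its RAID level and member disks,
    # e.g. "md0 : active raid5 sdd[3] sdc[2] sdb[1] sda[0]"
    cat /proc/mdstat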


    HowTo:
    Linux Software Raid Performance Tuning
    MDADM RAID GUIDE


    To make the setting permanent, write the command into rc.local:


    Quote

    To make changes permanent (SSH as root):

    Code
    nano -w /etc/rc.local


    Scroll all the way to the bottom and add the following lines (single-spaced); be sure to add these lines BEFORE the “exit 0” entry:

    Code
    # RAID cache tuning
    echo 8192 > /sys/block/md"NUMBER"/md/stripe_cache_size

    – add a separate line for each raid array as specified by “mdx”


    If uncertain, follow the link above to see a copy of /etc/rc.local.
    Ctrl+O and hit Enter to save, Ctrl+X to exit. Reboot.
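
    For illustration, the end of /etc/rc.local could then look roughly like this (just a sketch; md0 is an example, use your own array number and keep whatever is already in your file):

    Code
    #!/bin/sh -e
    #
    # rc.local - executed at the end of each multiuser runlevel

    # RAID cache tuning (value is in pages, default is 256)
    echo 8192 > /sys/block/md0/md/stripe_cache_size

    exit 0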


    The speed drops are gone!


    The following tweaks weren't applied, because I don't know whether they might lead to data loss when there is already data on the RAID5:

    Code
    blockdev --setra 4096 /dev/sd?
    blockdev --setra 32768 /dev/md?


    After a fresh RAID setup, with no data on the disks yet, this setting should give an additional performance boost.
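
    If you want to see what you would be changing first, the current read-ahead values can be read without modifying anything (a small sketch; the wildcards assume the disks show up as sd? and the array as md?):

    Code
    # print the current read-ahead values (in 512-byte sectors) for the disks and the array
    blockdev --getra /dev/sd? /dev/md?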


    During writes the processor is at 25-30% and the 8GB of RAM at 6% load. The Gigabit LAN seems to be the bottleneck now.

    This setting should be enabled in OMV by default when a RAID5 is created.
    Pappl

  • The formula is:


    memory_consumed = system_page_size * nr_disks * stripe_cache_size


    The stripe cache size is given in pages, so the default setting is 256 pages.


    The page size is typically 4KB, but can vary from system to system.


    8192 pages equals 8192 * 4KB = 32MB per disk ... and with the number of disks in my case (3 + 1) that would be 128MB for the RAID cache.... 512MB? Where do you have that number from? Or do you have that many disks in your RAID?
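
    For reference, a quick way to work out the number on the box itself (a small sketch; the 4 is the number of disks in the array and has to be adjusted to yours):

    Code
    # page size in bytes (typically 4096)
    getconf PAGESIZE
    # memory used by the stripe cache in MB: page_size * nr_disks * stripe_cache_size
    echo $(( $(getconf PAGESIZE) * 4 * 8192 / 1024 / 1024 ))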

    Everything is possible, sometimes it requires Google to find out how.

  • I believe there is something else wrong in your chain. The full chain needs to be analyzed, and without any further information this is a hard challenge for us.


    There are some threads open about performance tuning with generic answers. Please refer to those places and ask further questions there.


    • Official post

    No, it is not built-in to OMV now.
    No, it does not apply to Raid 10.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Quote from "ryecoaaron"

    No, it is not built-in to OMV now.
    No, it does not apply to Raid 10 because this is a striping change (raid 10 doesn't stripe).


    raid 10 doesn't stripe? I think you are using a different raid 10 implementation than most of us :)

    • Official post
    Quote from "Tom"

    raid 10 doesn't stripe? I think you are using a different raid 10 implementation than most of us :)


    No, I'm not thinking of a different raid 10. Yes, it does stripe, but it doesn't have the same stripe cache setting as raid 5. Just trying to keep it simple. Corrected my post.
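
    You can see the difference on the array itself: the stripe_cache_size attribute is only created for raid4/5/6 arrays (a quick check; md0 is just an example):

    Code
    # shows the array's RAID personality, e.g. "raid5" or "raid10"
    cat /sys/block/md0/md/level
    # present for raid4/5/6 arrays, absent for raid10
    ls /sys/block/md0/md/stripe_cache_size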


  • I have tried this advice for my RAID5 array, but currently my write performance is about 15-50MB/s (smaller files give lower performance).


    I currently have a ProLiant N40L with 4 Seagate drives, ST3000DM001-9YN166 (which are 4K format), in a RAID with 512K stripe size.


    On top of that I have LVM, and then publish that over my network as an iSCSI target (currently formatted with HFS+ (OS X)).


    My question is (after doing hours of research, and related to the RAID5 write speed topic): would bypassing LVM increase writes? I have no issue with reads, which currently hit 150MB/s+ (yes, running on dual Gigabit Ethernet links).


    Either that, or the LVM layer is incorrectly aligned to my md device (but then again, bypassing LVM would make this issue go away...).


    Cheers, in advance,
    Al

  • Short answer:
    No, LVM will not decrease your write speed, and LVM is typically not misaligned.


    Long answer:
    LVM does not add any misalignment on its own. But maybe you have a lower-level issue here.


    Have you created partitions on your drives before adding them to md?


    If so, remove the partitions and create the md directly on the drives. The first case can lead to unaligned write sectors, whereas the latter method will always create aligned sectors. md places the writable area 1MB from the start of the device. The same is true for LVM, so you will not have any alignment issues.
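
    If you want to verify the alignment on an existing setup, a rough sketch (it assumes 1.x metadata, whole-disk members sda-sdd and one LVM PV on md0):

    Code
    # data offset of each md member, reported in 512-byte sectors
    mdadm --examine /dev/sd[abcd] | grep -E '^/dev/|Data Offset'
    # chunk size of the array
    mdadm --detail /dev/md0 | grep 'Chunk Size'
    # offset of the first physical extent on the LVM PV
    pvs -o pv_name,pe_start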


    The second thing that could be wrong is a higher-level issue: the Mac, the network/Ethernet, and the iSCSI stacks in both operating systems. I am not sure how good a Mac (and which Mac you have) is at pure network throughput, especially if you use iSCSI on top of it. Also, where does the data come from that you are trying to write to the iSCSI drive? Is the read operation perhaps not fast enough to sustain the write performance of OMV?


    So many variables in that equation. ... but LVM is not your enemy.


  • Sure, here it is:


    • Official post

    You should have


    exit 0


    at the bottom of the file.
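
    A quick way to verify (nothing is modified by this):

    Code
    # show the last few lines of the file; "exit 0" should be the final statement
    tail -n 5 /etc/rc.local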

