FS recommendation

  • Hi Folks,


    I was just wondering what FS you're using and why.


    I'm currently on OMV 3 (Erasmus) and using ext4 for everything.


    I run an HP ProLiant MicroServer Gen8 with an additional HP P420 hardware RAID card and a 2 Gbit/s uplink on bond0.
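

    Side note on the bond: whether a single transfer can actually exceed 1 Gbit/s depends on the bonding mode (balance-rr can spread one stream over both links, while 802.3ad/LACP hashes each connection onto one link). A minimal check, assuming the standard Linux bonding driver and that the interface really is called bond0:

    ```python
    # Print bond mode and per-slave status/speed from the Linux bonding driver.
    # "bond0" is the name used in the post; adjust if yours differs.
    BOND = "bond0"

    with open("/proc/net/bonding/" + BOND) as f:
        for line in f:
            line = line.strip()
            if line.startswith(("Bonding Mode:", "Slave Interface:",
                                "MII Status:", "Speed:")):
                print(line)
    ```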


    My OS runs off the native SATA controller inside the box, from a 2.5" SATA disk. But that SATA controller only supports software RAID, and I did not want to burden my CPU with soft RAID duty, so I got a P420 HW RAID card with 1 GB of ECC cache and capacitor/battery backup.


    My RAID controller is set up as follows:
    2x 512 GB RAID 0 SSD cache, split into 150 GB for VM-SAS and 600 GB for data-SATA
    2x 10 TB RAID 1 SATA data array (to be extended to 4x 10 TB RAID 1)
    2x 150 GB RAID 1 SAS VM array (to be replaced by 2x 1 TB SSD; actually I want to use my current cache SSDs for that and get new ones for the cache, because I can't use the full capacity of the current SSDs with that HW RAID controller, only a bit more than half: 750-800 GB of 1.024 TB in total.)
    In total the RAID controller will hold 8 disks: 4x 10 TB SATA and 4x SATA SSD.


    All my disks/arrays currently run ext4.


    I see an IO wait of about 20-60% if I start a copy job via SMB to an SSD-equipped client system. It reads or writes at a stable 100% of the 1 Gbit/s network link (indicated by the Windows Task Manager and the performance chart in OMV). As soon as I access a VM or files from a second client, the first transfer rate drops and the IO wait jumps up drastically.
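

    To see which array is actually the bottleneck while such a transfer runs, it can help to sample per-disk throughput and busy time. Below is a minimal sketch of my own (assuming Python 3 on the OMV host and that the P420 logical drives show up as plain /dev/sdX devices); iostat -x from the sysstat package reports the same data.

    ```python
    # Rough per-disk throughput/busy sampler based on /proc/diskstats.
    # Troubleshooting sketch only; the filter assumes whole disks named sdX.
    import time

    def snapshot():
        stats = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                name = fields[2]
                # keep whole disks (sda, sdb, ...), skip partitions (sda1, ...)
                if not name.startswith("sd") or name[-1].isdigit():
                    continue
                sectors_read = int(fields[5])       # 512-byte sectors read
                sectors_written = int(fields[9])    # 512-byte sectors written
                io_ms = int(fields[12])             # time spent doing I/O, in ms
                stats[name] = (sectors_read, sectors_written, io_ms)
        return stats

    INTERVAL = 5  # seconds between the two samples
    first = snapshot()
    time.sleep(INTERVAL)
    second = snapshot()

    for disk in sorted(second):
        if disk not in first:
            continue
        rd = (second[disk][0] - first[disk][0]) * 512 / INTERVAL / 1e6
        wr = (second[disk][1] - first[disk][1]) * 512 / INTERVAL / 1e6
        busy = (second[disk][2] - first[disk][2]) / (INTERVAL * 10.0)
        print("{}: read {:6.1f} MB/s  write {:6.1f} MB/s  busy {:5.1f} %".format(
            disk, rd, wr, busy))
    ```

    If one of the 10 TB data disks sits near 100% busy while the SSDs idle, the wait really does come from the spindles, and adding more of them (or moving the VMs to SSD, as you plan) should help.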


    That's why I want to throw spindles at the controller: changing the data RAID 1 from 2 to 4 disks and moving the VM array from my current SAS disks to SSDs.


    The data disks are benchmarked at over 100 MB/s, so having two of them should easily supply enough throughput for the 2 Gbit/s uplink, even if I use two different machines to load up the server. And since the VMs are on a different array with a 100% SSD cache, they should not impact the throughput. But they do.
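

    Just to put numbers on that (a rough back-of-the-envelope sketch; the 100 MB/s figure is the benchmark value quoted above, and protocol overhead is ignored):

    ```python
    # Back-of-the-envelope comparison of uplink capacity vs. the data RAID 1 pair.
    # Figures are the ones quoted in the post; real-world rates will differ.
    link_gbit = 2.0                   # bonded uplink
    link_mb_s = link_gbit * 1000 / 8  # ~250 MB/s raw line rate

    disk_mb_s = 100                   # benchmarked sequential rate per data disk
    raid1_read = 2 * disk_mb_s        # RAID 1 can serve reads from both members
    raid1_write = 1 * disk_mb_s       # writes go to both members, so one disk's speed

    print("uplink          : {:.0f} MB/s".format(link_mb_s))
    print("RAID 1 read (2x): {} MB/s".format(raid1_read))
    print("RAID 1 write    : {} MB/s".format(raid1_write))
    # Concurrent access from a VM or a second client adds seeks, which costs
    # spinning disks far more throughput than these sequential numbers suggest.
    ```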


    Even if I move files via the CLI from the data array to the SAS array, the IO wait jumps up big time and the responsiveness of SMB drops dead... kind of.


    But I also wonder: did I choose the right FS for my systems?


    Cheers
    Manne

  • Hi there, not that I know a lot, but the data throughput you get depends on various factors:


    The NIC's connection to the system (PCI or PCIe, versions and implementations, vs. the Northbridge, and so on), disk throughput, buffers, the bus, layer 3/4 overhead, the applications used and their efficiency (Samba, CIFS, FTP, etc.), the compression type (if any, software or hardware), the network infrastructure and architecture, and lots more.


    For your specific case I recommend that you try various configurations, and remember to always implement only what you know, because in case of a failure you, and only you, will have to deal with it.
