RAID 10 with 8 x 250 GB SSDs

  • I am currently running a RAID 10 with 6 x 250 GB Samsung 850 Pro SSDs with an EXT4 file system.
    In the next couple of weeks I am going to remake the RAID array using 8 x 250 GB Samsung 850 Pro SSDs with an EXT4 file system.


    On this array I house the user home directories, business files, Docker, Plex, and NFS mounts for VMs.


    I am trying to figure out whether the default values OMV uses when it creates a RAID 10 with SSDs are correct. I would like to maximize the performance of this RAID.


    System details: the drives are attached via an 8-port 6 Gb/s LSI SAS HBA, so each drive has its own connection. The system itself has 48 GB of RAM and dual Xeon X5670 CPUs. The network is 10 Gb SFP+.



    The default setup of mdadm is a 512 KiB chunk ..... is this correct for an SSD setup?
    As for the EXT4 file system, are the default /etc/fstab options sufficient for an SSD RAID? I see the discard flag, so I am assuming TRIM is active, but should something else be set since this is a high-performance SSD setup? (Commands to check the current values are at the end of this post.)


    Thank you !!
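
    For reference, the current values can be checked with something like the commands below (/dev/md0 is just an example device name, adjust it to the actual array):

      # report the level, layout and chunk size of the existing array
      mdadm --detail /dev/md0 | grep -iE 'level|layout|chunk'
      # show the ext4 mount options actually in effect
      findmnt -t ext4 -o TARGET,SOURCE,OPTIONS
      # show the block size and any stride/stripe-width the filesystem was created with
      tune2fs -l /dev/md0 | grep -iE 'block size|stride|stripe'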

  • is this correct for an SSD setup

    • SSDs die for different reasons than HDDs (they don't stop working unpredictably due to mechanical failures the way HDDs do, but instead fail after n hours of operation or n TBW -- terabytes written)
    • All larger SSDs use something similar to RAID-0 internally to increase performance. RAID-0 is the 'natural' SSD operation mode.
    • RAID implementations that do not support TRIM will reduce the lifespan of your SSD (since garbage collection and wear leveling aren't effective any more)
    • RAID-1 (and RAID-10) made out of SSDs is pretty useless (since SSDs die for other reasons than what this RAID stuff has been invented for: dealing with unpredictable storage failures)
    • RAID-1 (and RAID-10) made out of a bunch of absolutely identical SSDs even using same firmware revision is 100% useless since more than one SSD will die at almost the same time
    • RAID-1 by mdraid is pretty useless in general (provides only availability but no data protection -- snapshots -- and no data integrity -- checksumming)

    For fast zpools we use mirror vdevs in one large pool (the mirrors are each made out of 2 different enterprise-grade SSDs). Sequential performance is both underwhelming and irrelevant, but random IO performance is great, and since these pools need 24/7 availability the SSDs wasted on redundancy are OK. Otherwise dropping the redundancy would be the far better idea, but since a bunch of virtualized servers are running on these storage pools that's not an option.
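
    As a rough sketch of what such a pool looks like (the disk names below are placeholders; on real hardware you would use /dev/disk/by-id paths, and ashift=12 assumes 4K-sector drives), it is simply one pool striped across several two-way mirror vdevs:

      # one pool striped across three mirror vdevs (ZFS's equivalent of a RAID-10 layout)
      zpool create -o ashift=12 fastpool \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf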

  • tkaiser ... I understand what you are saying about SSDs, but with my budget and use case I think I will be OK......


    - My system is backed up to a FreeNAS box and an OMV box, and some of the data is also backed up onto USB and remote storage. In theory my LAN users have the ability to access the same data from three locations on the network (VStorage1, VStorage2, VStorage3), but through Zentyal AD I automatically mount the shares I want them to use, and through permissions the other targets won't let the users write. If there is a major issue, I can change the permissions and logon script through AD to mount a different share ..... just the file transfer performance would be reduced.


    - Sequential transfers don't really matter to me on the SSD array, as the big data sits on a 6 x 4 TB x 2 RAID 50 array (also backed up to FreeNAS + another OMV system)


    I am not using ZFS on OMV because for production I wanted to stick with a native file system provided by OMV .... I have fooled around with ZFS on OMV in the past, but the experience with the plugin was so-so.
    - I also wanted maximum performance with the budget hardware available to me.


    I would like the SSD array to have high I/O and great performance.
    - It is used for small files being modified, opened, and transferred by users through SMB, and I also have virtual machine targets exported over NFS for my Proxmox cluster (VM storage).


    I want to make sure that the OMV defaults for a RAID 10 SSD array are good, or whether I should create my own mdadm array through the command line.
    I would like to stick with the EXT4 file system, but I also want to make sure that OMV configures and mounts it with the correct options.


    I am not entirely sure how to verify that I am getting the best performance possible out of the array. Right now I feel (no real data) that my SSD array performs the same as or slower than my mechanical RAID 50 (7200 RPM enterprise drives).
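
    To replace the gut feeling with numbers, a short fio run on both arrays would give comparable figures (the directory path is a placeholder for a folder on each array, and the job parameters are just a reasonable small-file/random-I/O approximation):

      # 4K random read/write on the SSD array; repeat with --directory pointing at the RAID 50 and compare IOPS
      fio --name=randrw --directory=/srv/ssd-array --rw=randrw --bs=4k \
          --size=2G --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 \
          --runtime=60 --time_based --group_reporting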

  • I understand what you are saying about SSDs, but with my budget and use case I think I will be OK

    Again: building a RAID-1 or RAID-10 out of identical SSDs is almost fooling yourself if they run the same firmware revision and are of the same age (and/or have the same amount of data written), since SSDs are something entirely different from HDDs. Simply think about what the redundant RAID modes try to do, why HDDs die, and why SSDs die.


    It makes absolutely no sense to build a RAID-1 or RAID-10 out of identical SSDs since this ensures that drives will fail at almost the same time so your array will NOT provide increased availability. Doing this is just a waste of resources.


    We had one customer run into this bug (a RAID-1 with Intel 320 SSDs gone after a power failure, since they were not aware that the SSDs needed a firmware update), and we had another one using Crucial M4 SSDs half a decade ago, where the famous '5184 hours bug' crashed servers and corrupted installations on a bunch of machines, all with M4s bought in a single batch in RAID-1 configs. The data corruption was caused by the stupid nature of mdraid's RAID-1 implementation (not allowing for data integrity checks/enforcement).


    So if your SSDs are all of the same type and age, it's simply insane to build a RAID-10 instead of a RAID-0 out of them, since redundancy won't protect you and you're both wasting disks and lowering performance.

  • Again, I understand that it is not the best idea and you do not recommend it.
    My SSDs are the same brand, but they have been purchased at different times. I started with 4 from one purchase, 2 from another, and again 2 just a couple of days ago. So if I pair them correctly, then the chances of them all failing at once would be reduced enough for my comfort level. With the backup in place, I will be sleeping soundly at night.



    However, my original questions about the RAID 10 mdadm configuration and EXT4 partitioning are yet to be answered. Are the default parameters in OMV (mdadm/EXT4) the correct setup for a high-performance RAID 10 using SSDs?


    From recent experience the default network parameters are good for a 1 Gb LAN, but not for a 10 Gb LAN. I am wondering if the same is true for my disk setup.


  • From recent experience the default network parameters are good for a 1 Gb LAN, but not for a 10 Gb LAN. I am wondering if the same is true for my disk setup.

    It would be difficult for OMV to create a highly optimized RAID setup, since it really depends on the filesystem, workload, and devices. It is possible to update your RAID (chunk and block) and filesystem (stride and stripe width) tuning parameters. There are lots of tutorials on the web about tuning performance (a rough sketch is at the end of this post).


    What network parameters did you change?
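
    As a sketch of what a manual setup with explicit tuning values could look like (the device names are examples, and the stride/stripe-width math assumes a 512 KiB chunk, a 4 KiB block size and 4 data-bearing disks in an 8-disk RAID 10):

      # create the RAID 10 with an explicit chunk size in KiB (512 KiB, which is also the default)
      mdadm --create /dev/md0 --level=10 --raid-devices=8 --chunk=512 /dev/sd[b-i]
      # ext4 tuning: stride = chunk / block = 512 KiB / 4 KiB = 128
      #              stripe-width = stride * data-bearing disks = 128 * 4 = 512
      mkfs.ext4 -b 4096 -E stride=128,stripe-width=512 /dev/md0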

  • For the network, so far I changed some of the kernel memory buffers, the network ring buffer, 9000 MTU, and a few of the SMB-specific settings from the sticky found on this forum. (I can share the exact settings .... I was going to post about my experience once I have the switch set up and have run some more tests/tuning. A generic sketch of that kind of tuning is at the end of this post.)


    My iperf test shows about a 6.5 Gigabit connection (still not sure why it is not the full 10 Gigabit) .... I tried different cards and SFP+ cables, but it always seems to show the same connection speed.
    - Direct connection using a DAC SFP+ cable (the UniFi switch has been on back order forever)
    - Connected to an R710 running a Proxmox 5.1 host, testing with a Windows VM installed on a Samsung 960 EVO passed directly to the VM; I also have NFS shares mounted via a virtual disk from Proxmox (the NFS export sits on my SSD RAID 10 setup)


    I went from speeds that bounced between 250 MB/s and 650 MB/s to now pretty consistently seeing 750 MB/s to 790 MB/s via CIFS (from the VM to the storage array .... the other direction is slower, again not sure why).
    Using NFS I sometimes see all the way up to 900 MB/s, but if the file is a couple of GB the speed drops down to 200-300 MB/s, not sure why.
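
    For illustration, the general shape of that kind of tuning looks like the lines below (the interface name and all values are placeholders, not the actual settings from the sticky):

      # jumbo frames on the 10GbE interface
      ip link set dev eth0 mtu 9000
      # enlarge the NIC ring buffers (hardware limits vary; check them with 'ethtool -g eth0' first)
      ethtool -G eth0 rx 4096 tx 4096
      # larger kernel socket buffers for 10GbE
      sysctl -w net.core.rmem_max=16777216
      sysctl -w net.core.wmem_max=16777216
      sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
      sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'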

  • Well, your setup adds a ton of complexity, and one expected result of virtualization is performance degradation.


    Also, when testing the way you do, bottlenecks at both the source and the destination can be responsible for slowdowns.


    Benchmarking usually requires a more active approach, e.g. when running iperf, replacing iperf with iperf3 as a first measure (since iperf3 also shows 2-second interval statistics and reports the number of retransmits). Then it's important to run htop in parallel to get an idea of whether the single-threaded iperf3 task is bound by maxing out a CPU core (see the example at the end of this post).


    Then you need to test your storage locally both at the client and the server if you test the way you do (simply copying large chunks of data from local to remote storage and vice versa). Consumer SSDs tend to slow down after a certain amount of data has been written.


    And to eliminate local storage being the bottleneck I would always test with Helios LanTest on the client for two reasons:

    • generating test data from scratch so local storage is not involved
    • using only a single task with a fixed block size to measure gives you more precise numbers even if Explorer copies later will show better numbers (difference explained here)
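
    To make the iperf3 part concrete, a minimal run looks like this (the hostname is a placeholder); keep htop open on both ends while it runs and watch whether a single core hits 100%:

      # on the server (the OMV box)
      iperf3 -s
      # on the client: 2-second interval statistics, 30 seconds, then the reverse direction
      iperf3 -c omv-host -i 2 -t 30
      iperf3 -c omv-host -i 2 -t 30 -R
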
  • I'd honestly suggest hardware RAID over software RAID in this particular situation.


    I use an LSI 9260-8i in my lab server at work with SSDs, and the performance has worked well for me and is consistent.

