Which RAID or HBA card

    • rabudde wrote:

      I can copy data inside the RAID volume at roughly 170-200 MB/s, so I can saturate my 2GB LAN interface
      What is a '2GB LAN interface'? I know 1GbE and, next up, 2.5GbE, but nothing in between (I already asked this back then, but you chose to ignore it. Why?)
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • Sc0rp wrote:

      awwww, come on, @tkaiser ... it is a bond of two 1GbE-interfaces - what else?
      If it's a bond (link aggregation), then why be concerned about getting 170/200 MB/s at the storage layer when the network is bottlenecked at single-link Gigabit Ethernet speed anyway?

      If we were talking about something other than bonding (e.g. SMB Multichannel, or SAN techniques like multipathing), I would understand speaking of a '2GB LAN interface', but link aggregation / bonding does NOT work this way :)
    • rabudde wrote:

      Yes, I meant link aggregation, i.e. LACP, with 2 GBit/s. Why would a single link be the bottleneck then?

      Because that is how LACP/bonding/trunking works. It's not a magic bandwidth increase, just a link distribution / failover mechanism. We should never speak of '2 GBit/s' when using a bond with two links, since it's really just 2 x 1 GBit/s. The same applies to 3 or 5 links: it's just 3 x 1 GBit/s or 5 x 1 GBit/s. A true 5GbE connection is something completely different, since there a single client/server connection can benefit from five times the bandwidth of 1 GBit/s, and from much lower latency as well!
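The distinction above can be sketched as a toy calculation (a sketch only; the function name and numbers are illustrative, not taken from any bonding driver):

```python
# Back-of-the-envelope sketch: a bond distributes whole flows across its
# slave links, so one client/server connection is capped at a single
# slave's speed; a native faster link raises the ceiling for every flow.

def flow_ceiling_gbit(n_links: int, link_speed_gbit: float,
                      is_bond: bool) -> float:
    """Max throughput one client/server connection can reach."""
    return link_speed_gbit if is_bond else n_links * link_speed_gbit

print(flow_ceiling_gbit(5, 1.0, is_bond=True))    # 5 x 1 GbE bond -> 1.0
print(flow_ceiling_gbit(1, 5.0, is_bond=False))   # native 5GbE    -> 5.0
```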

      rabudde wrote:

      Sure, clients are connected to the switch with a single link each, but then I can saturate the NIC (and so the disk transfer bandwidth) with two clients.
      I would check this before assuming $anything. I've had customers where the IT department moved power users into a new building, connected to the main building by a quad-link EtherChannel (Cisco speak for bonding), with a dual-link EtherChannel as the interconnect to the rack cabinets. Due to an insufficient bonding algorithm (check xmit_hash_policy), all power users ended up on the same links, and the average bandwidth available to them was well below 20 MB/s.
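A minimal sketch of what a too-coarse hash does, loosely following Linux's layer2 xmit_hash_policy (heavily simplified: the MAC values are made up and only low bytes are XORed): when many clients sit behind one router, the bond only ever sees the router's MAC, so every flow lands on the same slave link.

```python
# Simplified stand-in for the bonding driver's layer2 policy:
# hash on MAC addresses, modulo the number of slave links.

def layer2_hash(src_mac: int, dst_mac: int, n_slaves: int) -> int:
    return ((src_mac ^ dst_mac) & 0xFF) % n_slaves

N_SLAVES = 4        # the quad-link EtherChannel from the story above
SERVER_MAC = 0x5E   # hypothetical low byte of the server's MAC
ROUTER_MAC = 0x1A   # every remote client appears behind this one MAC

# Ten "different" clients -- but the bond never sees their own MACs:
links_used = {layer2_hash(SERVER_MAC, ROUTER_MAC, N_SLAVES)
              for _ in range(10)}
print(f"slave links actually used: {links_used} of {N_SLAVES}")
# One link carries everything; a layer3+4 policy (IPs + ports) would
# spread the flows instead.
```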

      Next 'problem': even if you manage to get your clients onto different links (again: check the xmit_hash_policy settings -- there's no 'smartness' involved; with the wrong algorithm such a bond can contain completely unused links!), so that each can exclusively saturate a single GbE connection, once they access data at the same time (concurrently) we're no longer talking about what you tested before. Benchmarking only sequential transfer speeds is not sufficient to understand what happens when different storage locations are accessed at the same time -- now random IO performance / IOPS become important, an area where RAID 5/6 do not shine that much.
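A rough way to see the concurrency effect (not LanTest itself, just a throwaway sketch; chunk sizes, file names, and worker count are arbitrary) is to run several 'sequential' writers at once and compare the aggregate figure against a single-writer run on the same storage:

```python
# Several concurrent "sequential" streams turn into interleaved,
# seek-heavy IO at the device -- which is exactly what a single
# sequential benchmark never shows.

import os
import tempfile
import threading
import time

CHUNK = 1024 * 1024        # 1 MiB writes
CHUNKS_PER_WORKER = 32     # kept small for the sketch; scale up for real use

def stream_writer(path, results, key):
    buf = os.urandom(CHUNK)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(CHUNKS_PER_WORKER):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    results[key] = (CHUNK * CHUNKS_PER_WORKER) / (time.perf_counter() - t0)

with tempfile.TemporaryDirectory() as d:
    results = {}
    threads = [threading.Thread(target=stream_writer,
                                args=(os.path.join(d, f"w{i}.bin"), results, i))
               for i in range(4)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall = time.perf_counter() - t0
    total = CHUNK * CHUNKS_PER_WORKER * len(threads)
    print(f"aggregate: {total / wall / 1e6:.0f} MB/s "
          f"with {len(threads)} concurrent streams")
```

Run it once with `range(1)` and once with `range(4)`: on spinning RAID 5/6 the aggregate with four streams is usually well below four times the single-stream figure.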

      I've been using Helios' LanTest for such tests for two decades now and would recommend the '10 GbE' settings. Save the settings from one client onto a server share, then double-click the file on 2 or more clients later and let the benchmarks run in parallel. It's an easy way to test for the usual performance problems LACP is known for (e.g. different clients using different links in one direction but all ending up on a single link in the other direction, fighting for bandwidth).