Is there a way to monitor/see the speed being used during an rsync between 2 different OMV servers remotely, but connected via 10G?

  • You could install the speedometer package on one of the machines, SSH into that machine, and run speedometer with appropriate command line parameters, for example:
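
    A minimal sketch, assuming the 10G interface is named eth0 (substitute your actual interface name):

    Code
        # install the live bandwidth graph tool (Debian/OMV)
        sudo apt install speedometer
        # graph receive (-r) and transmit (-t) rates for the interface
        speedometer -r eth0 -t eth0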


    --
    Google is your friend and Bob's your uncle!


    A backup strategy is worthless unless you have a restore strategy that has been verified to work by testing.


    OMV AMD64 7.x on headless Chenbro NR12000 1U Intel Xeon CPU E3-1230 V2 @ 3.30GHz 32GB ECC RAM.


  • Gah!


    My 10GB Network must really suck dirty a$$hole.


    10GB NIC [Pusher], 10GB NIC [Puller], CAT8 Ethernet [the 2 OMV servers and 10GB NICs are in the same room], and a 10GB Cisco SG500XG switch.


    345 MiB/s


    I wonder if the NIC I am using in one server, though 10GB, being 5-6 years old and an onboard NIC, would have less performance than a 1-year-old 10GB add-on card. Also, would making a bridge interface off a 10G NIC cause any degradation?


    I even did 9000 MTU on both Servers, my Switch supports Jumbo frames etc...all the same. Has to be the HD speed.

  • Based on the math, that is a little more than 1/4 of the theoretical maximum bandwidth.


    10Gb (lower-case b = bits) is capable of 1250 MB/s (upper-case B = bytes).

    MB is a little different from MiB, though (MB is based on 1000 being the conversion factor and MiB is based on 1024, so there is a small discrepancy).


    All that said, 1250 MB/s is about 1192 MiB/s, so still in that 1/4 area.
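
    Spelled out as a quick sketch (using the 345 MiB/s figure from above):

    Code
        # 10 Gb/s expressed in MB/s and MiB/s, and the fraction 345 MiB/s represents
        echo "10*1000^3/8/1000^2" | bc       # 1250 MB/s
        echo "10*1000^3/8/1024^2" | bc       # ~1192 MiB/s
        echo "scale=2; 345/1192" | bc        # ~.28, a bit over 1/4 of line rate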


    There are several things that can factor into this issue. The first is the protocol in use: SMB is a fairly chatty protocol that can easily take up 5% of the bandwidth. The next factor is any kind of NIC tuning or protocol tuning to make better use of the 10Gb speeds; this can include buffer tweaks and flow-control tweaks. The third is anything in between the two systems, like switches (i.e. is there any tuning that can be done there). The fourth is interference caused by cables running alongside AC power or any other RF/EMF source. Lastly, there is the question of whether you might have a bad cable or a bad transceiver.


    A good check would be to do an iperf test between the systems, as that will tell you what the infrastructure is capable of regardless of protocol. You can then look at the NIC tuning to maximize that. If that is as good as you can make it, then you can look at protocol tweaking or changing to a different one (i.e. rsync over SMB will probably be a fair bit less efficient than rsync over SSH, NFS, or iSCSI).
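
    A minimal sketch of such a test, assuming the receiving box is reachable at 192.168.1.20 (a made-up address, substitute your own):

    Code
        # on the "Puller" (receiving side)
        iperf3 -s
        # on the "Pusher" (sending side); -P 4 runs four parallel streams for 30 seconds
        iperf3 -c 192.168.1.20 -P 4 -t 30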


    Lastly, I will say that you will never hit the theoretical maximum of the connection, but with proper tuning and an efficient protocol, you could see somewhere in the 50% to 75% area as a rule of thumb.

  • Well, that's definitely a lot of factors.

    The NIC Cards are; NICGIGA Intel X540-T1 and a NICGIGA Marvell AQtion AQC113C [No "tuning" on either OMV [7] Machine]

    The Switch is a Cisco SG500XG - no real "tuning" done; I can change to Jumbo Frames etc.

    Code
         3     XG3         10G-Copper     Up     Enabled             10G     Full         Unprotected   
         6     XG6         10G-Copper     Up     Enabled             10G     Full         Unprotected       

    This is the status of both switch ports on the Cisco.


    I am using rsync via the OMV GUI, no tweaks or options. Not sure if that is SMB, SSH, or NFS.

    On each OMV machine, I verified both NICs are at 10000 speed.

    Like I said, no tweaking has been done on either OMV machine, and the attached picture shows the only interface options I can change outside of selecting Jumbo Frames.

    When my current "slow" rsync is complete in 300 hours I will try the iperf test.

    As far as cables, I did indeed change them. I bought like 4 sets of Cat8, 8' each, and all my devices are on one tall rack system. The Ethernet cables are rated as 40Gbps capable.


    The only thing I did change was that each interface on each OMV is using a bridge interface, not sure if that has any effect. Also, I DID enable Jumbo frames on the Cisco switch and 9000 MTU on both OMVs, and then my rsync dropped to like 900K, down from 340MiB... so clearly I am not doing it correctly.

  • Rsync can use any link you set up between the machines. If you are remote-mounting a share, then it would be Samba or NFS. If you are using the client/server type of setup, it would be something else like SSH. (I would have to dig into that option to confirm.)
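
    As a point of reference (not necessarily how the OMV GUI job is configured), a push over SSH from the command line would look something like this; the paths and host are made up, and --info=progress2 shows the overall transfer rate while it runs:

    Code
        # hypothetical source share and destination host; adjust to your setup
        rsync -aH --info=progress2 -e ssh \
            /srv/dev-disk-by-label-data/share/ \
            root@192.168.1.20:/srv/dev-disk-by-label-backup/share/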


    The NIC tuning usually involves adjusting Tx/Rx buffers and/or MTU (1500 is normal, 9000 is jumbo frames). Since the NICs are Intel chipset, there is a very good chance you can adjust those buffers and the MTU using ethtool. If setting jumbo frames, it has to be set on the NICs as well as the switch in order for it to work correctly; a mismatch will make performance worse.
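
    A quick way to confirm a jumbo-frame path actually works end to end is a do-not-fragment ping (the address is a placeholder; 8972 bytes = 9000 minus 28 bytes of IP/ICMP headers):

    Code
        # fails with "message too long" if anything in the path is still at MTU 1500
        ping -M do -s 8972 -c 3 192.168.1.20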


    Since you are using a bridge interface, you should disable Large Receive Offload (LRO).


    Your buffers are probably set for 256 as a default, but can likely go up to 2048, which should improve performance a lot, and jumbo frames help with large file transfers.
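
    To see what a given card actually supports before changing anything, something along these lines (using the same <interface_name> placeholder as below):

    Code
        # show pre-set maximum and current ring buffer sizes
        ethtool -g <interface_name>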


    From what I can see, the X540 has flow control disabled by default. Performance can be improved a lot by enabling it if you are getting any packet loss, as the systems will not have to keep re-transmitting until successful. I don't really know why a lot of faster NICs default to disabling flow control when it helps so much for stability in data transfer.
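
    To check whether flow control is currently enabled and whether the NIC is actually dropping packets, something like this (the statistic names vary by driver, so treat the grep as a rough filter):

    Code
        # show current pause (flow control) settings
        ethtool -a <interface_name>
        # look for drop/error counters in the NIC statistics
        ethtool -S <interface_name> | grep -i -E 'drop|err'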


    You would probably have to play around with the settings to get optimal ones, but here are some basic command syntaxes. I would suggest digging into this yourself and doing some testing though. Don't take all of this at face value from me, as I don't have a 10Gb setup here at home, and these are just some general guidelines.


    Set Tx and Rx buffers:

    Code
        ethtool -G <interface_name> tx <value>
        ethtool -G <interface_name> rx <value>


    Set MTU (all NICs and switch ports need to be set the same):

    Code
        ip link set <interface_name> mtu <mtu_size>


    Disable LRO:

    Code
        ethtool -K <interface_name> lro off


    Enable flow control (all NICs and switch ports need to be set the same):

    Code
        ethtool -A <interface_name> rx on tx on

  • Interesting... Well, my current rsync, doing 11TB, is actually almost done... So I suppose at 350MiB it did not go toooooo slow, but of course it has been like 6+ hours. I will try the changes later on a different rsync.


    Thank you for the details.


  • For what it's worth, when we were running 10Gb at work using iSCSI, we could get around 1000 to 1100 on a perfectly tuned Linux (CentOS) system hitting our SAN, which was based on Debian with a custom kernel and running an HFS+ filesystem with a virtual filesystem translator on top of it. Other Linux varieties tended to be a little slower, in the 900 to 1000 area. Mac systems were usually in the 750 to 900 area, and Windows in the 600 to 750 area.


    Once again, this was all iSCSI, with a custom kernel and filesystem, and a perfectly tuned setup. More generic/non-custom configurations, even with tuning, will probably be more in the 600 to 750 area. If you can get more than that, you are doing excellent.


    And 6 or so hours for 11 TB is not bad, but about 2 to 3 times what I would expect for a tuned 10Gb connection, assuming your drives can keep up with the speed.

  • Yeah, that makes sense. I totally understand I'll not hit those speeds, or even 800/700, but I do feel 350 is too slow. I'll indeed mess around with the tuning once this cycle is done. I am also quite sure the drives I have, though great for longevity, may not be the best for “speed”. If I can get to 600ish I'll be so happy.

  • One other thing I forgot to mention that can have a big impact on speed, since we did talk about drives, is drive fragmentation. I'm not sure what filesystems you are running, but both XFS and EXT4 have defrag utilities (xfs_fsr and e4defrag). If you schedule these to run daily, weekly, or even monthly, depending on how much change happens to your files, they can keep everything in order and improve speeds a fair bit. An initial rsync may not be too bad, but over time, as files get changed, deleted, and new ones written, performance will start to degrade due to things getting fragmented. For example:
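
    A rough sketch (the mount point is a placeholder for wherever your data filesystem lives; use xfs_fsr for XFS and e4defrag for ext4):

    Code
        # XFS: reorganize (defragment) the filesystem mounted here, verbose output
        xfs_fsr -v /srv/dev-disk-by-label-data
        # ext4: check the fragmentation score first, then defragment
        e4defrag -c /srv/dev-disk-by-label-data
        e4defrag /srv/dev-disk-by-label-data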
