Rsync speeds over 10Gb NICs seem way too low

  • I built a new OMV server to take over as my daily driver and turn the existing one into my backup, so my current step is moving all the main data from the old unit to the new one. While doing so, I wanted to test the new 10Gb NICs I installed in each, since I'll use this link for all future backups and moves.

    Iperf3 showed speeds of about 9.2Gb/s, which I was happy with. However, when running the main rsync command to move a few TB of data from the old system into my new mergerfs pool, I noticed that speeds on small files were anywhere from 3 to 7MB/s, and larger files only managed roughly 15MB/s. This is considerably slower than expected; I was hoping for no less than 100MB/s.

    My mergerfs pool is built from 4 Seagate SAS drives, 10TB each (3 data and one parity), and I'm using SnapRAID on top of mergerfs. The drives are enterprise SAS at 7200 RPM. The servers are running (source) dual Xeon L5640 CPUs with 32GB RAM and (destination) an Intel i3-12100 with 32GB RAM. During the rsync, neither CPU was doing much (both below 7% usage) and RAM was around 20% on each, so nothing seemed to be struggling at all.
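
    To rule the disks in or out before blaming the network, the pool's raw sequential throughput can be measured directly with dd. A minimal sketch; the pool path here is a placeholder, not my real mount point:

```shell
# Placeholder path -- substitute the real mergerfs mount point.
POOL=/srv/pool

# Sequential write test: 1 GiB of zeros. oflag=direct bypasses the page
# cache so the number reflects the disks, not RAM; if the FUSE layer
# rejects O_DIRECT, use conv=fdatasync instead.
dd if=/dev/zero of="$POOL/ddtest.bin" bs=1M count=1024 oflag=direct status=progress

# Sequential read test (drop caches first so it really hits the disks; needs root).
sync && echo 3 > /proc/sys/vm/drop_caches
dd if="$POOL/ddtest.bin" of=/dev/null bs=1M status=progress

rm "$POOL/ddtest.bin"
```

    If dd itself tops out well under 100MB/s here, rsync never had a chance.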

    My command running through SSH:

    rsync -harvzP --stats root@ /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/

    Am I missing something glaringly obvious? Basically, what I'm telling rsync (which I'm running from the destination server in a tmux session) is: connect to the source server at the IP of its 10Gb NIC (not the server's regular interface), get the 'media' folder from my unionfs pool, and copy that folder's contents to the 'media' folder in this server's mergerfs pool.

    I ran this through the Rsync module in the OMV web GUI as a task instead, just to see the difference, and got closer to 50MB/s. That still seems low, but at least the native GUI module is running somewhat faster than the SSH version.

    Am I stuck with these speeds or is there a better way to do this?

  • I don't use this type of pool, but aren't the read/write characteristics of your data pools important? If, for example, you effectively only have the IOPS of one disk on either server, then that caps your max transfer speed, which may be no better than 125MB/s.

    Clearly, rsync + SSH seems to introduce a large overhead in this case. Is it better or worse if you don't use compression?

    As I only have a 1Gbe network, I've no personal experience of using various protocols over 10Gbe, but this article might be of interest: https://delightlylinux.wordpre…gabit-ethernet-and-linux/
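
    One protocol worth trying is rsync's own daemon mode, which skips SSH (and its encryption) entirely. A rough sketch, with a made-up module name and placeholder paths/IP:

```shell
# On the source server: a minimal /etc/rsyncd.conf
# (the "media" module name and path are made up -- adjust to your pool).
cat > /etc/rsyncd.conf <<'EOF'
[media]
    path = /srv/pool/media
    read only = yes
    uid = root
    gid = root
EOF
rsync --daemon

# On the destination: rsync:// (or a double colon) selects the
# unencrypted daemon protocol instead of SSH.
rsync -haP --stats rsync://SOURCE_IP/media/ /srv/pool/media/
```

    Only sensible on a trusted LAN, of course, since the stream is unencrypted.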

  • Good points. I've done a lot more testing this morning to see how the pools affect things (running scp on files into and out of the pool, just to compare), and scp definitely seems to give faster transfers overall. I've moved to a new rsync command:

    rsync -haP --stats -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" root@ /srv/mergerfs/norman_pool2/media

    and I haven't seen much difference yet; CPU and memory on both servers are still nowhere near maxed out.
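
    To check whether SSH itself is the ceiling, I can push a stream of zeros through the tunnel with no disks involved on either end (SOURCE_IP is a placeholder, and the cipher is just an example of one that is usually fast on AES-NI hardware):

```shell
# Measures what the SSH pipe alone can sustain; dd prints the
# achieved throughput on stderr when it finishes.
dd if=/dev/zero bs=1M count=1024 | \
    ssh -c aes128-gcm@openssh.com root@SOURCE_IP 'cat > /dev/null'
```

    If this number comes out far above what rsync achieves, the bottleneck is elsewhere.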

  • Are you certain that the bottleneck isn't your FS setup? I have been doing some rsyncs over the last couple of days from an AMD 200GE to an HP MicroServer N54L over 10GbE and got 80MB/s on average on large files over rsync/SSH, with the slow CPUs of the HP being the limiting factor (small files are obviously detrimental to performance).

    Mounting the shares over NFS yielded more than 100MB/s, with a push-sync initiated on the AMD (rather than a pull initiated on the HP) being faster, as the HP's CPUs were otherwise maxed out again.
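
    In case it helps, the NFS push looked roughly like this (export path, IPs, and mount point are placeholders for my setup, not exact values):

```shell
# On the destination: export the target directory.
# Line added to /etc/exports (async trades crash safety for speed):
#   /srv/pool/media  SOURCE_IP(rw,no_root_squash,async)
exportfs -ra

# On the source: mount the export and push with a plain local rsync,
# so no SSH encryption is involved at all.
mount -t nfs DEST_IP:/srv/pool/media /mnt/dest_media
rsync -haP --stats /srv/pool/media/ /mnt/dest_media/
```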

    Reading from the storage (both systems have a 4x4TB soft-RAID5) yields around 300MB/s, occasionally a bit more, which is about the limit of the HDDs. SSD/NVMe syncs are even faster.

    I would check the stats/graphs in "Diagnostics > Performance Statistics > Disk I/O" when rsync'ing to rule out a problem with your storage setup.
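
    The same numbers are visible from a shell, if you prefer the CLI to the GUI:

```shell
# Extended per-device stats every 2 seconds (iostat is in the sysstat
# package). If %util sits near 100% on a pool member while MB/s is low,
# the storage stack, not the network, is the bottleneck.
iostat -x 2

# Without sysstat, the raw counters behind those numbers are in:
cat /proc/diskstats
```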
