I built a new OMV server to take over as my daily driver and convert the existing one into my backup, so my current step is moving all the main data from the old unit to the new one. In doing so, I wanted to test out the new 10Gb NICs that I installed in each, and I'll use this link for all future backups and moves.
iperf3 showed speeds of about 9.2 Gb/s, which I was happy with. However, when running the main rsync command to move a few TB of data from the old system into my new MergerFS pool, I noticed that small files transferred at anywhere from 3 to 7 MB/s and larger files at roughly 15 MB/s. That is obviously very slow; I expected no lower than 100 MB/s.
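For reference, the iperf3 numbers came from the standard server/client pair, pointed at the same 10Gb interfaces the transfer uses (the 30-second duration is just my choice, not anything special):

```shell
# On the source server (10.10.10.15): start the listener
iperf3 -s

# On the destination server: run the client against the 10Gb NIC's IP
# for 30 seconds to get a stable average
iperf3 -c 10.10.10.15 -t 30
```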
My MergerFS pool is built from four Seagate 10TB SAS drives (three data and one parity), with SnapRAID on top of MergerFS. The drives are enterprise SAS at 7200 RPM. The source server runs dual Xeon L5640 CPUs with 32 GB RAM, and the destination an Intel i3-12100 with 32 GB RAM. During the rsync, neither CPU was doing much (both below 7% usage) and RAM was around 20% on each, so nothing seemed to be struggling at all.
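To rule the pool itself in or out, one thing I can do is a plain local sequential write straight to the MergerFS mount (a sketch; the path is the destination pool from my rsync command, and `conv=fdatasync` makes dd flush before reporting a speed so the page cache doesn't inflate the number):

```shell
# Local sequential write test to the destination pool; fdatasync forces
# the data to disk before dd prints its throughput figure
dd if=/dev/zero of=/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ddtest \
    bs=1M count=1024 conv=fdatasync

# Remove the test file afterwards
rm /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ddtest
```

If that reports well above 100 MB/s, the drives and pool aren't the bottleneck and the problem is somewhere in the network path or rsync itself.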
My command running through SSH:
rsync -harvzP --stats root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/
Am I missing something glaringly obvious? What I'm trying to say with that command (which I'm running from the destination server in a tmux session) is: connect to the source server at 10.10.10.15, which is the IP of its 10Gb NIC rather than the server's main interface, get the 'media' folder from my UnionFS pool, and copy that folder's contents into the 'media' folder in this server's MergerFS pool.
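One variant I'm considering (a sketch, not tested yet): drop `-z`, since compression runs on a single core and can bottleneck well below 10Gb line rate even when overall CPU usage looks low, and add `-W` (`--whole-file`) to skip rsync's delta-transfer algorithm, which mostly just costs CPU on a fast local link:

```shell
# Same transfer, but without -z (compression) and with -W (--whole-file),
# which skips the delta-transfer algorithm; both usually help on a fast LAN
rsync -harvWP --stats \
    root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ \
    /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/
```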
I also ran this through the Rsync module in the OMV web GUI as a task, just to see the difference, and got closer to 50 MB/s. That still seems low, but at least the native GUI module runs faster than the SSH version.
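My guess is the GUI task might be using the rsync daemon protocol instead of SSH, which skips encryption entirely. From the CLI that would look something like the following, assuming a daemon module were set up on the source ('media' here is a hypothetical module name I have not actually configured):

```shell
# Hypothetical: pull over the unencrypted rsync daemon protocol instead of SSH.
# 'media' is a made-up module name that would have to be defined in
# /etc/rsyncd.conf on the source box before this works.
rsync -harvP --stats \
    rsync://10.10.10.15/media/ \
    /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/
```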
Am I stuck with these speeds or is there a better way to do this?