Totally agreed. But all that taken into consideration, upgrading my current setup, which connects my Mac to the OMV's NVMe storage over 2.5GbE, to 10GbE will improve the speed considerably. Doing so with a 40Gbps Thunderbolt connection should enhance it even more, to a potential 4x (real-world speed to be seen), and without the cost of a new switch, and the NAS sits right beside my Mac…
Not necessarily.
As an example, we have a large half-petabyte SAN built from 4 spinning-disk chassis totaling 64 Exos HDDs, with a 24-drive SSD cache tier on top for increased bandwidth. It has Linux, Windows, and Mac clients connected.
One of those clients is an M1 Ultra Mac Pro. It has a 100Gbps connection to our SAN via an ATTO FastFrame card. We were not seeing great speeds from that card (around 1500MBps, or 12,000Mbps), so as a test we swapped to an ATTO ThunderLink Thunderbolt-connected box with a 25Gbps connection and are getting 1200MBps, or 9600Mbps. That's 1/4 the maximum connection bandwidth, but 4/5 of the speed reached on the 100Gbps connection.
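To make the unit conversions and utilization figures concrete, here's a quick back-of-the-envelope check in Python using the numbers quoted above (variable names are just illustrative):

```python
def mbytes_to_mbits(mb_per_s):
    """Convert MB/s to Mb/s (1 byte = 8 bits)."""
    return mb_per_s * 8

# Figures quoted in this post
fastframe_100g = mbytes_to_mbits(1500)    # 12000 Mbps on the 100Gbps link
thunderlink_25g = mbytes_to_mbits(1200)   # 9600 Mbps on the 25Gbps link

# Fraction of raw link bandwidth actually delivered
util_100g = fastframe_100g / 100_000      # 0.12 -> 12% of the 100Gbps link
util_25g = thunderlink_25g / 25_000       # 0.384 -> ~38% of the 25Gbps link

# 1/4 the link bandwidth, but 4/5 of the throughput
speed_ratio = thunderlink_25g / fastframe_100g   # 0.8
```

The utilization math is why the 25Gbps ThunderLink looks so much better relative to its link: ~38% of 25Gbps versus only 12% of 100Gbps, which also matches the "about 40% at best" figure below.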
Linux clients with the same 100Gbps connection can get 2 to 3 times that speed, and Windows clients with the same 100Gbps connection are about 3/4 of it.
These are all iSCSI connections, to minimize hardware overhead. None of them can saturate the 100Gbps links. Linux might be able to saturate the 25Gbps connection, but nothing else can; at best it's more like 40% of the connection bandwidth on the other OSes.
iperf tests show much higher "normal" numbers through the network switches, and the math says the storage chassis should be able to keep up with the bandwidth, but in this case the bottleneck is in how the OSes handle the connection.
There is more than just the bandwidth math to consider.