Hi all... anything I should know about 10GbE NICs, especially which controllers will or won't work in OMV? I've seen Intel, Marvell...
Thanks!
Look for anything that works on Linux. I currently have a Solarflare SFC9020 but previously had Mellanox. At work, we use Emulex primarily, but also QLogic and Intel.
Fine, so I'll look for Intel; there are plenty. Now, something I can't find an answer to: will a direct connection between the OMV NAS and a Mac over Thunderbolt work?
Never tried since I don't have two systems with thunderbolt.
Theoretically it would be the ideal way to connect a NAS for photo/video editing: it would give maximum speed, 4x that of 10GbE NICs (with NVMe drives in the NAS), while saving the need for a 10GbE switch. It would greatly enhance OMV for these use cases. Some mainstream brands, like QNAP, do work with Thunderbolt, so I guess it's doable...
There is no reason you can't use Thunderbolt; it just isn't configurable from the web interface. I would need hardware to try, but it might not be hard to add to OMV.
As for the speed, most people don't have hardware that can keep up with faster-than-10GbE networking.
I'll try to get my hands on such an add-on board; if I do, I'll report back. As for hardware, machines with Thunderbolt ports are actually more prevalent than ones with 10GbE, I think; most modern laptops come with one...
Many OMV users are using RPis. But most OMV users don't want to sit that close to their NAS. Mine is in the basement.
You also have to keep in mind that just because there is a Thunderbolt connection, it doesn't mean a system can use all of that bandwidth. A 40Gbps Thunderbolt link, converted to Mbps to keep it on par with storage figures, is 40,000Mbps as a theoretical maximum.
If the workload sits on one drive, a spinning disk will probably max out around 1,200Mbps, a regular SSD around 4,000Mbps, and an NVMe drive probably around 28,000Mbps (all theoretical maximums). From those theoretical maximums you then have to subtract all hardware and protocol overheads, latencies, and bottlenecks, not to mention any limits imposed by the protocols themselves.
If your hardware and protocols can use the full bandwidth, you then have to look at some kind of RAID storage to distribute the workload across enough drives to saturate that connection, preferably using a hardware RAID controller to handle any checksumming for redundancy.
The point is, it doesn't matter how fast your connection is if there is something in the mix that can't keep up. It will only be as fast as the slowest link in the chain.
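The "slowest link in the chain" reasoning above can be sketched numerically. A minimal example, using the thread's rough theoretical maximums (the ~20% protocol-overhead figure is an assumption for illustration, not a measurement):

```python
# End-to-end throughput is capped by the slowest stage in the chain.
# All figures are theoretical maximums in Mbps quoted in this thread;
# the 20% protocol overhead is an assumed illustrative value.

def effective_throughput_mbps(stages):
    """Return the bottleneck: the minimum throughput across all stages."""
    return min(stages.values())

chain = {
    "thunderbolt link": 40_000,             # 40Gbps Thunderbolt
    "single NVMe drive": 28_000,            # rough theoretical maximum
    "protocol ceiling": int(40_000 * 0.8),  # assumed ~20% overhead
}

cap = effective_throughput_mbps(chain)
print(f"bottleneck: {cap} Mbps (~{cap // 8} MB/s)")  # 28000 Mbps, ~3500 MB/s
```

Even with the fastest link available, a single NVMe drive (not the 40Gbps connection) becomes the ceiling in this sketch.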
Totally agreed. But all that taken into consideration, upgrading my current setup (my Mac connected to OMV's NVMe storage over 2.5GbE) to 10GbE will increase the speed considerably. Doing so with a 40Gbps Thunderbolt connection will enhance it even more, to a potential 4x (real-world speed to be seen), and without the cost of a new switch. And the NAS sits right beside my Mac...
Not necessarily.
As an example, we have a large half-petabyte SAN comprising 4 spinning-disk chassis totaling 64 Exos HDDs, with a 24-drive SSD cache tier on top of it for increased bandwidth. It has Linux, Windows, and Mac clients connected.
One of those clients is an M1 Ultra Mac Pro. It has a 100Gbps connection to our SAN via an ATTO FastFrame card. We were not seeing great speeds from the ATTO card (around 1,500MBps, or 12,000Mbps), so as a test we swapped to an ATTO ThunderLink Thunderbolt-connected box with a 25Gbps connection, and are getting 1,200MBps, or 9,600Mbps: 1/4 the maximum connection bandwidth, but 3/4 of the speed reached on the 100Gbps connection.
Linux clients with the same 100Gbps connection can get 2 to 3 times the speed and Windows clients with the same 100Gbps connection are about 3/4 the speed.
This is all over iSCSI to minimize protocol overhead. None of the clients can saturate the 100Gbps connections. Linux might be able to saturate the 25Gbps connection, but nothing else can. At best it's more like 40% of the connection bandwidth on the other OSes.
iperf tests show much higher "normal" numbers through the network switches, and the math says the storage chassis should be able to keep up with that bandwidth, but in this case the bottleneck is in how the OSes handle the connection.
There is more than just the bandwidth math to consider.
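The utilization figures quoted above work out as follows (numbers taken from the post; the only conversion is MBps to Mbps, which is a factor of 8):

```python
# Link utilization from the measured speeds quoted above.
# MBps (megabytes/s) -> Mbps (megabits/s) is a factor of 8.

def link_utilization(measured_mbyte_per_s, link_gbps):
    """Fraction of the link's nominal bandwidth actually achieved."""
    measured_mbps = measured_mbyte_per_s * 8
    return measured_mbps / (link_gbps * 1000)

print(f"Mac on 100Gbps FastFrame:  {link_utilization(1500, 100):.0%}")  # 12%
print(f"Mac on 25Gbps ThunderLink: {link_utilization(1200, 25):.1%}")   # 38.4%
```

The 25Gbps Thunderbolt box reaching ~38% utilization versus ~12% on the 100Gbps card is what makes the point: the nominal link speed was never the limiting factor.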
I would have to agree with BernH. We have IBM FS9200 series arrays (all-NVMe flash) that are rated at 45GB/s (not Gbps), and a system with a 32Gbps Fibre Channel adapter rarely saturates the connection.
😳
Update on this: bought a Thunderbolt card, installed it, installed the Thunderbolt packages using the CLI, created a new Ethernet connection in OMV, connected a cable between the server and the MacBook... voilà! It just worked, and I'm getting 5x the speed I was getting over 2.5GbE. Couldn't be happier 🤗 For editing photos and video, there's no need to spend on 10GbE infrastructure. Perfect for my use case...
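For anyone wanting to try the same, the CLI steps would look roughly like this on a Debian-based OMV install. This is a sketch under assumptions: the `bolt` package name and the `thunderbolt0` interface name are common defaults but may differ on your system, and the IP address is just an example.

```shell
# Assumed steps; verify package and interface names on your own system.
sudo apt-get install bolt          # Thunderbolt device manager (provides boltctl)
boltctl list                       # check that the connected Mac is authorized
sudo modprobe thunderbolt-net      # IP-over-Thunderbolt kernel module
ip link                            # a thunderbolt0 interface should now appear
# Assign it an address (10.0.0.1/24 here is only an example), or configure
# the new interface through the OMV web UI instead.
sudo ip addr add 10.0.0.1/24 dev thunderbolt0
```

On the Mac side, the Thunderbolt bridge interface gets an address in the same subnet, and the shares are then reachable over that link.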