Hello.
My name is Pawel and I'm a new user of OMV.
I have a brand new installation on my Dell R510 (2 x X5675, 32GB RAM, LSI 9260-8i, 2 x BCM5716).
It runs RAID 10 - 6 x HGST HUA72303, 7200rpm, 64MB cache.
Local performance is excellent:
hdparm -tT: Timing buffered disk reads: 1320 MB in 3.00 seconds = 439.48 MB/sec
dd read with bs=1000M count=100: (105 GB) copied, 273.952 s, 383 MB/s
dd write with bs=1000M count=100: (105 GB) copied, 274.508 s, 382 MB/s
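For anyone who wants to compare, a sketch of how such local numbers can be reproduced (the mount point /srv/raid and the 1 GiB test size are examples, not my exact invocation):

```shell
# Assumed mount point of the RAID10 array -- adjust as needed
TESTFILE=/srv/raid/ddtest

# Sequential write: conv=fdatasync flushes data to disk before dd
# reports the rate, so the page cache doesn't inflate the number
dd if=/dev/zero of="$TESTFILE" bs=1M count=1024 conv=fdatasync

# Drop caches (needs root) so the read test hits the disks, then read back
sync && echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```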
My problem is network related. Before setting up bonding I wanted to check the speed of the network services. In the beginning the server was connected through a Cisco SG 200-18, but because of the speed problems it's now connected directly via a dedicated PCIe NIC port - nothing changed after that.
On my NetXtreme 1Gb NIC I can only get about 60MB/s throughput over iSCSI/CIFS/SCP.
iperf against Linux-based servers reaches about 1Gb/s; my Windows 2003 box (bcm5709c) only 700-800Mb/s.
I've changed the Cat6 cable, disabled all offload features etc. on the NICs, and checked with jumbo frames up to 9000, but throughput only increased from 50MB/s to 60MB/s. I've done a lot of tests with different sysctls, but nothing got me closer to the 100MB/s mark.
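For completeness, this is roughly how the raw network path can be tested independently of iSCSI/CIFS (192.168.1.10 stands in for the OMV box's address, eth0 for the interface in use):

```shell
# On the OMV box: run an iperf server
iperf -s

# On the client: 4 parallel TCP streams for 30 s; a healthy 1Gb link
# should report roughly 940 Mbit/s aggregate
iperf -c 192.168.1.10 -P 4 -t 30

# Verify jumbo frames survive end to end: 8972 bytes payload + 28 bytes
# of headers = 9000; -M do forbids fragmentation, so a failure here means
# some hop (NIC, switch port) is still at MTU 1500
ping -M do -s 8972 -c 4 192.168.1.10

# Confirm the negotiated speed/duplex on the interface
ethtool eth0 | grep -E 'Speed|Duplex'
```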
What else do you suggest to do about that?
For now I dream of just a little bit more than 100MB/s, and finally, with bonding, more than 200. Any help will be appreciated.
Best regards. Pawel
1Gb not as fast as it can be
-
- OMV 1.0
- fakamaka
-
-
Did you try the backports 3.16 kernel? There is a button to install it in omv-extras.
-
No. I will do it in a minute. Keep your fingers crossed.
-
-
Unfortunately nothing has changed :/.
Just for info: I also ran the Dell Diagnostic Distro (CentOS 7 based) on this R510 and the speed was about the same.
I think I'm missing something ... but what ... I probably need to refresh my mind.
P -
My main server is a Dell PowerEdge T410 with dual E5620s. It also uses the Broadcom 5716 NIC. It has no problem hitting 110MB/s over Samba. It is still running the backports 3.14 kernel. Strange that you are having speed issues.
If all else fails, drop a pci-e intel adapter in it.
-
Does ifconfig indicate any packet errors during transfer?
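In case it's useful, the same counters can also be read from sysfs and watched while a transfer runs (eth0 is just an example name - substitute the interface in use):

```shell
# Error/drop counters for the interface; rx_errors, tx_errors or the
# *_dropped counters growing during a transfer point at the NIC, driver
# or cabling rather than the protocol layer
IFACE=eth0
for c in rx_errors tx_errors rx_dropped tx_dropped collisions; do
    printf '%s: %s\n' "$c" "$(cat /sys/class/net/$IFACE/statistics/$c)"
done
```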
-
-
@subzero79 :
RX packets:35131337 errors:0 dropped:0 overruns:0 frame:0
TX packets:2760347 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:52390375526 (48.7 GiB)  TX bytes:1139671967 (1.0 GiB)

@ryecoaaron :
It's strange to me too. I thought about that, especially because there is Broadcom on both ends.
The second, non-dedicated card is connected via a switch with a Marvell chipset, with the same results, so I want to make sure that this Intel NIC is my plan ... Z ;). -
I have had such issues in the past.
It turned out that it was always:
a) the switch, or
b) the NICs
which weren't up to the task, or were broken....
-
Why not troubleshoot by connecting both machines directly, without any device between them?
-
-
Sometimes cheap switches built into routers do not play well. If you buy a good switch it will often eliminate problems with flaky NICs. I am along the same lines as blublub. Maybe even a bad cable.
-
Hello.
@edwardwong As I mentioned in my first post - this is how it's connected now, to minimize points of failure.
@tekkb - that's why I use this Cisco SG 200-18 - it's a dedicated managed device for this, but the throughput is a problem both with and without it.
During this weekend I installed FreeNAS and Windows 2012 on this R510 to make sure this is not software/driver related. On both of them <=60MB/s was the maximum, so I've made the decision to add a new Intel NIC.
Best regards. Pawel -
I understand, but now it's troubleshooting: try a direct connection to isolate the problem first; once you've figured out what's wrong, you can revert as you wish.
-
-
but throughput only increased from 50MB/s to 60MB/s. I've done a lot of tests with different sysctls but nothing got me ...
It's a normal speed for ext4 + OMV. If you want more speed, quickly set up bonding:
[iurl=http://img-fotki.yandex.ru/get/15484/22696312.0/0_16c5a3_8e7e4fd2_orig.jpg][/iurl]
[iurl=http://img-fotki.yandex.ru/get/6817/22696312.0/0_16c5a4_63d0d52d_orig.jpg][/iurl]
the actual speed of the server:[iurl=http://img-fotki.yandex.ru/get/15489/22696312.0/0_16c5a5_a4378a62_orig.jpg][/iurl]
I plan to return to the zfs. )
-
It's a normal speed for ext4 + OMV. If you want more speed, quickly set up bonding:
You can test disk read/write: if you can reach 120MB/s on a single disk, you should be able to reach 110MB/s over a single NIC. He might do bonding, but then again he needs a dual link in his client (laptop) to reach 220MB/s, and a disk capable of writing at that speed (SSD). But he does have hardware (the switch) capable of link aggregation.
I have a Realtek card (the cheapest one they ship these days on motherboards) and I do 110MB/s. To rule out NIC issues the best option is always to get an Intel card.
-
Ugghhh, it took some time and the conclusions are not clear.
I've changed the NIC to an Intel 6 x 1Gb but nothing changed. Now my OMV is bonded with the Cisco again and I can finally saturate this bond at ~250MB/s.
What helped - frankly speaking, I have no .... idea!! Even with the bond I had a problem sending more than 80MB/s!
Today I was testing bond options - changing the bond mode from 802.3ad to the other options and checking throughput (IO/network). When I changed it to active-passive I got an error after applying the settings. All connections were broken and I had a problem reaching the box over the network. Restarting the networking service etc. didn't solve it, so, since this is not a production server, I just rebooted. After the reboot everything was working, but performance was still about the same, so I changed back to 802.3ad and, some magic, BUM - without any tuning, now everything is working well. I'm happy and dizzy - I hate situations where magic happens, because I don't know the day it will roll back.
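For anyone hitting the same thing: whether the 802.3ad aggregator actually formed with the switch can be checked from the kernel's view of the bond (bond0 is the assumed bond name):

```shell
# 802.3ad only works if the switch side has LACP enabled on the port
# group; this shows the mode, per-slave MII status and aggregator IDs
# (slaves in different aggregators = the LAG never formed)
grep -E 'Bonding Mode|MII Status|Aggregator ID' /proc/net/bonding/bond0

# The transmit hash policy decides how flows are spread over the links;
# layer3+4 hashes on ports too, so parallel TCP streams can use both
cat /sys/class/net/bond0/bonding/xmit_hash_policy
```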
So now it's time to do some AD integration - in FreeNAS it works great out of the box. Is there any good practice for that :D?
Best regards. Pawel -
-
You got really lucky! That is the only thing I can say!