iSCSI Extremely Slow

    • Hello everyone,

      Is there a reason why iSCSI would be really slow on OMV? Compared to FreeNAS, where I could use Windows Server Backup to back up a 500 GB drive in a day, on OMV it takes a solid two days to back up 240 GB and then the backup times out.

      I've just tried doing the same backup to an SMB/CIFS share and it's going much faster. Could I be doing something wrong in the iSCSI settings, or is it just slow on OMV?

      Thanks.
    • I'm sure it is a setting, because this is the first I have heard about iSCSI being slow.
      omv 4.1.6 arrakis | 64 bit | 4.16 backports kernel | omvextrasorg 4.1.6
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

    • ryecoaaron wrote:

      I'm sure it is a setting because this is the first I have heard about iscsi being slow.


      Would you happen to know which setting? I'm using defaults right now.

      Just ran a test where I transferred a 10 GB SQLIO-generated file to the iSCSI target drive; it would fluctuate from 2 MB to 9 MB per second, according to Windows. Tried it on the SMB/CIFS share and it transferred at around 100 MB per second. There's got to be something I'm missing, but I have no idea what.
    • shadowzero wrote:

      Let me take a look at my settings and see if I have anything different from the default. For your windows machine do you have mpio enabled? Is the iscsi traffic on the same lan network with the rest of your pcs or do you have it on a vlan or separate…


      I do not have MPIO enabled and the iSCSI traffic is on the same LAN as the rest of my PCs, but I have a 4-port 10 Gb Ethernet card and those ports are bonded.

      Excuse me if I'm wrong but would any of that matter if I'm getting the speeds I want using the SMB/CIFS share and I got good speeds using FreeNAS' iSCSI right before this? I'm willing to enable MPIO but since it was never enabled before, I'm not sure how big of a difference it would make, you know?
    • I was curious to see if you were using MPIO and what your network connection was like. You don't have to enable MPIO. I won't go into detail about how it works, but if you're interested in learning about it, have a look here: technet.microsoft.com/en-us/library/ee619734(v=ws.10).aspx
      Give these settings a try and see if it helps. Edit your target properties. Take a screenshot of your current settings in case you need to change them back.

      Change your max connections from 1 to 4.
      Change your max sessions from 0 to 8.
      Enable immediate data.
      Change your max outstanding R2T from 1 to 8.
      Set the NOP interval to 20.
      Set the NOP timeout to 30.

      See if that gives you any improvement.
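If the web UI doesn't expose all of these knobs, they correspond to per-target options in iSCSI Enterprise Target's /etc/iet/ietd.conf, which is what OMV's iSCSI Target plugin has historically wrapped. A rough sketch of the tuned stanza, with a placeholder IQN and LUN path (OMV normally regenerates this file itself, so treat this as illustration only):

```
# /etc/iet/ietd.conf -- illustrative only; IQN and LUN path are placeholders
Target iqn.2018-01.org.example:storage.backup
    Lun 0 Path=/dev/sdb1,Type=blockio
    MaxConnections    4     # was 1
    MaxSessions       8     # was 0
    ImmediateData     Yes
    MaxOutstandingR2T 8     # was 1
    NOPInterval       20
    NOPTimeout        30
```

The initiator negotiates most of these at login, so the Windows side must reconnect (or the session must be logged out and back in) before the new values take effect.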
      ShadowZero -- OMV Fan since 0.3
    • Yeah, I'll probably try it out actually. Thank you.

      So, one of those settings helped, but only a bit. It'll start off transferring at around 90-100 MB/s, but after about 10 or 20 seconds it'll drop down to around 10 MB/s again. I can tinker around and see what might be causing this, but if you have any thoughts, that'd be nice.
    • shadowzero wrote:

      Are your OMV and Windows machines both on 10 Gb? Also, I am curious why you bonded four 10 Gb ports together. What mode did you choose? Are you using any other ports for OMV, or is everything on the 4-port 10 Gb card? Do you also have a 10 Gb switch?

      I'm using 802.3ad as the bonding mode. The servers may only be on 1 Gb, but I still have the same issues when using the onboard NIC, and I've also tried a single-port 1 Gb external NIC. Do you suggest I stop using bonding, or use a different mode?
    • I would suggest this to troubleshoot. Connect a NIC on OMV that is not bonded and give it a static IP of 192.168.20.1 with a netmask of 255.255.255.0 (/24). This is your iSCSI target. Add a NIC to the Windows machine and give it an IP of 192.168.20.2 with the same netmask. Connect the two machines directly with a separate network cable, not connected to the rest of your network. Map your initiator to use 192.168.20.1 as the target. See if you're getting the same results. Let me know what the results are.
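Before re-testing iSCSI over the direct link, it's worth ruling out the link itself by measuring raw TCP throughput with iperf3 (assuming it's installed on both ends; the interface name below is a placeholder):

```shell
# On OMV (target side) -- eth1 is a placeholder for the unbonded NIC
ip addr add 192.168.20.1/24 dev eth1
iperf3 -s

# On the Windows side, with 192.168.20.2 on the direct NIC:
#   iperf3.exe -c 192.168.20.1 -t 30
```

A healthy 1 GbE link should report roughly 940 Mbit/s of TCP throughput. If iperf3 is already slow, the problem is the link, cabling, or NIC, not the iSCSI configuration.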
    • I agree: you are likely filling up the cache, and then your speed drops. That is why it starts at about 100 MB/s and then slows to 10 MB/s. iSCSI has no compression, and the expected transfer rate will be even lower once you take protocol overhead (Ethernet, IP, and iSCSI) into account. I would have expected the isolated connection to perform better, but in your case it didn't. I'll do some more testing on my end and fine-tune some of the settings.
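To put numbers on the overhead point above: a rough best-case calculation (Python, standard 1500-byte MTU, ignoring iSCSI PDU headers and TCP ACK traffic, so the real ceiling is slightly lower) of payload throughput on plain gigabit Ethernet:

```python
# Rough best-case payload throughput for iSCSI over TCP on 1 GbE.
MTU = 1500                      # bytes of IP packet per Ethernet frame
ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20        # basic IPv4 + TCP headers, no options

wire_bytes_per_frame = MTU + ETH_OVERHEAD   # 1538 bytes on the wire
payload_per_frame = MTU - IP_TCP_HEADERS    # 1460 bytes of useful data
efficiency = payload_per_frame / wire_bytes_per_frame

gbe_bytes_per_sec = 125_000_000             # 1 Gbit/s line rate
ceiling_mb_per_sec = gbe_bytes_per_sec * efficiency / 1e6

print(f"efficiency = {efficiency:.3f}")          # ~0.949
print(f"ceiling    = {ceiling_mb_per_sec:.1f} MB/s")
```

So a sustained burst near 100 MB/s is already close to the practical 1 GbE ceiling of roughly 118 MB/s; the later drop to 10 MB/s points at the disk or write cache rather than the protocol.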