Multi-channel support via SMBv3

    • Multi-channel support via SMBv3

      Just curious what the status of multi-channel support in Samba for OMV 3 might be? I've been trying to research the implementation in the Linux kernel, but I can't seem to find any concrete answers.

      This question was previously asked here: SMB3 when?
      But the question wasn't really answered and I didn't want to necropost.

      I'd really like to consolidate my current gaming rig's hard drives and hopefully put all my games on a separate share on the OMV NAS. I'm currently getting 100+ MB/s transfer speeds in tests with this tool: 808.dk/?code-csharp-nas-performance. If I could trunk/bond/aggregate/team a few more ports, I'd be a happy camper. My gaming rig currently runs Windows 10.

      Thanks!
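      A bonding setup like the one hinted at above might look like this on Debian; the interface names and address below are made up, and note that a single SMB session is still one TCP stream, so a bond mainly helps many simultaneous clients rather than one big transfer:

```shell
# Sketch only: an 802.3ad (LACP) bond as it might appear in
# /etc/network/interfaces. Written to a temp file here, not the
# live config; eth0/eth1 and the address are assumptions.
conf=$(mktemp)
cat <<'EOF' > "$conf"
auto bond0
iface bond0 inet static
    address 192.168.1.10/24
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
EOF
grep -c 'bond-' "$conf"   # → 3
rm -f "$conf"
```

      In a real deployment this would be merged into /etc/network/interfaces, with the ifenslave package installed and a matching LACP group on the switch.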
    • Never heard of multi-channel support. It looks like a very experimental feature, and from what I've read it is a Samba feature, not a kernel feature.

      That said, OMV uses Debian packages, so if it is added to Debian, it will be added to OMV automatically. OMV 3.x is based on Debian Jessie, which uses Samba 4 by default and should give you SMB3. Some users have successfully used Samba 4 on OMV 2.x from the backports repo.

      Putting your game files on the NAS would be better done with iSCSI, not Samba, in my opinion. Probably too much latency with Samba.
      omv 4.1.19 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
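      For reference, the multi-channel code in Samba 4.4+ sits behind an experimental smb.conf option. A minimal sketch (written to a temp file here rather than the live config):

```shell
# Sketch only: the settings a recent Samba would need for SMB3 with
# the experimental multi-channel feature enabled.
conf=$(mktemp)
cat <<'EOF' > "$conf"
[global]
    server min protocol = SMB2
    server max protocol = SMB3
    # EXPERIMENTAL: off by default; needs Samba >= 4.4
    server multi channel support = yes
EOF
# In a real deployment, merge this into /etc/samba/smb.conf and
# validate with `testparm` before restarting smbd.
grep -c 'multi channel' "$conf"   # → 1
rm -f "$conf"
```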
    • Each iSCSI connection presents a block device (basically like a hard drive). So, if the files were on different "hard drives", I would guess that would work, but I don't know for sure. Try it :) 10Gb ports would definitely be better.
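      To illustrate the "block device" point: on the server side a LUN is commonly backed by nothing more than a file (or a zvol), which the initiator then sees as an empty raw disk. A sketch with a made-up path and size:

```shell
# Create a sparse 10 GiB file that could back a hypothetical iSCSI
# LUN; the initiator would see it as a blank 10 GiB disk.
truncate -s 10G /tmp/games-lun0.img
stat -c %s /tmp/games-lun0.img   # → 10737418240 (sparse, so little real space used)
rm -f /tmp/games-lun0.img
```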
    • I'm very much a newb when it comes to this! First I need to Google the proper way to set up iSCSI!

      So (just a random idea): if I set up 2 x iSCSI targets as physical devices, then I can potentially set them up in a software stripe in Windows. This would require each iSCSI target to have its own disk and its own RJ45 interface properly set up. Is this at all possible? Also, is there a way to carve out a section of my current ZFS pool rather than separating out disks to allocate to iSCSI?
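      Assuming ZFS stays in the picture, carving space out of the pool would normally mean creating a fixed-size block volume (zvol) rather than dedicating whole disks. The pool name "TPool" is from this thread; the volume name and size below are made up, and the command is only echoed in case ZFS isn't installed:

```shell
# Sketch: a zvol carved out of the existing pool, usable as a raw
# iSCSI LUN. Volume name and size are assumptions.
pool=TPool
vol=games0
echo "zfs create -V 200G $pool/$vol"   # the command one would run
# The resulting zvol appears as a device node under /dev/zvol:
echo "/dev/zvol/$pool/$vol"   # → /dev/zvol/TPool/games0
```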
    • I have used very little iSCSI, especially in a setup like that. Maybe @shadowzero can answer that?? On the server side, I don't think you need separate disks for each iSCSI target. Not sure if you can use a ZFS pool with the iSCSI plugin either. Maybe @nicjo814 can answer that one?
    • Hmm, I'm not sure what I'm missing. I have a ZFS pool called TPool that is a stripe of mirrors (the equivalent of RAID10) and is currently referenced by several shares. When I go to the iSCSI plugin, I allow my username "johndoe86x" for the incoming and outgoing transfer modes. I set the identifier and alias to test, but when I go to add a LUN, there's no option to add anything.
    • johndoe86x wrote:

      So (just a random idea) if I set up 2 x iSCSI targets as physical devices, then I can potentially set them up in a software stripe in Windows.


      Hey, sorry for the late reply. Any iSCSI targets you create will initially appear to the initiator as raw disk space. So yes, you could do what you mentioned: stripe the targets if you want to. Just make sure the permissions on each target you create allow the initiator to access them.
      ShadowZero -- OMV Fan since 0.3
    • @nicjo814

      Ah, that's brilliant. Thanks for your help. I will definitely work on this later tonight!

      Edit: I'm tickled with the initial tests. Going through a single LUN with one NIC yields 110-117 MB/s read/write speeds. Once I get dual-port NICs in the host and client machines, I'll run more tests to see whether throughput doubles when the LUNs are striped on the client side.
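      As a sanity check on those numbers: a single gigabit link tops out around 125 MB/s, so 110-117 MB/s is already near line rate, and the best case for two striped links is roughly double that:

```shell
# 1 Gbit/s = 1000 Mbit/s; divide by 8 bits/byte for MB/s.
echo $((1000 / 8))       # one 1 Gbit/s link → 125 MB/s ceiling
echo $((2 * 1000 / 8))   # ideal ceiling for two striped links → 250
```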
