[OMV3] Need advice for iSCSI target

    • OMV 3.x


    • [OMV3] Need advice for iSCSI target

      Hello,

      I'd like first to thank everyone involved in development and support for OMV for their good job.

      So far I've been using two Synology NAS units at home to provide basic services (CIFS/NFS/iSCSI/backup) and some virtualization.

      As one of my NAS units needs to be "refreshed", I decided to try something more open and bought an HP MicroServer Gen8, upgraded with 8 GB of RAM and a Core i3. OMV 3.0.64 is installed on a dedicated 32 GB SSD, and I've created a RAID 5 array (software md, not hardware) from 4x 2 TB disks.
      I've installed VirtualBox based on ryecoaaron's packages (kernel 4.9.0 bpo), as I need a Proxmox instance running on each NAS purely for cluster quorum (2 physical nodes + 2 virtual nodes), and I've started syncing data from the soon-to-be-decommissioned NAS.

      My OMV is almost ready, but I still need to configure iSCSI. I installed the plugin and went to create the target/LUN, but ... there's no "File IO" option for the LUN, only "Block IO" :(

      I used the whole disks to create the RAID 5, so I don't have space left to create a dedicated volume/device for iSCSI LUN creation ...
      I tried to manually create my LUN with "Type=fileio", but it looks like a module is missing from my kernel:

      Mar 6 09:18:17 omv1 iscsitarget[1208]: Starting iSCSI enterprise target service:modprobe: FATAL: Module iscsi_trgt not found.
      Mar 6 09:18:17 omv1 iscsitarget[1208]: failed!

      root@omv1:~# modprobe iscsi_trgt
      modprobe: FATAL: Module iscsi_trgt not found.


      Now here's my question:

      I want/need to have both VirtualBox and iSCSI running on my OMV box. I don't really want to destroy my populated RAID 5 in order to free space for iSCSI in block mode. What do you suggest?

      - Is it possible to free space without destroying the RAID 5?
      - Is there a kernel I could use that supports VirtualBox 5.1.x and includes the iscsi_trgt module?
      - Any chance of "File IO" becoming available in the openmediavault-iscsi plugin?

      Thanks a lot in advance for your help!

      Olivier
    • Please have a look here.
    • Hi votdev,

      Does that mean that iSCSI won't work at all with a 4.x kernel? Even if I had a free device for LUN creation?
      What about VirtualBox 5.1.x with the 3.16 kernel? Will it work? And BTW, I'll still have to use ietadm outside of OMV if I want "File IO" LUNs, right?

      Cheers
    • OK, here's what I've done so far, based on the link you sent me:

      - Installed the 3.16 kernel/headers and booted into it
      - Reinstalled virtualbox-dkms 5.1.14 in order to build the modules for 3.16
      - Started the VirtualBox plugin (and the VM)
      - Used ietadm to create a File IO LUN
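      For reference, that last ietadm step can be sketched roughly like this. The backing file lives on the existing RAID 5 filesystem, so no dedicated block device is needed. The path, size, target id, and IQN below are made-up examples, and the ietadm calls themselves need root with ietd running, so they are shown commented:

      ```shell
      # A sparse backing file only consumes real disk space as data is
      # written to it, so it can sit on the already-populated RAID 5.
      truncate -s 10G /tmp/iscsi-lun0.img
      ls -lsh /tmp/iscsi-lun0.img   # apparent size 10G, almost nothing allocated

      # With the iscsi_trgt module loaded, register a target and attach
      # the file as a fileio LUN (placeholders throughout):
      #   ietadm --op new --tid=1 --params Name=iqn.2017-03.local.omv1:lun0
      #   ietadm --op new --tid=1 --lun=0 \
      #       --params Path=/tmp/iscsi-lun0.img,Type=fileio
      ```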

      So basically I have everything I need now, but what about the "sustainability" of my NAS? With the VirtualBox plugin being discontinued and iSCSI File IO not available by default, is OMV the distribution that fits my needs?
      I don't know how open-minded the forum is, but what distribution would you suggest in case OMV is not "the one for me"?

      Thanks!
    • Belokan wrote:

      Does that mean that iSCSI would not work at all with 4.x kernel ? Even if I have had a free device for LUN creation ?
      From my tests (it's been a few weeks), the iSCSI module won't compile against any of the 4.x kernels in the Debian repos. It has nothing to do with free devices.
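      (To check what a given box is actually running, these read-only commands are safe anywhere: the first prints the running kernel version, the second looks for an iet module built for that kernel and prints nothing if none exists.)

      ```shell
      # Running kernel version -- the iet module only compiles against 3.16.x
      uname -r

      # Look for an iet target module built for this kernel (no output = none)
      find /lib/modules/"$(uname -r)" -name 'iscsi_trgt.ko*' 2>/dev/null || true
      ```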

      Belokan wrote:

      What about VirtualBox 5.1.x with 3.16 kernel ?
      Should work fine.

      Belokan wrote:

      basically I have everything I need now but what about the "sustainability" of my NAS ?
      That should work well for a long time, I would think.

      Belokan wrote:

      With the VBox plugin being discontinued
      The other maintainer stopped supporting it, but I have kept supporting it since so many people want it. All of the VirtualBox issues have nothing to do with the plugin; they are issues with the virtualbox package itself. We can't control what happens with the virtualbox package, and there is no fix the plugin can make.

      Belokan wrote:

      I don't know how open-minded the forum is but what distrib would you suggest in case OMV is not "the one for me" ?
      I think OMV should work just fine. Any service that compiles kernel modules (iSCSI, VirtualBox, ZFS) is at risk when a new kernel version is released (4.8 to 4.9, not minor updates). You might consider virtualizing OMV under Proxmox or ESXi; I do this. It eliminates the need for VirtualBox (and is quite a bit faster). I have a separate virtual machine just for iSCSI.
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Hello Aaron,

      Thanks for your detailed answer. I didn't realize that 3.16 was the default kernel for OMV 3.0; I thought it was based on 4.x ... So no problem there.
      And if you don't see any potential issue with configuring and managing iSCSI LUNs "outside of OMV", that's fine with me.

      Now, regarding your last sentence: I know it's outside the scope of the original question, but could you be more specific about your configuration?

      Right now I have 2 physical Proxmox nodes on dedicated hardware and 2 NAS units serving NFS and iSCSI to the PM cluster, each also hosting a virtual Proxmox node (in VirtualBox) used for quorum. This way I can use HA on a PM group based on the 2 physical nodes, and I can "lose" one physical node and still have a valid quorum of 3 votes.

      I can't do anything else with the Synology NAS, but I could convert the ProLiant running OMV into a 3rd PM node. Then create a VM locally on it with direct access to the 4x SATA disks and install OMV in that VM? This virtualized OMV would then serve NFS/CIFS for my network (PM nodes, Windows shares, media server), with another VM used as a dedicated iSCSI target?

      I can imagine how it would work, and mostly how I could maintain it ... but it has a sort of "Inception" taste, don't you think? So please give me more detail if you can find the time; I'd really appreciate your input on this configuration!

      Cheers,

      Olivier
    • Belokan wrote:

      Then create a VM localy on this one with direct access to the 4x SATA disks and install OMV on it ? And then this virtualized OMV will serve NFS/CIFS for my network (PM nodes, Windows shares, media server) and another VM used as dedicated iSCSI target ?
      Yes, this is pretty much what I was thinking. I don't know if you need a separate VM for iSCSI, but that is what I do. The iSCSI VM doesn't supply storage to any VM on the same node, though (although that works). My main fileserver is a VM on an ESXi cluster that has some VMDK storage and four SATA disks passed through via RDM.
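      For the disk-passthrough part, a rough sketch of how raw SATA disks can be handed to an OMV guest on a Proxmox node (the VM id, bus slot, and disk serial are placeholders; the qm call must run as root on the PVE host):

      ```shell
      # Identify the disks by their stable by-id names, then attach one to
      # the guest. Placeholders: VM id 100, disk serial "EXAMPLE".
      if command -v qm >/dev/null 2>&1; then
          ls -l /dev/disk/by-id/ | grep 'ata-'
          qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD20EFRX-EXAMPLE
      else
          echo "qm not found: run this on a Proxmox VE host"
      fi
      ```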

      Belokan wrote:

      But it has sort of "inception" taste don't you think ?
      That is kind of what I was thinking about your running Proxmox in VirtualBox for your third node :)