Posts by shadowzero

    I would suggest this to troubleshoot. Connect a NIC on OMV that is not bonded and give it a static IP of 192.168.20.1 with a netmask of 255.255.255.0 or /24. This is your iSCSI target. Add a NIC to the Windows machine and give it an IP of 192.168.20.2 with the same netmask. Connect the two machines directly with a separate network cable that is not connected to the rest of your network. Map your initiator to use 192.168.20.1 as the target. See if you're getting the same results. Let me know what the results are.
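    On the OMV side, a direct-link setup like that could be expressed as a Debian-style interfaces stanza; `eth1` here is only a placeholder for whichever spare, un-bonded NIC you pick:

```
# Illustrative /etc/network/interfaces entry on the OMV box
# ("eth1" is a placeholder for the spare, un-bonded NIC)
auto eth1
iface eth1 inet static
    address 192.168.20.1
    netmask 255.255.255.0
```

    On the Windows side the equivalent is assigning 192.168.20.2 / 255.255.255.0 to the second NIC, e.g. `netsh interface ip set address name="Ethernet 2" static 192.168.20.2 255.255.255.0` (the adapter name is a placeholder for yours).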

    I have a 4-port 10 GB ethernet card and those ports are bonded.


    Are your OMV and Windows machines both using 10GbE? Also, I am curious why you bonded four 10GbE ports together. What mode did you choose? Are you using any other ports for OMV, or is everything on the 4-port 10GbE card?
    Do you also have a 10GbE switch?

    I was curious to see if you were using MPIO and what your network connection was like. You don't have to enable MPIO. I won't go into detail about how it works, but if you're interested in learning about it, have a look here: https://technet.microsoft.com/…ry/ee619734(v=ws.10).aspx
    Give these settings a try and see if it helps. Edit your target properties. Take a screenshot of your current settings in case you need to change them back.


    Change the max connections from 1 to 4.
    Change the max sessions from 0 to 8.
    Enable immediate data.
    Change the max outstanding R2T from 1 to 8.
    Set the NOP interval to 20.
    Set the NOP timeout to 30.
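    If your target is iSCSI Enterprise Target (the backend the OMV iSCSI plugin has used), the settings above map roughly onto ietd.conf directives like the fragment below; the target IQN is just a placeholder for yours:

```
# Illustrative /etc/iet/ietd.conf fragment (IQN is a placeholder)
Target iqn.2015-01.local.omv:storage
    MaxConnections     4    # was 1
    MaxSessions        8    # was 0
    ImmediateData      Yes
    MaxOutstandingR2T  8    # was 1
    NOPInterval        20
    NOPTimeout         30
```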


    See if that gives you any improvement.

    Let me take a look at my settings and see if I have anything different from the default. Do you have MPIO enabled on your Windows machine? Is the iSCSI traffic on the same LAN as the rest of your PCs, or do you have it on a VLAN or a separate network if it is 1GbE?

    Glad to hear you got it working :thumbup: I do have to apologize, as I thought you had already moved to the 3.16 kernel. I would recommend you stay on the backports kernel as well. As for the iSCSI side, I can't speak to it; the plugin won't interface with the compiled sources. As long as you remove the iSCSI plugin if it is installed, you shouldn't have any update issues, but don't take my word for it. ;) Good luck and let us know how your testing progresses.

    I would say yes, it is still in development. As for booting CentOS diskless, the PXE plugin is not capable of doing that on its own, but it will allow you to add the label. However, that doesn't mean you couldn't do it; it would just require you to set it up manually. There are lots of guides and howtos on the internet for this. You will not need to set up the TFTP or PXE half, as this plugin provides that part. There may be some additional configuration.
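    For reference, a diskless CentOS label in pxelinux syntax might look like the sketch below; the kernel/initrd paths and the NFS root address are assumptions about your setup, not something the plugin generates for you:

```
# Illustrative pxelinux.cfg label for a diskless CentOS boot
# (paths and the NFS server address are placeholders)
LABEL centos-diskless
    MENU LABEL CentOS (diskless)
    KERNEL centos/vmlinuz
    APPEND initrd=centos/initrd.img root=/dev/nfs nfsroot=192.168.1.10:/srv/centos-root ip=dhcp
```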

    Well, I have done some digging and I'm not really sure what the problem could be. I also looked up the message "DXE 91", which is not really an error. It is part of the boot process, and this code means "Driver connecting is started". I am beginning to think you may have a faulty card on your hands. I would try to eliminate the card that may be causing the problem. Here are some suggestions you can try. I know it will take some time, but I think this is your best option to rule out any hardware issues.


    Install a single card and test it in each of slots 3, 4, and 6. Start with the card currently in slot 3. Run the sas2ircu list and display 0 commands. See if you get the errors you were seeing before. Check the log as well. If you see the delay you mentioned before, I think you're safe to say that is your culprit.
    Repeat the step above with each card.


    Try it with 2 cards in slots 3 and 4. Use the cards that were in slots 3 and 4. Run the sas2ircu list and display 0 and 1 commands.
    Try it with 2 cards in slots 3 and 6. Use the cards that were in slots 3 and 4. Repeat.
    Try it with 2 cards in slots 4 and 6. Use the cards that were in slots 3 and 4. Repeat.


    Try it with 2 cards in slots 3 and 4. Use the cards that were in slots 3 and 6. Run the sas2ircu list and display 0 and 1 commands.
    Try it with 2 cards in slots 3 and 6. Use the cards that were in slots 3 and 6. Repeat.
    Try it with 2 cards in slots 4 and 6. Use the cards that were in slots 3 and 6. Repeat.
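    Each configuration above boils down to running the same two sas2ircu queries with the controller index adjusted. A rough shell sketch (the real commands are commented out, since they only work with the hardware present):

```shell
# Sketch: query each flashed HBA with LSI's sas2ircu utility.
# Controllers are numbered 0, 1, ... in driver enumeration order.
check_controller() {
  ctrl="$1"
  echo "--- controller $ctrl ---"
  # Uncomment these on the real machine:
  # sas2ircu list              # enumerate all detected controllers
  # sas2ircu "$ctrl" display   # firmware, BIOS, and attached-drive details
}

check_controller 0
check_controller 1   # only when a second card is installed
```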


    The only thing I can suggest with all 3 cards installed is to check whether you're booting in legacy BIOS or EFI. If you're running legacy BIOS, try turning off functions you don't need, like PXE boot for example. You may be filling up the Option ROM allocation table. I don't think this is the case, but I wouldn't rule it out either.


    Sorry for the long post. I hope this helps. If you still run into the same problem, then I am at a loss.

    Can you take a picture and share it so I can see the physical connections? I can help troubleshoot it better that way. Show me the two or three controllers you have and how they are connected to the drives.

    I'm looking at the log now. You have 2 controllers installed? Any reason for that? I assume you have both installed.

    IBM Serveraid M1015 and Dell PERC H200 flashed in IT mode


    The 4th disk doesn't appear to be formatted.


    EDIT: I think I know why you have two controllers installed now. I could be wrong, of course. ;) They may have come with single drive connectors per channel. Might I suggest looking at one of these http://www.satacables.com/12312312312.jpg so you can reduce your controller count by one and support up to eight drives on one controller.