Got iscsi working on stretch with bcache

    • OMV 4.x

    • Got iscsi working on stretch with bcache

      Just thought I'd let everyone know that I set up an iSCSI target and used bcache with an SSD to speed things up. I came over from FreeNAS and wanted to give this a try.
      First I set up a RAID array,
      Set up bcache per this guide: tech-g.com/2017/08/10/bcache-how-to-setup/#comment-346001
      Set up the iSCSI target per this guide: tecmint.com/setup-iscsi-target-and-initiator-on-debian-9/
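For reference, here's the rough shape of those two steps as a command sketch. The device names (/dev/md0 for the RAID array, /dev/sdb for the SSD) and the IQN are placeholders, not anything from the original post; the linked guides cover the real details.

```shell
# Attach the RAID array as the bcache backing device and the SSD as cache.
# make-bcache is from the bcache-tools package; this WIPES the devices!
make-bcache -B /dev/md0          # backing device: the RAID 10 array
make-bcache -C /dev/sdb          # cache device: the SSD
bcache-super-show /dev/sdb       # note the cset UUID it prints
echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # attach cache to backing device

# Export /dev/bcache0 as an iSCSI LUN with tgt (Debian package: tgt).
# The IQN below is an example name only.
cat > /etc/tgt/conf.d/bcache.conf <<'EOF'
<target iqn.2018-01.com.example:storage.bcache0>
    backing-store /dev/bcache0
    initiator-address ALL
</target>
EOF
systemctl restart tgt
```

This is a sketch only; it needs root, real block devices, and the placeholders filled in before it will run.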

      Unfortunately none of it shows up in the GUI, but it works fine. I'm running 6 VMs on a single RAID 10 array.

      Just FYI.
    • Yes, all on the same server.
      I haven't been able to run performance numbers because the transition was a really intense rush.
      I'm just physically trying to recover; it really wasn't that smooth. The main problem to solve is the data churn when the different VMs are seeking all over the disks.
      It obviously wasn't that good by the time I was finishing at 2am, but I'm hoping that as the day goes on the cache will learn what it has to hold.
      In FreeNAS/ZFS the cache and RAM are in a relationship that restricts the amount of SSD you can use. Bcache has the same kind of relationship, except it's more like 50 to 1 instead of 10 to 1.
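A quick back-of-the-envelope on what those ratios mean for cache sizing. The 10:1 (ZFS L2ARC rule of thumb) and 50:1 (bcache) figures are the post's rough numbers, not hard limits, and the 32 GB of RAM is a hypothetical example:

```python
# Rough cache-sizing arithmetic based on the ratios mentioned above.
# The ratios are rules of thumb from the post, not hard limits.
ram_gb = 32            # hypothetical server RAM

zfs_l2arc_ratio = 10   # ~10 GB of L2ARC per 1 GB of RAM
bcache_ratio = 50      # bcache's per-cached-byte RAM overhead is far smaller

max_l2arc_gb = ram_gb * zfs_l2arc_ratio
max_bcache_gb = ram_gb * bcache_ratio

print(max_l2arc_gb)    # 320
print(max_bcache_gb)   # 1600
```

Same box, roughly five times as much usable SSD cache, which is the point the post is making about SSDs being cheaper than RAM.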
      SSDs are so much cheaper than RAM. "I too like to live dangerously" -- I'm out on a limb with this setup in production, but I'm taking daily backups and using cloud services for the important stuff.
      I don't know what OMV's brief and goals are, but Debian is a grouse starting point, better than the other NAS guys stuck in other environments. If you ticked the performance checkbox by using the beta of bcachefs and iSCSI targets, you'd have a real weapon on your hands.

      Just IMO.

    • tkaiser wrote:

      xrstokes wrote:

      SSDs are so much cheaper than RAM.
      And orders of magnitude slower at the same time.
      So far I've not seen a single occurrence where bcache had positive effects on NAS performance (not even with VMs -- for this use case we moved everywhere to a couple of cheap SATA SSDs as one zpool made out of mirrored vdevs and share it via NFS to the vSphere clusters).
      I didn't want to turn off bcache, so I ran a test against the spare single disk I've got versus the iSCSI target. As you can see, the random I/O performance is miles better, and that's what my VMs crave. Why this over ZFS and such? Bcache survives dirty shutdowns like a champ. Even when loaded with "dirty data" on the SSDs, it still flushes that data onto the backing storage after a power outage. I'm using a mirror for the SSDs, but I understand that if I lose both, that dirty data is gone. I run periodic backups for this purpose.
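For anyone wanting to run a similar comparison, here's a hedged sketch of a mixed random-I/O test with fio. The mount point /mnt/iscsi is an assumption; point --filename at a test file on whichever volume you're measuring (never at a raw device holding data you care about).

```shell
# 70/30 random 4K read/write mix against a file on the bcache-backed
# iSCSI LUN (mounted at /mnt/iscsi here -- adjust for your setup).
fio --name=randrw --filename=/mnt/iscsi/fio.test --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
```

Run the same command against the bare spare disk and compare the IOPS lines in the output; this needs fio installed and real storage to test, so it's a sketch rather than something runnable as-is.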
      Images
      • Capture.JPG (benchmark screenshot: random I/O on the single spare disk vs. the iSCSI target)