Lucid's OMV + ESXi Build + Fiber Channel

  • UPDATED 2/14/2014 - Cleaned up and updated FC link


    This is my first post; I thought I'd share what my setup is and how I've adapted OMV to do my bidding. :evil: Most of this is taken from my personal notes, and I tend to over-explain everything to myself. That's also why it's formatted for Dokuwiki... :D


    I've also done OMV with raw mapped drives presented to ESXi locally, AD integration, roaming profiles, iSCSI, etc. This is the current lab setup:


    ===== Hardware =====
    * 1 Dell PE1900, 4 GB RAM, 4x 1.6 GHz, 40 GB SSD, 6x 500 GB HDD, 2x NIC, 1x 4 Gb fiber
    * OMV 0.5
    * 1 HP Pavilion, 6 GB RAM, 4x 2.6 GHz, 40 GB SSD, 3x NIC, 1x 4 Gb fiber
    * ESXi 5.1
    * 1 battery backup for both boxes + gigabit switch + monitor
    * Est. investment: about $800-1000 (scored the 500 GB drives for free)


    ===== The Dell =====
    Essentially a SAN that operates independently of most server duties. This setup was initially just an experiment, but it has been awesome thus far. ESXi works much better when it is presented storage in this manner. It runs OMV 0.5 (Debian 6) with a kernel update to provide Fiber Channel via SCST.
    The six 500 GB drives are in a RAID 10 array. There are two gigabit NICs: one for general purpose, the other a direct line to the HP. NFS is presented to ESXi over the dedicated link (more secure + faster). A 1 Gb LAN connection is OK for most VMs and is very flexible.
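As a rough illustration, the export over the dedicated link might look like the line below in `/etc/exports` (the path and subnet here are made up for the example; OMV normally manages this file through its web UI, so you would configure the shared folder and NFS share there rather than editing by hand):

```
# hypothetical export for the ESXi direct link -- path/subnet are placeholders
/export/vmstore 10.10.10.0/30(rw,async,no_subtree_check,no_root_squash)
```

Restricting the export to the /30 of the direct link is what keeps the NFS traffic off the general-purpose LAN.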


    The 4 Gb fiber connection is used for high-priority VMs, and performance is great! Almost as fast as the SSD in my desktop! These results are from a Win7 VM running on the HP (ESXi) with its datastore set to one of the fiber LUNs stored on the Dell (OMV):
    Queue depth of 4
    4k write: 50 MB/s (~12800 IOPS)
    4k read: 60 MB/s (~15360 IOPS)
    512k r/w and above: maxed out at 411 MB/s sustained


    The 4 Gb fiber connection is saturated with this configuration. Very low latency, and OMV supplies caching! ESXi can even boot from this (haven't tried it yet). Maybe improvements in IOPS could be found with better drives and more RAM?
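The 4k figures above are internally consistent, since IOPS is just throughput divided by block size. A quick sanity check:

```python
def iops(throughput_mb_s, block_kb):
    """IOPS = throughput / block size (MB taken as 1024 KB here)."""
    return throughput_mb_s * 1024 // block_kb

print(iops(50, 4))  # 4k writes at 50 MB/s -> 12800
print(iops(60, 4))  # 4k reads at 60 MB/s -> 15360
```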


    ===== The HP =====
    Simply put, ESXi boots from the 40 GB SSD. The SSD holds the core VMs (router, domain, wiki, etc.). ESXi can then access OMV via NFS or FC. FC storage is presented to ESXi as disk image files.


    ===== FC =====
    OMV makes RAID a no-brainer; getting Fiber Channel to work was more of a chore. The performance compared to gigabit was about what you'd expect: nearly 4x faster, with almost no latency.


    Guide here:
    http://forums.openmediavault.org/viewtopic.php?f=15&t=3510
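The linked guide has the full walkthrough. For orientation only, an SCST setup like this one (file-backed LUNs exported over a QLogic FC target) is typically described by an `/etc/scst.conf` along these lines; the device name, backing file path, and target WWN below are placeholders, not taken from the original post:

```
# illustrative scst.conf sketch -- names, paths, and WWN are placeholders
HANDLER vdisk_fileio {
    DEVICE lun0 {
        filename /media/raid10/fc/lun0.img
        nv_cache 1
    }
}

TARGET_DRIVER qla2x00t {
    TARGET 21:00:00:xx:xx:xx:xx:xx {
        enabled 1
        LUN 0 lun0
    }
}
```

The `vdisk_fileio` handler is what lets OMV's page cache sit in front of the LUN, which is likely where the "OMV supplies caching" benefit above comes from.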


    ===== BBU =====
    I plugged my CyberPower's USB into the Dell. The client they provide is... OK. It works, but it must have been written/translated by someone whose primary language isn't English. After getting acquainted with that, the Dell now SSHes to the HP, queries it for running VMs, and powers them down as defined in the script. After the HP is powered down, the Dell shuts down. On boot, the HP also sends a WOL packet to the Dell.
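The shutdown chain described above can be sketched roughly as follows. This is a minimal sketch, not the original script: the hostname, user, and MAC address are placeholders, and it assumes passwordless SSH keys to the ESXi host, whose `vim-cmd vmsvc` commands list and shut down VMs.

```python
import re
import socket
import subprocess

def parse_vm_ids(getallvms_output):
    """Extract VM IDs from `vim-cmd vmsvc/getallvms` output.

    Each VM row starts with a numeric ID; the header line
    ("Vmid   Name ...") doesn't, so it is skipped.
    """
    ids = []
    for line in getallvms_output.splitlines():
        m = re.match(r"\s*(\d+)\s", line)
        if m:
            ids.append(int(m.group(1)))
    return ids

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6x 0xFF then the MAC 16 times."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + raw * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet (how the HP could wake the Dell)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()

def shutdown_esxi_host(host="192.168.1.2", user="root"):
    """SSH to ESXi, ask each VM's guest OS to shut down, then power off."""
    out = subprocess.check_output(
        ["ssh", f"{user}@{host}", "vim-cmd vmsvc/getallvms"], text=True)
    for vmid in parse_vm_ids(out):
        # power.shutdown requests a clean guest shutdown via VMware Tools
        subprocess.run(["ssh", f"{user}@{host}",
                        f"vim-cmd vmsvc/power.shutdown {vmid}"])
    subprocess.run(["ssh", f"{user}@{host}", "poweroff"])
```

In practice you would call `shutdown_esxi_host()` from the UPS client's low-battery hook on the Dell, wait for the HP to go down, then let the Dell shut itself off; `send_wol("aa:bb:cc:dd:ee:ff")` would go in the HP's startup scripts to wake the Dell again.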


  • Only $1000 for those Dell and HP servers?
    I couldn't believe it!

    I love my beautiful NAS!
    Hardware: Mainboard ASUS H87I-PLUS | CPU Xeon E3-1230 v3 | RAM 2x8 GB | SSD Samsung 120 GB | HDD 1x3 TB + 1x2 TB | Scalability: 5x SATA
    Software: VMware ESXi 5.5 running virtual machines (OMV 0.5.35, Ubuntu, Debian wheezy, Windows 7)

  • It's all about who you know! The PE1900 was a Craigslist find, the HP was bought from a friend, and the 500 GB drives were free. The 40 GB SSDs are older Intels from eBay, and the 4 Gb fiber cards are from eBay as well.

  • Very good. Let me study it tonight.
    Thanks for sharing.
    I am building my NAS; your article came just in time.

